| title | diff | body | url | created_at | closed_at | merged_at | updated_at |
|---|---|---|---|---|---|---|---|
BUG: various bug fixes for DataFrame/Series construction | diff --git a/RELEASE.rst b/RELEASE.rst
index 981fa5bed257d..73f55ca6c3e29 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -22,6 +22,20 @@ Where to get it
* Binary installers on PyPI: http://pypi.python.org/pypi/pandas
* Documentation: http://pandas.pydata.org
+**API Changes**
+
+ - Series will now automatically try to set the correct dtype based on passed datetimelike objects (datetime/Timestamp)
+ - timedelta64 are returned in appropriate cases (e.g. Series - Series, when both are datetime64)
+ - mixed datetimes and objects (GH2751_) in a constructor will be cast correctly
+ - astype on datetimes to object is now handled (as well as NaT conversions to np.nan)
+
+**Bug fixes**
+
+ - Single element ndarrays of datetimelike objects are handled (e.g. np.array(datetime(2001,1,1,0,0))), w/o dtype being passed
+ - 0-dim ndarrays with a passed dtype are handled correctly (e.g. np.array(0.,dtype='float32'))
+
+.. _GH2751: https://github.com/pydata/pandas/issues/2751
+
pandas 0.10.1
=============
diff --git a/doc/source/missing_data.rst b/doc/source/missing_data.rst
index b8f3468f82098..133d83513041e 100644
--- a/doc/source/missing_data.rst
+++ b/doc/source/missing_data.rst
@@ -80,6 +80,23 @@ pandas provides the :func:`~pandas.core.common.isnull` and
missing by the ``isnull`` and ``notnull`` functions. ``inf`` and
``-inf`` are no longer considered missing by default.
+Datetimes
+---------
+
+For datetime64[ns] types, ``NaT`` represents missing values. This is a pseudo-native
+sentinel value that can be represented by numpy in a singular dtype (datetime64[ns]).
+Pandas objects provide interoperability between ``NaT`` and ``NaN``.
+
+.. ipython:: python
+
+ df2 = df.copy()
+ df2['timestamp'] = Timestamp('20120101')
+ df2
+ df2.ix[['a','c','h'],['one','timestamp']] = np.nan
+ df2
+ df2.get_dtype_counts()
+
+
Calculations with missing data
------------------------------
diff --git a/doc/source/v0.10.2.txt b/doc/source/v0.10.2.txt
new file mode 100644
index 0000000000000..cad3eccddd96e
--- /dev/null
+++ b/doc/source/v0.10.2.txt
@@ -0,0 +1,54 @@
+.. _whatsnew_0102:
+
+v0.10.2 (February ??, 2013)
+---------------------------
+
+This is a minor release from 0.10.1 and includes many new features and
+enhancements along with a large number of bug fixes. There are also a number of
+important API changes that long-time pandas users should pay close attention
+to.
+
+API changes
+~~~~~~~~~~~
+
+Datetime64[ns] columns in a DataFrame (or a Series) allow the use of ``np.nan`` to indicate a nan value, in addition to the traditional ``NaT``, or not-a-time. This allows convenient nan setting in a generic way. Furthermore, datetime64 columns are created by default when datetimelike objects are passed (*this change was introduced in 0.10.1*)
+
+.. ipython:: python
+
+ df = DataFrame(randn(6,2),date_range('20010102',periods=6),columns=['A','B'])
+ df['timestamp'] = Timestamp('20010103')
+ df
+
+ # datetime64[ns] out of the box
+ df.get_dtype_counts()
+
+ # use the traditional nan, which is mapped to NaT internally
+ df.ix[2:4,['A','timestamp']] = np.nan
+ df
+
+Astype conversion on datetime64[ns] to object implicitly converts ``NaT`` to ``np.nan``
+
+
+.. ipython:: python
+
+ import datetime
+ s = Series([datetime.datetime(2001, 1, 2, 0, 0) for i in range(3)])
+ s.dtype
+ s[1] = np.nan
+ s
+ s.dtype
+ s = s.astype('O')
+ s
+ s.dtype
+
+New features
+~~~~~~~~~~~~
+
+**Enhancements**
+
+**Bug Fixes**
+
+See the `full release notes
+<https://github.com/pydata/pandas/blob/master/RELEASE.rst>`__ or issue tracker
+on GitHub for a complete list.
+
diff --git a/doc/source/whatsnew.rst b/doc/source/whatsnew.rst
index 6c125c45a2599..646610ecccd88 100644
--- a/doc/source/whatsnew.rst
+++ b/doc/source/whatsnew.rst
@@ -16,6 +16,8 @@ What's New
These are new features and improvements of note in each release.
+.. include:: v0.10.2.txt
+
.. include:: v0.10.1.txt
.. include:: v0.10.0.txt
diff --git a/pandas/core/common.py b/pandas/core/common.py
index b3d996ffd0606..081267df86202 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -654,6 +654,20 @@ def _possibly_cast_to_datetime(value, dtype):
except:
pass
+ elif dtype is None:
+ # we might have a array (or single object) that is datetime like, and no dtype is passed
+ # don't change the value unless we find a datetime set
+ v = value
+ if not (is_list_like(v) or hasattr(v,'len')):
+ v = [ v ]
+ if len(v):
+ inferred_type = lib.infer_dtype(v)
+ if inferred_type == 'datetime':
+ try:
+ value = tslib.array_to_datetime(np.array(v))
+ except:
+ pass
+
return value
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 301ea9d28d001..a863332619acb 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4289,7 +4289,7 @@ def applymap(self, func):
# if we have a dtype == 'M8[ns]', provide boxed values
def infer(x):
- if x.dtype == 'M8[ns]':
+ if com.is_datetime64_dtype(x):
x = lib.map_infer(x, lib.Timestamp)
return lib.map_infer(x, func)
return self.apply(infer)
@@ -4980,7 +4980,7 @@ def _get_agg_axis(self, axis_num):
def _get_numeric_data(self):
if self._is_mixed_type:
num_data = self._data.get_numeric_data()
- return DataFrame(num_data, copy=False)
+ return DataFrame(num_data, index=self.index, copy=False)
else:
if (self.values.dtype != np.object_ and
not issubclass(self.values.dtype.type, np.datetime64)):
@@ -4991,7 +4991,7 @@ def _get_numeric_data(self):
def _get_bool_data(self):
if self._is_mixed_type:
bool_data = self._data.get_bool_data()
- return DataFrame(bool_data, copy=False)
+ return DataFrame(bool_data, index=self.index, copy=False)
else: # pragma: no cover
if self.values.dtype == np.bool_:
return self
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 06281e288021a..f91b1464aa4a7 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -72,17 +72,28 @@ def na_op(x, y):
def wrapper(self, other):
from pandas.core.frame import DataFrame
+ dtype = None
wrap_results = lambda x: x
lvalues, rvalues = self, other
- if (com.is_datetime64_dtype(self) and
- com.is_datetime64_dtype(other)):
+ if com.is_datetime64_dtype(self):
+
+ if not isinstance(rvalues, np.ndarray):
+ rvalues = np.array([rvalues])
+
+ # rhs is either a timedelta or a series/ndarray
+ if lib.is_timedelta_array(rvalues):
+ rvalues = np.array([ np.timedelta64(v) for v in rvalues ],dtype='timedelta64[ns]')
+ dtype = 'M8[ns]'
+ elif com.is_datetime64_dtype(rvalues):
+ dtype = 'timedelta64[ns]'
+ else:
+ raise ValueError("cannot operate on a series without a rhs of a series/ndarray of type datetime64[ns] or a timedelta")
+
lvalues = lvalues.view('i8')
rvalues = rvalues.view('i8')
- wrap_results = lambda rs: rs.astype('timedelta64[ns]')
-
if isinstance(rvalues, Series):
lvalues = lvalues.values
rvalues = rvalues.values
@@ -91,7 +102,7 @@ def wrapper(self, other):
if self.index.equals(other.index):
name = _maybe_match_name(self, other)
return Series(wrap_results(na_op(lvalues, rvalues)),
- index=self.index, name=name)
+ index=self.index, name=name, dtype=dtype)
join_idx, lidx, ridx = self.index.join(other.index, how='outer',
return_indexers=True)
@@ -105,13 +116,13 @@ def wrapper(self, other):
arr = na_op(lvalues, rvalues)
name = _maybe_match_name(self, other)
- return Series(arr, index=join_idx, name=name)
+ return Series(arr, index=join_idx, name=name,dtype=dtype)
elif isinstance(other, DataFrame):
return NotImplemented
else:
# scalars
return Series(na_op(lvalues.values, rvalues),
- index=self.index, name=self.name)
+ index=self.index, name=self.name, dtype=dtype)
return wrapper
@@ -777,7 +788,7 @@ def astype(self, dtype):
See numpy.ndarray.astype
"""
casted = com._astype_nansafe(self.values, dtype)
- return self._constructor(casted, index=self.index, name=self.name)
+ return self._constructor(casted, index=self.index, name=self.name, dtype=casted.dtype)
def convert_objects(self, convert_dates=True):
"""
@@ -1195,7 +1206,7 @@ def tolist(self):
Overrides numpy.ndarray.tolist
"""
if com.is_datetime64_dtype(self):
- return self.astype(object).values.tolist()
+ return list(self)
return self.values.tolist()
def to_dict(self):
@@ -3083,8 +3094,12 @@ def _try_cast(arr):
raise TypeError('Cannot cast datetime64 to %s' % dtype)
else:
subarr = _try_cast(data)
- elif copy:
+ else:
+ subarr = _try_cast(data)
+
+ if copy:
subarr = data.copy()
+
elif isinstance(data, list) and len(data) > 0:
if dtype is not None:
try:
@@ -3094,12 +3109,15 @@ def _try_cast(arr):
raise
subarr = np.array(data, dtype=object, copy=copy)
subarr = lib.maybe_convert_objects(subarr)
+ subarr = com._possibly_cast_to_datetime(subarr, dtype)
else:
subarr = lib.list_to_object_array(data)
subarr = lib.maybe_convert_objects(subarr)
+ subarr = com._possibly_cast_to_datetime(subarr, dtype)
else:
subarr = _try_cast(data)
+ # scalar like
if subarr.ndim == 0:
if isinstance(data, list): # pragma: no cover
subarr = np.array(data, dtype=object)
@@ -3115,7 +3133,14 @@ def _try_cast(arr):
dtype = np.object_
if dtype is None:
- value, dtype = _dtype_from_scalar(value)
+
+ # a 1-element ndarray
+ if isinstance(value, np.ndarray):
+ dtype = value.dtype
+ value = value.item()
+ else:
+ value, dtype = _dtype_from_scalar(value)
+
subarr = np.empty(len(index), dtype=dtype)
else:
# need to possibly convert the value here
@@ -3124,6 +3149,17 @@ def _try_cast(arr):
subarr.fill(value)
else:
return subarr.item()
+
+ # the result that we want
+ elif subarr.ndim == 1:
+ if index is not None:
+
+ # a 1-element ndarray
+ if len(subarr) != len(index) and len(subarr) == 1:
+ value = subarr[0]
+ subarr = np.empty(len(index), dtype=subarr.dtype)
+ subarr.fill(value)
+
elif subarr.ndim > 1:
if isinstance(data, np.ndarray):
raise Exception('Data must be 1-dimensional')
diff --git a/pandas/src/inference.pyx b/pandas/src/inference.pyx
index 41ac1b3f3480f..23e7b26afaad2 100644
--- a/pandas/src/inference.pyx
+++ b/pandas/src/inference.pyx
@@ -265,6 +265,17 @@ def is_datetime64_array(ndarray values):
return False
return True
+def is_timedelta_array(ndarray values):
+ import datetime
+ cdef int i, n = len(values)
+ if n == 0:
+ return False
+ for i in range(n):
+ if not isinstance(values[i],datetime.timedelta):
+ return False
+ return True
+
+
def is_date_array(ndarray[object] values):
cdef int i, n = len(values)
if n == 0:
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 09747ba3f09f0..f32ba325ca3f5 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -47,7 +47,6 @@ def _skip_if_no_scipy():
JOIN_TYPES = ['inner', 'outer', 'left', 'right']
-
class CheckIndexing(object):
_multiprocess_can_split_ = True
@@ -6484,14 +6483,18 @@ def test_get_X_columns(self):
['a', 'e']))
def test_get_numeric_data(self):
- df = DataFrame({'a': 1., 'b': 2, 'c': 'foo'},
+
+ df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', 'f' : Timestamp('20010102')},
index=np.arange(10))
+ result = df.get_dtype_counts()
+ expected = Series({'int64': 1, 'float64' : 1, 'datetime64[ns]': 1, 'object' : 1})
+ assert_series_equal(result, expected)
result = df._get_numeric_data()
expected = df.ix[:, ['a', 'b']]
assert_frame_equal(result, expected)
- only_obj = df.ix[:, ['c']]
+ only_obj = df.ix[:, ['c','f']]
result = only_obj._get_numeric_data()
expected = df.ix[:, []]
assert_frame_equal(result, expected)
@@ -7367,6 +7370,36 @@ def test_as_matrix_numeric_cols(self):
values = self.frame.as_matrix(['A', 'B', 'C', 'D'])
self.assert_(values.dtype == np.float64)
+
+ def test_constructor_with_datetimes(self):
+
+ # single item
+ df = DataFrame({'A' : 1, 'B' : 'foo', 'C' : 'bar', 'D' : Timestamp("20010101"), 'E' : datetime(2001,1,2,0,0) },
+ index=np.arange(10))
+ result = df.get_dtype_counts()
+ expected = Series({'int64': 1, 'datetime64[ns]': 2, 'object' : 2})
+ assert_series_equal(result, expected)
+
+ # check with ndarray construction ndim==0 (e.g. we are passing a ndim 0 ndarray with a dtype specified)
+ df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', 'float64' : np.array(1.,dtype='float64'),
+ 'int64' : np.array(1,dtype='int64')}, index=np.arange(10))
+ result = df.get_dtype_counts()
+ expected = Series({'int64': 2, 'float64' : 2, 'object' : 1})
+ assert_series_equal(result, expected)
+
+ # check with ndarray construction ndim>0
+ df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', 'float64' : np.array([1.]*10,dtype='float64'),
+ 'int64' : np.array([1]*10,dtype='int64')}, index=np.arange(10))
+ result = df.get_dtype_counts()
+ expected = Series({'int64': 2, 'float64' : 2, 'object' : 1})
+ assert_series_equal(result, expected)
+
+ # GH #2751 (construction with no index specified)
+ df = DataFrame({'a':[1,2,4,7], 'b':[1.2, 2.3, 5.1, 6.3], 'c':list('abcd'), 'd':[datetime(2000,1,1) for i in range(4)] })
+ result = df.get_dtype_counts()
+ expected = Series({'int64': 1, 'float64' : 1, 'datetime64[ns]': 1, 'object' : 1})
+ assert_series_equal(result, expected)
+
def test_constructor_frame_copy(self):
cop = DataFrame(self.frame, copy=True)
cop['A'] = 5
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 896c7dc34901f..c0f40269ab24c 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -520,6 +520,7 @@ def test_array_finalize(self):
pass
def test_fromValue(self):
+
nans = Series(np.NaN, index=self.ts.index)
self.assert_(nans.dtype == np.float_)
self.assertEqual(len(nans), len(self.ts))
@@ -530,7 +531,7 @@ def test_fromValue(self):
d = datetime.now()
dates = Series(d, index=self.ts.index)
- self.assert_(dates.dtype == np.object_)
+ self.assert_(dates.dtype == 'M8[ns]')
self.assertEqual(len(dates), len(self.ts))
def test_contains(self):
@@ -2295,8 +2296,6 @@ def test_tolist(self):
# datetime64
s = Series(self.ts.index)
rs = s.tolist()
- xp = s.astype(object).values.tolist()
- assert_almost_equal(rs, xp)
self.assertEqual(self.ts.index[0], rs[0])
def test_to_dict(self):
@@ -2620,6 +2619,23 @@ def test_astype_cast_object_int(self):
result = arr.astype(int)
self.assert_(np.array_equal(result, np.arange(1, 5)))
+ def test_astype_datetimes(self):
+ import pandas.tslib as tslib
+
+ s = Series(tslib.iNaT, dtype='M8[ns]', index=range(5))
+ s = s.astype('O')
+ self.assert_(s.dtype == np.object_)
+
+ s = Series([datetime(2001, 1, 2, 0, 0)])
+ s = s.astype('O')
+ self.assert_(s.dtype == np.object_)
+
+ s = Series([datetime(2001, 1, 2, 0, 0) for i in range(3)])
+ s[1] = np.nan
+ self.assert_(s.dtype == 'M8[ns]')
+ s = s.astype('O')
+ self.assert_(s.dtype == np.object_)
+
def test_map(self):
index, data = tm.getMixedTypeDict()
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index aa12d6142d6d8..134e272acf78e 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -1254,9 +1254,6 @@ def test_append_concat(self):
def test_set_dataframe_column_ns_dtype(self):
x = DataFrame([datetime.now(), datetime.now()])
- # self.assert_(x[0].dtype == object)
-
- # x[0] = to_datetime(x[0])
self.assert_(x[0].dtype == np.dtype('M8[ns]'))
def test_groupby_count_dateparseerror(self):
@@ -2391,7 +2388,7 @@ def setUp(self):
def test_auto_conversion(self):
series = Series(list(date_range('1/1/2000', periods=10)))
- self.assert_(series.dtype == object)
+ self.assert_(series.dtype == 'M8[ns]')
def test_constructor_cant_cast_datetime64(self):
self.assertRaises(TypeError, Series,
@@ -2441,13 +2438,17 @@ def test_set_none_nan(self):
self.assert_(self.series[6] is NaT)
def test_intercept_astype_object(self):
+
+ # this test no longer makes sense as series is by default already M8[ns]
+
# Work around NumPy 1.6 bugs
- result = self.series.astype(object)
- result2 = self.series.astype('O')
- expected = Series([x for x in self.series], dtype=object)
+ #result = self.series.astype(object)
+ #result2 = self.series.astype('O')
+
+ expected = Series(self.series, dtype=object)
- assert_series_equal(result, expected)
- assert_series_equal(result2, expected)
+ #assert_series_equal(result, expected)
+ #assert_series_equal(result2, expected)
df = DataFrame({'a': self.series,
'b': np.random.randn(len(self.series))})
diff --git a/pandas/tseries/tests/test_timezones.py b/pandas/tseries/tests/test_timezones.py
index fe692350023f1..adf0d630dc8b0 100644
--- a/pandas/tseries/tests/test_timezones.py
+++ b/pandas/tseries/tests/test_timezones.py
@@ -494,7 +494,7 @@ def test_frame_no_datetime64_dtype(self):
dr = date_range('2011/1/1', '2012/1/1', freq='W-FRI')
dr_tz = dr.tz_localize('US/Eastern')
e = DataFrame({'A': 'foo', 'B': dr_tz}, index=dr)
- self.assert_(e['B'].dtype == object)
+ self.assert_(e['B'].dtype == 'M8[ns]')
def test_hongkong_tz_convert(self):
# #1673
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index bbbe090225b83..1bcf4ad9ea6a5 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -54,33 +54,47 @@ def ints_to_pydatetime(ndarray[int64_t] arr, tz=None):
if tz is not None:
if _is_utc(tz):
for i in range(n):
- pandas_datetime_to_datetimestruct(arr[i], PANDAS_FR_ns, &dts)
- result[i] = datetime(dts.year, dts.month, dts.day, dts.hour,
- dts.min, dts.sec, dts.us, tz)
+ if arr[i] == iNaT:
+ result[i] = np.nan
+ else:
+ pandas_datetime_to_datetimestruct(arr[i], PANDAS_FR_ns, &dts)
+ result[i] = datetime(dts.year, dts.month, dts.day, dts.hour,
+ dts.min, dts.sec, dts.us, tz)
elif _is_tzlocal(tz) or _is_fixed_offset(tz):
for i in range(n):
- pandas_datetime_to_datetimestruct(arr[i], PANDAS_FR_ns, &dts)
- dt = datetime(dts.year, dts.month, dts.day, dts.hour,
- dts.min, dts.sec, dts.us, tz)
- result[i] = dt + tz.utcoffset(dt)
+ if arr[i] == iNaT:
+ result[i] = np.nan
+ else:
+ pandas_datetime_to_datetimestruct(arr[i], PANDAS_FR_ns, &dts)
+ dt = datetime(dts.year, dts.month, dts.day, dts.hour,
+ dts.min, dts.sec, dts.us, tz)
+ result[i] = dt + tz.utcoffset(dt)
else:
trans = _get_transitions(tz)
deltas = _get_deltas(tz)
for i in range(n):
- # Adjust datetime64 timestamp, recompute datetimestruct
- pos = trans.searchsorted(arr[i]) - 1
- inf = tz._transition_info[pos]
- pandas_datetime_to_datetimestruct(arr[i] + deltas[pos],
- PANDAS_FR_ns, &dts)
- result[i] = datetime(dts.year, dts.month, dts.day, dts.hour,
- dts.min, dts.sec, dts.us,
- tz._tzinfos[inf])
+ if arr[i] == iNaT:
+ result[i] = np.nan
+ else:
+
+ # Adjust datetime64 timestamp, recompute datetimestruct
+ pos = trans.searchsorted(arr[i]) - 1
+ inf = tz._transition_info[pos]
+
+ pandas_datetime_to_datetimestruct(arr[i] + deltas[pos],
+ PANDAS_FR_ns, &dts)
+ result[i] = datetime(dts.year, dts.month, dts.day, dts.hour,
+ dts.min, dts.sec, dts.us,
+ tz._tzinfos[inf])
else:
for i in range(n):
- pandas_datetime_to_datetimestruct(arr[i], PANDAS_FR_ns, &dts)
- result[i] = datetime(dts.year, dts.month, dts.day, dts.hour,
- dts.min, dts.sec, dts.us)
+ if arr[i] == iNaT:
+ result[i] = np.nan
+ else:
+ pandas_datetime_to_datetimestruct(arr[i], PANDAS_FR_ns, &dts)
+ result[i] = datetime(dts.year, dts.month, dts.day, dts.hour,
+ dts.min, dts.sec, dts.us)
return result
| 0- and 1-len ndarrays not inferring dtype correctly
datetimes that are single objects not inferring dtype
mixed datetimes and objects (GH #2751), casting datetimes to object
timedelta64 creation on series subtraction (of datetime64[ns])
astype on datetimes to object is now handled (as well as NaT conversions to np.nan)
astype conversion
```
In [4]: s = pd.Series([datetime.datetime(2001, 1, 2, 0, 0) for i in range(3)])
In [5]: s.dtype
Out[5]: dtype('datetime64[ns]')
In [6]: s[1] = np.nan
In [7]: s
Out[7]:
0 2001-01-02 00:00:00
1 NaT
2 2001-01-02 00:00:00
In [8]: s.dtype
Out[8]: dtype('datetime64[ns]')
In [9]: s = s.astype('O')
In [10]: s
Out[10]:
0 2001-01-02 00:00:00
1 NaN
2 2001-01-02 00:00:00
In [11]: s.dtype
Out[11]: dtype('object')
```
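The timedelta64 change described above (a datetime64[ns] Series minus a Timestamp or another datetime64[ns] Series) can be sketched as follows; this is a minimal illustration run against a current pandas, so the repr differs from the 0.10.x output shown in this PR:

```python
import datetime

import pandas as pd

# a datetime64[ns] Series; subtracting a Timestamp (or another
# datetime64[ns] Series) yields timedelta64[ns]
s = pd.Series([datetime.datetime(2001, 1, 2) + datetime.timedelta(days=i)
               for i in range(3)])
delta = s - s.iloc[0]
```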
construction with datetimes
```
In [7]: pd.DataFrame({'A' : 1, 'B' : 'foo', 'C' : 'bar',
'D' : pd.Timestamp("20010101"),
'E' : datetime.datetime(2001,1,2,0,0) }, index=np.arange(5)).get_dtype_counts()
Out[7]:
datetime64[ns] 2
int64 1
object 2
In [16]: pd.DataFrame({'a':[1,2,4,7], 'b':[1.2, 2.3, 5.1, 6.3],
'c':list('abcd'),
'd':[datetime.datetime(2000,1,1) for i in range(4)] }).get_dtype_counts()
Out[16]:
datetime64[ns] 1
float64 1
int64 1
object 1
```
1 len ndarrays
```
In [16]: pd.DataFrame({'a': 1., 'b': 2, 'c': 'foo', 'float64' : np.array(1.,dtype='float64'),
'int64' : np.array(1,dtype='int64')}, index=np.arange(10)).get_dtype_counts()
Out[16]:
float64 2
int64 2
object 1
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/2752 | 2013-01-25T13:55:02Z | 2013-02-10T20:39:14Z | 2013-02-10T20:39:14Z | 2014-06-16T01:01:22Z |
BUG: DataFrame.clip arguments inconsistent (close #GH 2747) | diff --git a/RELEASE.rst b/RELEASE.rst
index 981fa5bed257d..7767129de084a 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -22,6 +22,12 @@ Where to get it
* Binary installers on PyPI: http://pypi.python.org/pypi/pandas
* Documentation: http://pandas.pydata.org
+**API Changes**
+
+ - arguments to DataFrame.clip were inconsistent with numpy and Series clipping (GH2747_)
+
+.. _GH2747: https://github.com/pydata/pandas/issues/2747
+
pandas 0.10.1
=============
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 301ea9d28d001..d625fb87e3fd5 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -5029,7 +5029,7 @@ def f(arr):
data = self._get_numeric_data() if numeric_only else self
return data.apply(f, axis=axis)
- def clip(self, upper=None, lower=None):
+ def clip(self, lower=None, upper=None):
"""
Trim values at input threshold(s)
@@ -5042,6 +5042,11 @@ def clip(self, upper=None, lower=None):
-------
clipped : DataFrame
"""
+
+ # GH 2747 (arguments were reversed)
+ if lower is not None and upper is not None:
+ lower, upper = min(lower,upper), max(lower,upper)
+
return self.apply(lambda x: x.clip(lower=lower, upper=upper))
def clip_upper(self, threshold):
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 09747ba3f09f0..d55e7a7b65fe3 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -6470,6 +6470,22 @@ def test_clip(self):
double = self.frame.clip(upper=median, lower=median)
self.assert_(not (double.values != median).any())
+ def test_dataframe_clip(self):
+
+ # GH #2747
+ df = DataFrame(np.random.randn(1000,2))
+
+ for lb, ub in [(-1,1),(1,-1)]:
+ clipped_df = df.clip(lb, ub)
+
+ lb, ub = min(lb,ub), max(ub,lb)
+ lb_mask = df.values <= lb
+ ub_mask = df.values >= ub
+ mask = ~lb_mask & ~ub_mask
+ self.assert_((clipped_df.values[lb_mask] == lb).all() == True)
+ self.assert_((clipped_df.values[ub_mask] == ub).all() == True)
+ self.assert_((clipped_df.values[mask] == df.values[mask]).all() == True)
+
def test_get_X_columns(self):
# numeric and object columns
| arguments are inconsistent with numpy and Series.clip
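For illustration, a sketch of the corrected argument order (lower first, as in `numpy.clip` and `Series.clip`), runnable against a current pandas:

```python
import pandas as pd

df = pd.DataFrame({'x': [-2.0, 0.5, 3.0]})

# the lower bound comes first, matching numpy.clip and Series.clip
clipped = df.clip(-1, 1)
```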
| https://api.github.com/repos/pandas-dev/pandas/pulls/2750 | 2013-01-24T17:56:03Z | 2013-02-10T21:21:14Z | 2013-02-10T21:21:14Z | 2013-02-10T21:21:52Z |
Add a test case to io.test_parsers about issue #2733 | diff --git a/pandas/io/tests/test_parsers.py b/pandas/io/tests/test_parsers.py
index 6ecfca10c12c3..6e380bdd64099 100644
--- a/pandas/io/tests/test_parsers.py
+++ b/pandas/io/tests/test_parsers.py
@@ -1868,6 +1868,17 @@ def test_usecols_implicit_index_col(self):
tm.assert_frame_equal(result, expected)
+ def test_usecols_with_whitespace(self):
+ # #2733
+ data = 'a b c\n4 apple bat 5.7\n8 orange cow 10'
+
+ result = self.read_csv(StringIO(data), delim_whitespace=True,
+ usecols=('a', 'b'))
+ expected = DataFrame({'a': ['apple', 'orange'],
+ 'b': ['bat', 'cow']}, index=[4, 8])
+
+ tm.assert_frame_equal(result, expected)
+
def test_pure_python_failover(self):
data = "a,b,c\n1,2,3#ignore this!\n4,5,6#ignorethistoo"
The test case doesn't fix the issue; it just adds a test for the `usecols` parameter in `read_csv` when there is whitespace in the data being read. I'm not sure the test case is worth it. Let me know.
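The behavior the test pins down can be sketched like this (using `sep=r'\s+'`, which is equivalent to `delim_whitespace=True`; run against a current pandas where GH #2733 is fixed):

```python
from io import StringIO

import pandas as pd

# three header names but four whitespace-separated fields per row,
# so the first field becomes an implicit index column
data = 'a b c\n4 apple bat 5.7\n8 orange cow 10'

df = pd.read_csv(StringIO(data), sep=r'\s+', usecols=['a', 'b'])
```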
| https://api.github.com/repos/pandas-dev/pandas/pulls/2748 | 2013-01-24T16:18:16Z | 2013-02-10T22:06:19Z | null | 2014-08-04T13:02:02Z |
A patch to make pandas install and use numpy if it doesn't find it on the target host | diff --git a/setup.py b/setup.py
index e96772b627a9d..dabf442a8b04f 100755
--- a/setup.py
+++ b/setup.py
@@ -85,24 +85,13 @@
" use pip or easy_install."
"\n $ pip install 'python-dateutil < 2' 'numpy'")
-try:
- import numpy as np
-except ImportError:
- nonumpy_msg = ("# numpy needed to finish setup. run:\n\n"
- " $ pip install numpy # or easy_install numpy\n")
- sys.exit(nonumpy_msg)
-
-if np.__version__ < '1.6.1':
- msg = "pandas requires NumPy >= 1.6.1 due to datetime64 dependency"
- sys.exit(msg)
-
from distutils.extension import Extension
from distutils.command.build import build
from distutils.command.sdist import sdist
-from distutils.command.build_ext import build_ext
+from distutils.command.build_ext import build_ext as _build_ext
try:
- from Cython.Distutils import build_ext
+ from Cython.Distutils import build_ext as _build_ext
# from Cython.Distutils import Extension # to get pyrex debugging symbols
cython = True
except ImportError:
@@ -110,6 +99,17 @@
from os.path import splitext, basename, join as pjoin
+
+class build_ext(_build_ext):
+ def build_extensions(self):
+ numpy_incl = pkg_resources.resource_filename('numpy', 'core/include')
+
+ for ext in self.extensions:
+ if hasattr(ext, 'include_dirs') and not numpy_incl in ext.include_dirs:
+ ext.include_dirs.append(numpy_incl)
+ _build_ext.build_extensions(self)
+
+
DESCRIPTION = ("Powerful data structures for data analysis, time series,"
"and statistics")
LONG_DESCRIPTION = """
@@ -340,10 +340,7 @@ def check_cython_extensions(self, extensions):
def build_extensions(self):
self.check_cython_extensions(self.extensions)
- self.check_extensions_list(self.extensions)
-
- for ext in self.extensions:
- self.build_extension(ext)
+ build_ext.build_extensions(self)
class CompilationCacheMixin(object):
@@ -576,7 +573,7 @@ def srcpath(name=None, suffix='.pyx', subdir='src'):
lib_depends = []
plib_depends = []
-common_include = [np.get_include(), 'pandas/src/klib', 'pandas/src']
+common_include = ['pandas/src/klib', 'pandas/src']
def pxd(name):
@@ -635,7 +632,7 @@ def pxd(name):
sparse_ext = Extension('pandas._sparse',
sources=[srcpath('sparse', suffix=suffix)],
- include_dirs=[np.get_include()],
+ include_dirs=[],
libraries=libraries)
@@ -657,7 +654,7 @@ def pxd(name):
cppsandbox_ext = Extension('pandas._cppsandbox',
language='c++',
sources=[srcpath('cppsandbox', suffix=suffix)],
- include_dirs=[np.get_include()])
+ include_dirs=[])
extensions.extend([sparse_ext, parser_ext])
| Pandas will normally fail to install if numpy is not installed on the target host. Specifying numpy as a dependency should resolve this, but pandas would still require that numpy be installed first because it needs the numpy include files.
This patch resolves the problem by changing the way the numpy include path is retrieved, so that numpy no longer has to be installed before pandas - they can both be installed together, with numpy coming first of course.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2740 | 2013-01-23T19:30:53Z | 2013-02-10T21:53:47Z | null | 2014-06-30T05:04:02Z |
ENH: add option chop_threshold to control display of small numerical values as zero | diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index 11b5afcf39a52..65531c870744d 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -120,6 +120,12 @@
Default 80
When printing wide DataFrames, this is the width of each line.
"""
+pc_chop_threshold_doc = """
+: float or None
+ Default None
+ if set to a float value, all float values smaller than the given threshold
+ will be displayed as exactly 0 by repr and friends.
+"""
with cf.config_prefix('display'):
cf.register_option('precision', 7, pc_precision_doc, validator=is_int)
@@ -146,6 +152,7 @@
validator=is_text)
cf.register_option('expand_frame_repr', True, pc_expand_repr_doc)
cf.register_option('line_width', 80, pc_line_width_doc)
+ cf.register_option('chop_threshold', None, pc_chop_threshold_doc)
tc_sim_interactive_doc = """
: boolean
diff --git a/pandas/core/format.py b/pandas/core/format.py
index 7fc9fbccced04..3854653e3f3fd 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -1091,8 +1091,21 @@ def __init__(self, *args, **kwargs):
self.formatter = self.float_format
def _format_with(self, fmt_str):
- fmt_values = [fmt_str % x if notnull(x) else self.na_rep
- for x in self.values]
+ def _val(x, threshold):
+ if notnull(x):
+ if threshold is None or abs(x) > get_option("display.chop_threshold"):
+ return fmt_str % x
+ else:
+ if fmt_str.endswith("e"): # engineering format
+ return "0"
+ else:
+ return fmt_str % 0
+ else:
+
+ return self.na_rep
+
+ threshold = get_option("display.chop_threshold")
+ fmt_values = [ _val(x, threshold) for x in self.values]
return _trim_zeros(fmt_values, self.na_rep)
def get_result(self):
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index a14d6027361cc..710120e219860 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -100,6 +100,21 @@ def test_repr_truncation(self):
with option_context("display.max_colwidth", max_len + 2):
self.assert_('...' not in repr(df))
+ def test_repr_chop_threshold(self):
+ df = DataFrame([[0.1, 0.5],[0.5, -0.1]])
+ pd.reset_option("display.chop_threshold") # default None
+ self.assertEqual(repr(df), ' 0 1\n0 0.1 0.5\n1 0.5 -0.1')
+
+ with option_context("display.chop_threshold", 0.2 ):
+ self.assertEqual(repr(df), ' 0 1\n0 0.0 0.5\n1 0.5 0.0')
+
+ with option_context("display.chop_threshold", 0.6 ):
+ self.assertEqual(repr(df), ' 0 1\n0 0 0\n1 0 0')
+
+ with option_context("display.chop_threshold", None ):
+ self.assertEqual(repr(df), ' 0 1\n0 0.1 0.5\n1 0.5 -0.1')
+
+
def test_repr_should_return_str(self):
"""
http://docs.python.org/py3k/reference/datamodel.html#object.__repr__
| Doesn't cover complex numbers for now (no formatter implemented in format.py)
I think the "suppress" kwd choice in numpy is not that awesome, so I borrowed `chop` from
the Mathematica parlance.
#2734
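A minimal sketch of the option in use, run against a current pandas (where `display.chop_threshold` was ultimately adopted):

```python
import pandas as pd

df = pd.DataFrame([[0.1, 0.5], [0.5, -0.1]])

# values whose magnitude falls below the threshold render as zero
with pd.option_context('display.chop_threshold', 0.2):
    chopped = repr(df)

# outside the context manager the default (None) is restored
restored = repr(df)
```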
| https://api.github.com/repos/pandas-dev/pandas/pulls/2739 | 2013-01-23T19:16:43Z | 2013-02-10T21:33:29Z | null | 2014-06-12T08:13:07Z |
improving docstring for swaplevel method | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index e84bc3d91fe34..16ccb9916ef2a 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3211,6 +3211,11 @@ def sortlevel(self, level=0, axis=0, ascending=True, inplace=False):
def swaplevel(self, i, j, axis=0):
"""
Swap levels i and j in a MultiIndex on a particular axis
+
+ Parameters
+ ----------
+ i, j : int, string (can be mixed)
+ Level of index to be swapped. Can pass level name as string.
Returns
-------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 69bde62fdae20..285e53e4c396c 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -811,6 +811,11 @@ def swaplevel(self, i, j, axis=0):
"""
Swap levels i and j in a MultiIndex on a particular axis
+ Parameters
+ ----------
+ i, j : int, string (can be mixed)
+ Level of index to be swapped. Can pass level name as string.
+
Returns
-------
swapped : type of caller (new object)
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 715207f505a32..702eaeadeb937 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -1898,6 +1898,11 @@ def swaplevel(self, i, j):
"""
Swap level i with level j. Do not change the ordering of anything
+ Parameters
+ ----------
+ i, j : int, string (can be mixed)
+ Level of index to be swapped. Can pass level name as string.
+
Returns
-------
swapped : MultiIndex
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 56b6e06844b86..4455a8e47f03e 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2104,6 +2104,11 @@ def swaplevel(self, i, j, copy=True):
"""
Swap levels i and j in a MultiIndex
+ Parameters
+ ----------
+ i, j : int, string (can be mixed)
+ Level of index to be swapped. Can pass level name as string.
+
Returns
-------
swapped : Series
| fixing issue #2648
| https://api.github.com/repos/pandas-dev/pandas/pulls/2737 | 2013-01-23T18:33:10Z | 2013-02-04T11:59:47Z | null | 2013-02-04T11:59:47Z |
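The docstring added in the PR above notes that `i` and `j` may be ints, level-name strings, or a mix. A minimal sketch of that behavior using the modern pandas API (the method signature is unchanged since this PR):

```python
import pandas as pd

# Build a two-level MultiIndex with named levels
idx = pd.MultiIndex.from_tuples(
    [("a", 1), ("a", 2), ("b", 1)], names=["letter", "number"]
)
s = pd.Series([10, 20, 30], index=idx)

# i and j may be ints, strings, or mixed -- as the new docstring states
by_pos = s.swaplevel(0, 1)
by_name = s.swaplevel("letter", "number")

assert list(by_pos.index.names) == ["number", "letter"]
assert by_pos.equals(by_name)  # both spellings swap the same levels
```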
ENH: provide squeeze method for removing 1-len dimensions, close (GH #2544) | diff --git a/RELEASE.rst b/RELEASE.rst
index 981fa5bed257d..e4b69488ffe9a 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -22,6 +22,19 @@ Where to get it
* Binary installers on PyPI: http://pypi.python.org/pypi/pandas
* Documentation: http://pandas.pydata.org
+**New features**
+
+ - added ``squeeze`` method that will reduce dimensionality of the object by squeezing any len-1 dimensions
+
+pandas 0.10.2
+=============
+
+**Release date:** 2013-02-??
+
+**New features**
+
+ - Add ``squeeze`` function to reduce dimensionality of 1-len objects
+
pandas 0.10.1
=============
diff --git a/doc/source/dsintro.rst b/doc/source/dsintro.rst
index 362ef8ef7d7fb..64c0fe2c54ac1 100644
--- a/doc/source/dsintro.rst
+++ b/doc/source/dsintro.rst
@@ -819,6 +819,17 @@ For example, using the earlier example data, we could do:
wp.minor_axis
wp.minor_xs('C')
+Squeezing
+~~~~~~~~~
+
+Another way to change the dimensionality of an object is to ``squeeze`` a 1-len object, similar to ``wp['Item1']``
+
+.. ipython:: python
+
+ wp.reindex(items=['Item1']).squeeze()
+ wp.reindex(items=['Item1'],minor=['B']).squeeze()
+
+
Conversion to DataFrame
~~~~~~~~~~~~~~~~~~~~~~~
@@ -834,6 +845,7 @@ method:
minor_axis=['a', 'b', 'c', 'd'])
panel.to_frame()
+
Panel4D (Experimental)
----------------------
diff --git a/doc/source/v0.10.2.txt b/doc/source/v0.10.2.txt
new file mode 100644
index 0000000000000..6e044ef243f2d
--- /dev/null
+++ b/doc/source/v0.10.2.txt
@@ -0,0 +1,29 @@
+.. _whatsnew_0102:
+
+v0.10.2 (February ??, 2013)
+---------------------------
+
+This is a minor release from 0.10.1 and includes many new features and
+enhancements along with a large number of bug fixes. There are also a number of
+important API changes that long-time pandas users should pay close attention
+to.
+
+New features
+~~~~~~~~~~~~
+
+``squeeze`` to possibly remove length-1 dimensions from an object.
+
+.. ipython:: python
+
+ p = Panel(randn(3,4,4),items=['ItemA','ItemB','ItemC'],
+ major_axis=date_range('20010102',periods=4),
+ minor_axis=['A','B','C','D'])
+ p
+ p.reindex(items=['ItemA']).squeeze()
+ p.reindex(items=['ItemA'],minor=['B']).squeeze()
+
+
+See the `full release notes
+<https://github.com/pydata/pandas/blob/master/RELEASE.rst>`__ or issue tracker
+on GitHub for a complete list.
+
diff --git a/doc/source/whatsnew.rst b/doc/source/whatsnew.rst
index 6c125c45a2599..646610ecccd88 100644
--- a/doc/source/whatsnew.rst
+++ b/doc/source/whatsnew.rst
@@ -16,6 +16,8 @@ What's New
These are new features and improvements of note in each release.
+.. include:: v0.10.2.txt
+
.. include:: v0.10.1.txt
.. include:: v0.10.0.txt
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 69bde62fdae20..d2a50c086ab14 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -587,6 +587,13 @@ def pop(self, item):
del self[item]
return result
+ def squeeze(self):
+ """ squeeze length 1 dimensions """
+ try:
+ return self.ix[tuple([ slice(None) if len(a) > 1 else a[0] for a in self.axes ])]
+ except:
+ return self
+
def _expand_axes(self, key):
new_axes = []
for k, ax in zip(key, self.axes):
diff --git a/pandas/tests/test_ndframe.py b/pandas/tests/test_ndframe.py
index e017bf07039d7..2e82e43c063c1 100644
--- a/pandas/tests/test_ndframe.py
+++ b/pandas/tests/test_ndframe.py
@@ -26,6 +26,33 @@ def test_astype(self):
casted = self.ndf.astype(int)
self.assert_(casted.values.dtype == np.int64)
+ def test_squeeze(self):
+ # noop
+ for s in [ t.makeFloatSeries(), t.makeStringSeries(), t.makeObjectSeries() ]:
+ t.assert_series_equal(s.squeeze(),s)
+ for df in [ t.makeTimeDataFrame() ]:
+ t.assert_frame_equal(df.squeeze(),df)
+ for p in [ t.makePanel() ]:
+ t.assert_panel_equal(p.squeeze(),p)
+ for p4d in [ t.makePanel4D() ]:
+ t.assert_panel4d_equal(p4d.squeeze(),p4d)
+
+ # squeezing
+ df = t.makeTimeDataFrame().reindex(columns=['A'])
+ t.assert_series_equal(df.squeeze(),df['A'])
+
+ p = t.makePanel().reindex(items=['ItemA'])
+ t.assert_frame_equal(p.squeeze(),p['ItemA'])
+
+ p = t.makePanel().reindex(items=['ItemA'],minor_axis=['A'])
+ t.assert_series_equal(p.squeeze(),p.ix['ItemA',:,'A'])
+
+ p4d = t.makePanel4D().reindex(labels=['label1'])
+ t.assert_panel_equal(p4d.squeeze(),p4d['label1'])
+
+ p4d = t.makePanel4D().reindex(labels=['label1'],items=['ItemA'])
+ t.assert_frame_equal(p4d.squeeze(),p4d.ix['label1','ItemA'])
+
if __name__ == '__main__':
import nose
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
| ```
In [1]: p = Panel(randn(3,4,4),items=['ItemA','ItemB','ItemC'],
...: major_axis=date_range('20010102',periods=4),
...: minor_axis=['A','B','C','D'])
...:
In [2]: p
Out[2]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 4 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2001-01-02 00:00:00 to 2001-01-05 00:00:00
Minor_axis axis: A to D
In [3]: p.reindex(items=['ItemA']).squeeze()
Out[3]:
A B C D
2001-01-02 -1.182806 -1.835015 -0.008415 1.665945
2001-01-03 -0.347332 -0.257820 0.167772 -1.193518
2001-01-04 -0.502643 0.055733 0.733034 -1.070536
2001-01-05 -1.900588 -1.728706 1.060564 -1.138837
In [4]: p.reindex(items=['ItemA'],minor=['B']).squeeze()
Out[4]:
2001-01-02 -1.835015
2001-01-03 -0.257820
2001-01-04 0.055733
2001-01-05 -1.728706
Freq: D, Name: B
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/2734 | 2013-01-23T16:09:37Z | 2013-02-10T21:31:31Z | null | 2014-06-18T21:31:15Z |
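The one-line implementation in `generic.py` above scalar-indexes every length-1 axis and full-slices the rest. A sketch of that idea on a raw ndarray, plus the pandas method itself (still available on current pandas, though today it only applies to Series/DataFrame since Panel was removed):

```python
import numpy as np
import pandas as pd

def squeeze_sketch(arr):
    # Mirrors the generic.py logic: pick index 0 on every length-1 axis,
    # keep a full slice on every longer axis
    key = tuple(0 if n == 1 else slice(None) for n in arr.shape)
    return arr[key]

assert squeeze_sketch(np.zeros((1, 4, 4))).shape == (4, 4)
assert squeeze_sketch(np.zeros((1, 4, 1))).shape == (4,)

# The pandas method: a one-column DataFrame squeezes down to a Series
df = pd.DataFrame({"A": [1.0, 2.0, 3.0]})
assert isinstance(df.squeeze(), pd.Series)
```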
BUG: Mismatch between get and set behavior for slices of floating indices (fixes #2727) | diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 6966b0ab76aed..5cbbdfcfe100e 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -445,23 +445,27 @@ def _convert_to_indexer(self, obj, axis=0):
if isinstance(obj, slice):
ltype = labels.inferred_type
- if ltype == 'floating':
- int_slice = _is_int_slice(obj)
- else:
- # floats that are within tolerance of int used
- int_slice = _is_index_slice(obj)
+ # in case of providing all floats, use label-based indexing
+ float_slice = (labels.inferred_type == 'floating'
+ and _is_float_slice(obj))
+
+ # floats that are within tolerance of int used as positions
+ int_slice = _is_index_slice(obj)
null_slice = obj.start is None and obj.stop is None
- # could have integers in the first level of the MultiIndex
+
+ # could have integers in the first level of the MultiIndex,
+ # in which case we wouldn't want to do position-based slicing
position_slice = (int_slice
and not ltype == 'integer'
- and not isinstance(labels, MultiIndex))
+ and not isinstance(labels, MultiIndex)
+ and not float_slice)
start, stop = obj.start, obj.stop
# last ditch effort: if we are mixed and have integers
try:
- if 'mixed' in ltype and int_slice:
+ if position_slice and 'mixed' in ltype:
if start is not None:
i = labels.get_loc(start)
if stop is not None:
@@ -479,7 +483,7 @@ def _convert_to_indexer(self, obj, axis=0):
slicer = slice(i, j, obj.step)
except Exception:
if _is_index_slice(obj):
- if labels.inferred_type == 'integer':
+ if ltype == 'integer':
raise
slicer = obj
else:
@@ -547,34 +551,36 @@ def _get_slice_axis(self, slice_obj, axis=0):
axis_name = obj._get_axis_name(axis)
labels = getattr(obj, axis_name)
- int_slice = _is_index_slice(slice_obj)
-
- start = slice_obj.start
- stop = slice_obj.stop
+ ltype = labels.inferred_type
# in case of providing all floats, use label-based indexing
float_slice = (labels.inferred_type == 'floating'
and _is_float_slice(slice_obj))
+ # floats that are within tolerance of int used as positions
+ int_slice = _is_index_slice(slice_obj)
+
null_slice = slice_obj.start is None and slice_obj.stop is None
- # could have integers in the first level of the MultiIndex, in which
- # case we wouldn't want to do position-based slicing
+ # could have integers in the first level of the MultiIndex,
+ # in which case we wouldn't want to do position-based slicing
position_slice = (int_slice
- and labels.inferred_type != 'integer'
+ and not ltype == 'integer'
and not isinstance(labels, MultiIndex)
and not float_slice)
+ start, stop = slice_obj.start, slice_obj.stop
+
# last ditch effort: if we are mixed and have integers
try:
- if 'mixed' in labels.inferred_type and int_slice:
+ if position_slice and 'mixed' in ltype:
if start is not None:
i = labels.get_loc(start)
if stop is not None:
j = labels.get_loc(stop)
position_slice = False
except KeyError:
- if labels.inferred_type == 'mixed-integer-float':
+ if ltype == 'mixed-integer-float':
raise
if null_slice or position_slice:
@@ -585,7 +591,7 @@ def _get_slice_axis(self, slice_obj, axis=0):
slicer = slice(i, j, slice_obj.step)
except Exception:
if _is_index_slice(slice_obj):
- if labels.inferred_type == 'integer':
+ if ltype == 'integer':
raise
slicer = slice_obj
else:
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 09747ba3f09f0..f1e203d0a58d9 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -1078,6 +1078,45 @@ def test_getitem_setitem_float_labels(self):
self.assertRaises(Exception, df.ix.__getitem__, slice(1, 2))
self.assertRaises(Exception, df.ix.__setitem__, slice(1, 2), 0)
+ # #2727
+ index = Index([1.0, 2.5, 3.5, 4.5, 5.0])
+ df = DataFrame(np.random.randn(5, 5), index=index)
+
+ # positional slicing!
+ result = df.ix[1.0:5]
+ expected = df.reindex([2.5, 3.5, 4.5, 5.0])
+ assert_frame_equal(result, expected)
+ self.assertEqual(len(result), 4)
+
+ # positional again
+ result = df.ix[4:5]
+ expected = df.reindex([5.0])
+ assert_frame_equal(result, expected)
+ self.assertEqual(len(result), 1)
+
+ # label-based
+ result = df.ix[1.0:5.0]
+ expected = df.reindex([1.0, 2.5, 3.5, 4.5, 5.0])
+ assert_frame_equal(result, expected)
+ self.assertEqual(len(result), 5)
+
+ cp = df.copy()
+ # positional slicing!
+ cp.ix[1.0:5] = 0
+ self.assert_((cp.ix[1.0:5] == 0).values.all())
+ self.assert_((cp.ix[0:1] == df.ix[0:1]).values.all())
+
+ cp = df.copy()
+ # positional again
+ cp.ix[4:5] = 0
+ self.assert_((cp.ix[4:5] == 0).values.all())
+ self.assert_((cp.ix[0:4] == df.ix[0:4]).values.all())
+
+ cp = df.copy()
+ # label-based
+ cp.ix[1.0:5.0] = 0
+ self.assert_((cp.ix[1.0:5.0] == 0).values.all())
+
def test_setitem_single_column_mixed(self):
df = DataFrame(randn(5, 3), index=['a', 'b', 'c', 'd', 'e'],
columns=['foo', 'bar', 'baz'])
fixes #2727: `_NDFrameIndexer._convert_to_indexer` modified to match the behavior of `_NDFrameIndexer._get_slice_axis`; new tests added to the existing unit test
| https://api.github.com/repos/pandas-dev/pandas/pulls/2730 | 2013-01-23T01:32:00Z | 2013-01-30T17:45:53Z | null | 2013-02-08T15:17:53Z |
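The tests above exercise the old `.ix` accessor, which mixed label- and position-based slicing and has since been removed. The same label-vs-position distinction survives in modern pandas as `.loc` (label-based, endpoints inclusive) versus `.iloc` (position-based, half-open), sketched here on the float index from the test:

```python
import numpy as np
import pandas as pd

idx = pd.Index([1.0, 2.5, 3.5, 4.5, 5.0])
df = pd.DataFrame(np.arange(5.0), index=idx, columns=["x"])

# label-based: endpoints are index labels and the slice is inclusive
label_based = df.loc[1.0:5.0]
assert len(label_based) == 5

# position-based: integer positions, half-open like Python slicing
pos_based = df.iloc[1:5]
assert len(pos_based) == 4
assert list(pos_based.index) == [2.5, 3.5, 4.5, 5.0]
```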
dataframe.convert_objects now ignores datetime64 outside supported timeperiod | diff --git a/pandas/src/inference.pyx b/pandas/src/inference.pyx
index 41ac1b3f3480f..18a6d2a0a3b94 100644
--- a/pandas/src/inference.pyx
+++ b/pandas/src/inference.pyx
@@ -370,6 +370,7 @@ def maybe_convert_objects(ndarray[object] objects, bint try_float=0,
'''
Type inference function-- convert object array to proper dtype
'''
+ from tslib import Timestamp
cdef:
Py_ssize_t i, n
ndarray[float64_t] floats
@@ -411,7 +412,8 @@ def maybe_convert_objects(ndarray[object] objects, bint try_float=0,
elif util.is_float_object(val):
floats[i] = complexes[i] = val
seen_float = 1
- elif util.is_datetime64_object(val):
+ elif util.is_datetime64_object(val) and \
+ Timestamp(np.iinfo(np.int64).min+1) <= val <= Timestamp(np.iinfo(np.int64).max) :
if convert_datetime:
idatetimes[i] = convert_to_tsobject(val, None).value
seen_datetime = 1
@@ -427,7 +429,8 @@ def maybe_convert_objects(ndarray[object] objects, bint try_float=0,
elif util.is_complex_object(val):
complexes[i] = val
seen_complex = 1
- elif PyDateTime_Check(val) or util.is_datetime64_object(val):
+ elif PyDateTime_Check(val) and util.is_datetime64_object(val) and \
+ Timestamp(np.iinfo(np.int64).min+1) <= val <= Timestamp(np.iinfo(np.int64).max) :
if convert_datetime:
seen_datetime = 1
idatetimes[i] = convert_to_tsobject(val, None).value
diff --git a/pandas/tests/test_tseries.py b/pandas/tests/test_tseries.py
index 7e5341fd5b311..d4a3450969bb6 100644
--- a/pandas/tests/test_tseries.py
+++ b/pandas/tests/test_tseries.py
@@ -334,6 +334,12 @@ def test_convert_objects_complex_number():
result = lib.maybe_convert_objects(arr)
assert(issubclass(result.dtype.type, np.complexfloating))
+def test_convert_objects_datetime64():
+ from pandas import DataFrame
+ # GH2722
+ li = [datetime(2262, 4, 12), datetime(1677, 9, 21), datetime(2000,1,1)]
+ df = DataFrame(li)
+ df.convert_objects(True) # no exception
def test_rank():
from pandas.compat.scipy import rankdata
| fixes #2722
| https://api.github.com/repos/pandas-dev/pandas/pulls/2724 | 2013-01-22T13:52:54Z | 2013-01-22T14:54:37Z | null | 2014-08-14T16:13:14Z |
lag_plot() accepts variable lags | diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py
index 8745be6ad6a77..07d60db783d6d 100644
--- a/pandas/tests/test_graphics.py
+++ b/pandas/tests/test_graphics.py
@@ -160,6 +160,7 @@ def test_autocorrelation_plot(self):
def test_lag_plot(self):
from pandas.tools.plotting import lag_plot
_check_plot_works(lag_plot, self.ts)
+ _check_plot_works(lag_plot, self.ts, lag=5)
@slow
def test_bootstrap_plot(self):
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index 165ad73ee1312..042ab3418be60 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -520,12 +520,13 @@ def random_color(column):
return ax
-def lag_plot(series, ax=None, **kwds):
+def lag_plot(series, lag=1, ax=None, **kwds):
"""Lag plot for time series.
Parameters:
-----------
series: Time series
+ lag: lag of the scatter plot, default 1
ax: Matplotlib axis object, optional
kwds: Matplotlib scatter method keyword arguments, optional
@@ -535,12 +536,12 @@ def lag_plot(series, ax=None, **kwds):
"""
import matplotlib.pyplot as plt
data = series.values
- y1 = data[:-1]
- y2 = data[1:]
+ y1 = data[:-lag]
+ y2 = data[lag:]
if ax is None:
ax = plt.gca()
ax.set_xlabel("y(t)")
- ax.set_ylabel("y(t + 1)")
+ ax.set_ylabel("y(t + %s)" % lag)
ax.scatter(y1, y2, **kwds)
return ax
| the default is still 1, so nothing should change if one wants a 'classic' lag plot
| https://api.github.com/repos/pandas-dev/pandas/pulls/2723 | 2013-01-22T13:10:30Z | 2013-02-10T21:27:11Z | null | 2014-06-20T08:05:53Z |
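The change above generalizes `y1 = data[:-1]; y2 = data[1:]` to an arbitrary lag. The pairing it plots, y(t) against y(t + lag), can be checked without matplotlib:

```python
import numpy as np

# the pairing lag_plot scatters: y(t) on the x-axis, y(t + lag) on the y-axis
data = np.arange(10)
lag = 3
y1, y2 = data[:-lag], data[lag:]

assert len(y1) == len(y2) == len(data) - lag
# for this linearly increasing series, every pair differs by exactly `lag`
assert (y2 - y1 == lag).all()
```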
DOC: whatsnew example for PR #2595 | diff --git a/doc/source/v0.10.1.txt b/doc/source/v0.10.1.txt
index 8aa2dad2b35a0..a2b99bf6c4174 100644
--- a/doc/source/v0.10.1.txt
+++ b/doc/source/v0.10.1.txt
@@ -14,6 +14,21 @@ API changes
New features
~~~~~~~~~~~~
+Datetime64[ns] columns in a DataFrame (or a Series) allow the use of ``np.nan`` to indicate a nan value, in addition to the traditional ``NaT``, or not-a-time. This allows convenient nan setting in a generic way. Furthermore, datetime64 columns are created by default when using ``datetime.datetime`` or ``Timestamp`` objects, rather than defaulting to object dtype.
+
+.. ipython:: python
+
+ df = DataFrame(randn(6,2),date_range('20010102',periods=6),columns=['A','B'])
+ df['timestamp'] = Timestamp('20010103')
+ df
+
+ # datetime64[ns] out of the box
+ df.get_dtype_counts()
+
+ # use the traditional nan, which is mapped to NaT internally
+ df.ix[2:4,['A','timestamp']] = np.nan
+ df
+
HDFStore
~~~~~~~~
@@ -59,7 +74,7 @@ You can now store ``datetime64`` in data columns
df_mixed = df.copy()
df_mixed['datetime64'] = Timestamp('20010102')
- df_mixed.ix[3:4,['A','B']] = np.nan
+ df_mixed.ix[3:4,['A','B','datetime64']] = np.nan
store.append('df_mixed', df_mixed)
df_mixed1 = store.select('df_mixed')
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 301ea9d28d001..4c3922d313684 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4287,9 +4287,9 @@ def applymap(self, func):
applied : DataFrame
"""
- # if we have a dtype == 'M8[ns]', provide boxed values
+ # if we have a datetime64 type, provide boxed values
def infer(x):
- if x.dtype == 'M8[ns]':
+ if com.is_datetime64_dtype(x):
x = lib.map_infer(x, lib.Timestamp)
return lib.map_infer(x, func)
return self.apply(infer)
| just a small whatsnew doc for the datetime64[ns] example when using np.nan to indicate NaT
| https://api.github.com/repos/pandas-dev/pandas/pulls/2712 | 2013-01-20T04:04:05Z | 2013-01-30T22:03:49Z | null | 2014-06-16T01:01:37Z |
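The behavior documented in this whatsnew entry still holds on current pandas: assigning ``np.nan`` into a ``datetime64[ns]`` Series stores ``NaT`` and preserves the dtype. A sketch with the modern accessors in place of the doc's ``.ix``:

```python
import numpy as np
import pandas as pd

s = pd.Series(pd.to_datetime(["2001-01-03", "2001-01-04", "2001-01-05"]))
assert str(s.dtype) == "datetime64[ns]"

# np.nan assigned into a datetime64[ns] Series is mapped to NaT internally
s.iloc[1] = np.nan
assert pd.isna(s.iloc[1])
assert str(s.dtype) == "datetime64[ns]"  # dtype is preserved, not upcast
```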
value_counts() can now compute relative frequencies. | diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 413923262c6b0..4bb990a57cb4d 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -147,7 +147,7 @@ def factorize(values, sort=False, order=None, na_sentinel=-1):
return labels, uniques
-def value_counts(values, sort=True, ascending=False):
+def value_counts(values, sort=True, ascending=False, normalize=False):
"""
Compute a histogram of the counts of non-null values
@@ -158,6 +158,8 @@ def value_counts(values, sort=True, ascending=False):
Sort by values
ascending : boolean, default False
Sort in ascending order
+ normalize: boolean, default False
+ If True then compute a relative histogram
Returns
-------
@@ -190,6 +192,9 @@ def value_counts(values, sort=True, ascending=False):
if not ascending:
result = result[::-1]
+ if normalize:
+ result = result / float(values.size)
+
return result
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 2870bb1ab05b1..c6fe396b08867 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1379,18 +1379,25 @@ def count(self, level=None):
return notnull(self.values).sum()
- def value_counts(self):
+ def value_counts(self, normalize=False):
"""
Returns Series containing counts of unique values. The resulting Series
will be in descending order so that the first element is the most
frequently-occurring element. Excludes NA values
+ Parameters
+ ----------
+ normalize: boolean, default False
+ If True then the Series returned will contain the relative
+ frequencies of the unique values.
+
Returns
-------
counts : Series
"""
from pandas.core.algorithms import value_counts
- return value_counts(self.values, sort=True, ascending=False)
+ return value_counts(self.values, sort=True, ascending=False,
+ normalize=normalize)
def unique(self):
"""
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index cef309fd59503..74b41f4ef1cd7 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -2383,6 +2383,11 @@ def test_value_counts_nunique(self):
expected = Series([4, 3, 2, 1], index=['b', 'a', 'd', 'c'])
assert_series_equal(hist, expected)
+ # relative histogram.
+ hist = s.value_counts(normalize=True)
+ expected = Series([.4, .3, .2, .1], index=['b', 'a', 'd', 'c'])
+ assert_series_equal(hist, expected)
+
self.assertEquals(s.nunique(), 4)
# handle NA's properly
| https://api.github.com/repos/pandas-dev/pandas/pulls/2710 | 2013-01-19T23:49:17Z | 2013-03-14T03:57:30Z | null | 2014-06-14T03:07:33Z | |
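The `normalize=True` keyword added above simply divides the counts by the number of values, so the result sums to 1. A short example against the current pandas API:

```python
import pandas as pd

s = pd.Series(list("aabbbc"))  # 6 values: 2 a, 3 b, 1 c

counts = s.value_counts()
freqs = s.value_counts(normalize=True)  # relative frequencies

assert counts["b"] == 3
assert abs(freqs["b"] - 0.5) < 1e-12
assert abs(freqs.sum() - 1.0) < 1e-12
```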
ENH/BUG/DOC: allow propogation and coexistance of numeric dtypes | diff --git a/RELEASE.rst b/RELEASE.rst
index 981fa5bed257d..5db564176959e 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -22,6 +22,42 @@ Where to get it
* Binary installers on PyPI: http://pypi.python.org/pypi/pandas
* Documentation: http://pandas.pydata.org
+pandas 0.10.2
+=============
+
+**Release date:** 2013-??-??
+
+**New features**
+
+ - Allow mixed dtypes (e.g. ``float32/float64/int32/int16/int8``) to coexist in DataFrames and propagate in operations
+
+**Improvements to existing features**
+
+ - added ``blocks`` attribute to DataFrames, to return a dict of dtypes to homogeneously dtyped DataFrames
+ - added keyword ``convert_numeric`` to ``convert_objects()`` to try to convert object dtypes to numeric types
+ - ``convert_dates`` in ``convert_objects`` can now be ``coerce`` which will return a datetime64[ns] dtype
+ with non-convertibles set as ``NaT``; will preserve an all-nan object (e.g. strings)
+ - Series print output now includes the dtype by default
+
+**API Changes**
+
+ - Do not automatically upcast numeric specified dtypes to ``int64`` or ``float64`` (GH622_ and GH797_)
+ - Guarantee that ``convert_objects()`` for Series/DataFrame always returns a copy
+ - groupby operations will respect dtypes for numeric float operations (float32/float64); other types will be operated on,
+ and will try to cast back to the input dtype (e.g. if an int is passed, as long as the output doesn't have nans,
+ then an int will be returned)
+ - backfill/pad/take/diff/ohlc will now support ``float32/int16/int8`` operations
+ - Integer block types will upcast as needed in where operations (GH2793_)
+
+**Bug Fixes**
+
+ - Fix seg fault on empty data frame when fillna with ``pad`` or ``backfill`` (GH2778_)
+
+.. _GH622: https://github.com/pydata/pandas/issues/622
+.. _GH797: https://github.com/pydata/pandas/issues/797
+.. _GH2778: https://github.com/pydata/pandas/issues/2778
+.. _GH2793: https://github.com/pydata/pandas/issues/2793
+
pandas 0.10.1
=============
@@ -36,6 +72,7 @@ pandas 0.10.1
- Restored inplace=True behavior returning self (same object) with
deprecation warning until 0.11 (GH1893_)
- ``HDFStore``
+
- refactored HFDStore to deal with non-table stores as objects, will allow future enhancements
- removed keyword ``compression`` from ``put`` (replaced by keyword
``complib`` to be consistent across library)
@@ -49,7 +86,7 @@ pandas 0.10.1
- support data column indexing and selection, via ``data_columns`` keyword in append
- support write chunking to reduce memory footprint, via ``chunksize``
keyword to append
- - support automagic indexing via ``index`` keywork to append
+ - support automagic indexing via ``index`` keyword to append
- support ``expectedrows`` keyword in append to inform ``PyTables`` about
the expected tablesize
- support ``start`` and ``stop`` keywords in select to limit the row
diff --git a/doc/source/dsintro.rst b/doc/source/dsintro.rst
index 362ef8ef7d7fb..6919f67db5b78 100644
--- a/doc/source/dsintro.rst
+++ b/doc/source/dsintro.rst
@@ -450,15 +450,101 @@ DataFrame:
df.xs('b')
df.ix[2]
-Note if a DataFrame contains columns of multiple dtypes, the dtype of the row
-will be chosen to accommodate all of the data types (dtype=object is the most
-general).
-
For a more exhaustive treatment of more sophisticated label-based indexing and
slicing, see the :ref:`section on indexing <indexing>`. We will address the
fundamentals of reindexing / conforming to new sets of lables in the
:ref:`section on reindexing <basics.reindexing>`.
+DataTypes
+~~~~~~~~~
+
+.. _dsintro.column_types:
+
+The main types stored in pandas objects are float, int, boolean, datetime64[ns],
+and object. A convenient ``dtypes`` attribute returns a Series with the data type of
+each column.
+
+.. ipython:: python
+
+ df['integer'] = 1
+ df['int32'] = df['integer'].astype('int32')
+ df['float32'] = Series([1.0]*len(df),dtype='float32')
+ df['timestamp'] = Timestamp('20010102')
+ df.dtypes
+
+If a DataFrame contains columns of multiple dtypes, the dtype of the column
+will be chosen to accommodate all of the data types (dtype=object is the most
+general).
+
+The related method ``get_dtype_counts`` will return the number of columns of
+each type:
+
+.. ipython:: python
+
+ df.get_dtype_counts()
+
+Numeric dtypes will propagate and can coexist in DataFrames (starting in v0.10.2).
+If a dtype is passed (either directly via the ``dtype`` keyword, a passed ``ndarray``,
+or a passed ``Series``), then it will be preserved in DataFrame operations. Furthermore, different numeric dtypes will **NOT** be combined. The following example will give you a taste.
+
+.. ipython:: python
+
+ df1 = DataFrame(randn(8, 1), columns = ['A'], dtype = 'float32')
+ df1
+ df1.dtypes
+ df2 = DataFrame(dict( A = Series(randn(8),dtype='float16'),
+ B = Series(randn(8)),
+ C = Series(np.array(randn(8),dtype='uint8')) ))
+ df2
+ df2.dtypes
+
+ # here you get some upcasting
+ df3 = df1.reindex_like(df2).fillna(value=0.0) + df2
+ df3
+ df3.dtypes
+
+ # this is lowest-common-denominator upcasting (meaning you get the dtype which can accommodate all of the types)
+ df3.values.dtype
+
+Upcasting is always according to the **numpy** rules. If two different dtypes are involved in an operation, then the more *general* one will be used as the result of the operation.
+
+DataType Conversion
+~~~~~~~~~~~~~~~~~~~
+
+You can use the ``astype`` method to convert dtypes from one to another. These *always* return a copy.
+In addition, ``convert_objects`` will attempt a *soft* conversion of any *object* dtypes, meaning that if all the objects in a Series are of the same type, the Series
+will have that dtype.
+
+.. ipython:: python
+
+ df3
+ df3.dtypes
+
+ # conversion of dtypes
+ df3.astype('float32').dtypes
+
+To force conversion of specific numeric types, pass ``convert_numeric = True``.
+This will force strings and numbers alike to be numbers if possible; otherwise they will be set to ``np.nan``.
+To force conversion to ``datetime64[ns]``, pass ``convert_dates = 'coerce'``.
+This will convert any datetimelike object to dates, forcing other values to ``NaT``.
+
+.. ipython:: python
+
+ # mixed type conversions
+ df3['D'] = '1.'
+ df3['E'] = '1'
+ df3.convert_objects(convert_numeric=True).dtypes
+
+ # same, but specific dtype conversion
+ df3['D'] = df3['D'].astype('float16')
+ df3['E'] = df3['E'].astype('int32')
+ df3.dtypes
+
+ # forcing date coercion
+ s = Series([datetime(2001,1,1,0,0), 'foo', 1.0, 1, Timestamp('20010104'), '20010105'],dtype='O')
+ s
+ s.convert_objects(convert_dates='coerce')
+
Data alignment and arithmetic
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -633,26 +719,6 @@ You can also disable this feature via the ``expand_frame_repr`` option:
reset_option('expand_frame_repr')
-DataFrame column types
-~~~~~~~~~~~~~~~~~~~~~~
-
-.. _dsintro.column_types:
-
-The four main types stored in pandas objects are float, int, boolean, and
-object. A convenient ``dtypes`` attribute return a Series with the data type of
-each column:
-
-.. ipython:: python
-
- baseball.dtypes
-
-The related method ``get_dtype_counts`` will return the number of columns of
-each type:
-
-.. ipython:: python
-
- baseball.get_dtype_counts()
-
DataFrame column attribute access and IPython completion
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index 33c5db2d24102..969173d0d3569 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -304,6 +304,34 @@ so that the original data can be modified without creating a copy:
df.mask(df >= 0)
+Upcasting Gotchas
+~~~~~~~~~~~~~~~~~
+
+Performing indexing operations on ``integer`` type data can easily upcast the data to ``floating``.
+The dtype of the input data will be preserved in cases where ``nans`` are not introduced (coming soon).
+
+.. ipython:: python
+
+ dfi = df.astype('int32')
+ dfi['E'] = 1
+ dfi
+ dfi.dtypes
+
+ casted = dfi[dfi>0]
+ casted
+ casted.dtypes
+
+While float dtypes are unchanged.
+
+.. ipython:: python
+
+ df2 = df.copy()
+ df2['A'] = df2['A'].astype('float32')
+ df2.dtypes
+
+ casted = df2[df2>0]
+ casted
+ casted.dtypes
Take Methods
~~~~~~~~~~~~
diff --git a/doc/source/v0.10.2.txt b/doc/source/v0.10.2.txt
new file mode 100644
index 0000000000000..d87cf86d56864
--- /dev/null
+++ b/doc/source/v0.10.2.txt
@@ -0,0 +1,95 @@
+.. _whatsnew_0102:
+
+v0.10.2 (February ??, 2013)
+---------------------------
+
+This is a minor release from 0.10.1 and includes many new features and
+enhancements along with a large number of bug fixes. There are also a number of
+important API changes that long-time pandas users should pay close attention
+to.
+
+API changes
+~~~~~~~~~~~
+
+Numeric dtypes will propagate and can coexist in DataFrames. If a dtype is passed (either directly via the ``dtype`` keyword, a passed ``ndarray``, or a passed ``Series``), then it will be preserved in DataFrame operations. Furthermore, different numeric dtypes will **NOT** be combined. The following example will give you a taste.
+
+**Dtype Specification**
+
+.. ipython:: python
+
+ df1 = DataFrame(randn(8, 1), columns = ['A'], dtype = 'float32')
+ df1
+ df1.dtypes
+ df2 = DataFrame(dict( A = Series(randn(8),dtype='float16'), B = Series(randn(8)), C = Series(randn(8),dtype='uint8') ))
+ df2
+ df2.dtypes
+
+ # here you get some upcasting
+ df3 = df1.reindex_like(df2).fillna(value=0.0) + df2
+ df3
+ df3.dtypes
+
+**Dtype conversion**
+
+.. ipython:: python
+
+ # this is lowest-common-denominator upcasting (meaning you get the dtype which can accommodate all of the types)
+ df3.values.dtype
+
+ # conversion of dtypes
+ df3.astype('float32').dtypes
+
+ # mixed type conversions
+ df3['D'] = '1.'
+ df3['E'] = '1'
+ df3.convert_objects(convert_numeric=True).dtypes
+
+ # same, but specific dtype conversion
+ df3['D'] = df3['D'].astype('float16')
+ df3['E'] = df3['E'].astype('int32')
+ df3.dtypes
+
+ # forcing date coercion
+ s = Series([datetime(2001,1,1,0,0), 'foo', 1.0, 1,
+ Timestamp('20010104'), '20010105'],dtype='O')
+ s.convert_objects(convert_dates='coerce')
+
+**Upcasting Gotchas**
+
+Performing indexing operations on integer type data can easily upcast the data.
+The dtype of the input data will be preserved in cases where ``nans`` are not introduced (coming soon).
+
+.. ipython:: python
+
+ dfi = df3.astype('int32')
+ dfi['D'] = dfi['D'].astype('int64')
+ dfi
+ dfi.dtypes
+
+ casted = dfi[dfi>0]
+ casted
+ casted.dtypes
+
+While float dtypes are unchanged.
+
+.. ipython:: python
+
+ df4 = df3.copy()
+ df4['A'] = df4['A'].astype('float32')
+ df4.dtypes
+
+ casted = df4[df4>0]
+ casted
+ casted.dtypes
+
+New features
+~~~~~~~~~~~~
+
+**Enhancements**
+
+**Bug Fixes**
+
+See the `full release notes
+<https://github.com/pydata/pandas/blob/master/RELEASE.rst>`__ or issue tracker
+on GitHub for a complete list.
+
diff --git a/doc/source/whatsnew.rst b/doc/source/whatsnew.rst
index 6c125c45a2599..646610ecccd88 100644
--- a/doc/source/whatsnew.rst
+++ b/doc/source/whatsnew.rst
@@ -16,6 +16,8 @@ What's New
These are new features and improvements of note in each release.
+.. include:: v0.10.2.txt
+
.. include:: v0.10.1.txt
.. include:: v0.10.0.txt
diff --git a/pandas/algos.pyx b/pandas/algos.pyx
index 0d7006f08111b..40c8cabe3cb9a 100644
--- a/pandas/algos.pyx
+++ b/pandas/algos.pyx
@@ -12,23 +12,35 @@ cimport util
from libc.stdlib cimport malloc, free
+from numpy cimport NPY_INT8 as NPY_int8
+from numpy cimport NPY_INT16 as NPY_int16
from numpy cimport NPY_INT32 as NPY_int32
from numpy cimport NPY_INT64 as NPY_int64
+from numpy cimport NPY_FLOAT16 as NPY_float16
from numpy cimport NPY_FLOAT32 as NPY_float32
from numpy cimport NPY_FLOAT64 as NPY_float64
+int8 = np.dtype(np.int8)
+int16 = np.dtype(np.int16)
int32 = np.dtype(np.int32)
int64 = np.dtype(np.int64)
+float16 = np.dtype(np.float16)
float32 = np.dtype(np.float32)
float64 = np.dtype(np.float64)
+cdef np.int8_t MINint8 = np.iinfo(np.int8).min
+cdef np.int16_t MINint16 = np.iinfo(np.int16).min
cdef np.int32_t MINint32 = np.iinfo(np.int32).min
cdef np.int64_t MINint64 = np.iinfo(np.int64).min
+cdef np.float16_t MINfloat16 = np.NINF
cdef np.float32_t MINfloat32 = np.NINF
cdef np.float64_t MINfloat64 = np.NINF
+cdef np.int8_t MAXint8 = np.iinfo(np.int8).max
+cdef np.int16_t MAXint16 = np.iinfo(np.int16).max
cdef np.int32_t MAXint32 = np.iinfo(np.int32).max
cdef np.int64_t MAXint64 = np.iinfo(np.int64).max
+cdef np.float16_t MAXfloat16 = np.inf
cdef np.float32_t MAXfloat32 = np.inf
cdef np.float64_t MAXfloat64 = np.inf
@@ -615,141 +627,6 @@ def rank_2d_generic(object in_arr, axis=0, ties_method='average',
# return result
-
-@cython.wraparound(False)
-@cython.boundscheck(False)
-def diff_2d_float64(ndarray[float64_t, ndim=2] arr,
- ndarray[float64_t, ndim=2] out,
- Py_ssize_t periods, int axis):
- cdef:
- Py_ssize_t i, j, sx, sy
-
- sx, sy = (<object> arr).shape
- if arr.flags.f_contiguous:
- if axis == 0:
- if periods >= 0:
- start, stop = periods, sx
- else:
- start, stop = 0, sx + periods
- for j in range(sy):
- for i in range(start, stop):
- out[i, j] = arr[i, j] - arr[i - periods, j]
- else:
- if periods >= 0:
- start, stop = periods, sy
- else:
- start, stop = 0, sy + periods
- for j in range(start, stop):
- for i in range(sx):
- out[i, j] = arr[i, j] - arr[i, j - periods]
- else:
- if axis == 0:
- if periods >= 0:
- start, stop = periods, sx
- else:
- start, stop = 0, sx + periods
- for i in range(start, stop):
- for j in range(sy):
- out[i, j] = arr[i, j] - arr[i - periods, j]
- else:
- if periods >= 0:
- start, stop = periods, sy
- else:
- start, stop = 0, sy + periods
- for i in range(sx):
- for j in range(start, stop):
- out[i, j] = arr[i, j] - arr[i, j - periods]
-
-@cython.wraparound(False)
-@cython.boundscheck(False)
-def diff_2d_int64(ndarray[int64_t, ndim=2] arr,
- ndarray[float64_t, ndim=2] out,
- Py_ssize_t periods, int axis):
- cdef:
- Py_ssize_t i, j, sx, sy
-
- sx, sy = (<object> arr).shape
- if arr.flags.f_contiguous:
- if axis == 0:
- if periods >= 0:
- start, stop = periods, sx
- else:
- start, stop = 0, sx + periods
- for j in range(sy):
- for i in range(start, stop):
- out[i, j] = arr[i, j] - arr[i - periods, j]
- else:
- if periods >= 0:
- start, stop = periods, sy
- else:
- start, stop = 0, sy + periods
- for j in range(start, stop):
- for i in range(sx):
- out[i, j] = arr[i, j] - arr[i, j - periods]
- else:
- if axis == 0:
- if periods >= 0:
- start, stop = periods, sx
- else:
- start, stop = 0, sx + periods
- for i in range(start, stop):
- for j in range(sy):
- out[i, j] = arr[i, j] - arr[i - periods, j]
- else:
- if periods >= 0:
- start, stop = periods, sy
- else:
- start, stop = 0, sy + periods
- for i in range(sx):
- for j in range(start, stop):
- out[i, j] = arr[i, j] - arr[i, j - periods]
-
-
-@cython.wraparound(False)
-@cython.boundscheck(False)
-def diff_2d_int32(ndarray[int64_t, ndim=2] arr,
- ndarray[float64_t, ndim=2] out,
- Py_ssize_t periods, int axis):
- cdef:
- Py_ssize_t i, j, sx, sy
-
- sx, sy = (<object> arr).shape
- if arr.flags.f_contiguous:
- if axis == 0:
- if periods >= 0:
- start, stop = periods, sx
- else:
- start, stop = 0, sx + periods
- for j in range(sy):
- for i in range(start, stop):
- out[i, j] = arr[i, j] - arr[i - periods, j]
- else:
- if periods >= 0:
- start, stop = periods, sy
- else:
- start, stop = 0, sy + periods
- for j in range(start, stop):
- for i in range(sx):
- out[i, j] = arr[i, j] - arr[i, j - periods]
- else:
- if axis == 0:
- if periods >= 0:
- start, stop = periods, sx
- else:
- start, stop = 0, sx + periods
- for i in range(start, stop):
- for j in range(sy):
- out[i, j] = arr[i, j] - arr[i - periods, j]
- else:
- if periods >= 0:
- start, stop = periods, sy
- else:
- start, stop = 0, sy + periods
- for i in range(sx):
- for j in range(start, stop):
- out[i, j] = arr[i, j] - arr[i, j - periods]
-
-
# Cython implementations of rolling sum, mean, variance, skewness,
# other statistical moment functions
#
@@ -1931,161 +1808,9 @@ def groupsort_indexer(ndarray[int64_t] index, Py_ssize_t ngroups):
return result, counts
# TODO: aggregate multiple columns in single pass
-
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def group_add(ndarray[float64_t, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] labels):
- '''
- Only aggregates on axis=0
- '''
- cdef:
- Py_ssize_t i, j, N, K, lab
- float64_t val, count
- ndarray[float64_t, ndim=2] sumx, nobs
-
- nobs = np.zeros_like(out)
- sumx = np.zeros_like(out)
-
- N, K = (<object> values).shape
-
- if K > 1:
- for i in range(N):
- lab = labels[i]
- if lab < 0:
- continue
-
- counts[lab] += 1
- for j in range(K):
- val = values[i, j]
-
- # not nan
- if val == val:
- nobs[lab, j] += 1
- sumx[lab, j] += val
- else:
- for i in range(N):
- lab = labels[i]
- if lab < 0:
- continue
-
- counts[lab] += 1
- val = values[i, 0]
-
- # not nan
- if val == val:
- nobs[lab, 0] += 1
- sumx[lab, 0] += val
-
- for i in range(len(counts)):
- for j in range(K):
- if nobs[i, j] == 0:
- out[i, j] = nan
- else:
- out[i, j] = sumx[i, j]
-
-
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def group_prod(ndarray[float64_t, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] labels):
- '''
- Only aggregates on axis=0
- '''
- cdef:
- Py_ssize_t i, j, N, K, lab
- float64_t val, count
- ndarray[float64_t, ndim=2] prodx, nobs
-
- nobs = np.zeros_like(out)
- prodx = np.ones_like(out)
-
- N, K = (<object> values).shape
-
- if K > 1:
- for i in range(N):
- lab = labels[i]
- if lab < 0:
- continue
-
- counts[lab] += 1
- for j in range(K):
- val = values[i, j]
-
- # not nan
- if val == val:
- nobs[lab, j] += 1
- prodx[lab, j] *= val
- else:
- for i in range(N):
- lab = labels[i]
- if lab < 0:
- continue
-
- counts[lab] += 1
- val = values[i, 0]
-
- # not nan
- if val == val:
- nobs[lab, 0] += 1
- prodx[lab, 0] *= val
-
- for i in range(len(counts)):
- for j in range(K):
- if nobs[i, j] == 0:
- out[i, j] = nan
- else:
- out[i, j] = prodx[i, j]
-
#----------------------------------------------------------------------
# first, nth, last
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def group_nth(ndarray[float64_t, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] labels, int64_t rank):
- '''
- Only aggregates on axis=0
- '''
- cdef:
- Py_ssize_t i, j, N, K, lab
- float64_t val, count
- ndarray[float64_t, ndim=2] resx
- ndarray[int64_t, ndim=2] nobs
-
- nobs = np.zeros((<object> out).shape, dtype=np.int64)
- resx = np.empty_like(out)
-
- N, K = (<object> values).shape
-
- for i in range(N):
- lab = labels[i]
- if lab < 0:
- continue
-
- counts[lab] += 1
- for j in range(K):
- val = values[i, j]
-
- # not nan
- if val == val:
- nobs[lab, j] += 1
- if nobs[lab, j] == rank:
- resx[lab, j] = val
-
- for i in range(len(counts)):
- for j in range(K):
- if nobs[i, j] == 0:
- out[i, j] = nan
- else:
- out[i, j] = resx[i, j]
-
@cython.boundscheck(False)
@cython.wraparound(False)
def group_nth_object(ndarray[object, ndim=2] out,
@@ -2130,52 +1855,6 @@ def group_nth_object(ndarray[object, ndim=2] out,
else:
out[i, j] = resx[i, j]
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def group_nth_bin(ndarray[float64_t, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] bins, int64_t rank):
- '''
- Only aggregates on axis=0
- '''
- cdef:
- Py_ssize_t i, j, N, K, ngroups, b
- float64_t val, count
- ndarray[float64_t, ndim=2] resx, nobs
-
- nobs = np.zeros_like(out)
- resx = np.empty_like(out)
-
- if bins[len(bins) - 1] == len(values):
- ngroups = len(bins)
- else:
- ngroups = len(bins) + 1
-
- N, K = (<object> values).shape
-
- b = 0
- for i in range(N):
- while b < ngroups - 1 and i >= bins[b]:
- b += 1
-
- counts[b] += 1
- for j in range(K):
- val = values[i, j]
-
- # not nan
- if val == val:
- nobs[b, j] += 1
- if nobs[b, j] == rank:
- resx[b, j] = val
-
- for i in range(ngroups):
- for j in range(K):
- if nobs[i, j] == 0:
- out[i, j] = nan
- else:
- out[i, j] = resx[i, j]
-
@cython.boundscheck(False)
@cython.wraparound(False)
def group_nth_bin_object(ndarray[object, ndim=2] out,
@@ -2224,47 +1903,6 @@ def group_nth_bin_object(ndarray[object, ndim=2] out,
else:
out[i, j] = resx[i, j]
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def group_last(ndarray[float64_t, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] labels):
- '''
- Only aggregates on axis=0
- '''
- cdef:
- Py_ssize_t i, j, N, K, lab
- float64_t val, count
- ndarray[float64_t, ndim=2] resx
- ndarray[int64_t, ndim=2] nobs
-
- nobs = np.zeros((<object> out).shape, dtype=np.int64)
- resx = np.empty_like(out)
-
- N, K = (<object> values).shape
-
- for i in range(N):
- lab = labels[i]
- if lab < 0:
- continue
-
- counts[lab] += 1
- for j in range(K):
- val = values[i, j]
-
- # not nan
- if val == val:
- nobs[lab, j] += 1
- resx[lab, j] = val
-
- for i in range(len(counts)):
- for j in range(K):
- if nobs[i, j] == 0:
- out[i, j] = nan
- else:
- out[i, j] = resx[i, j]
-
@cython.boundscheck(False)
@cython.wraparound(False)
def group_last_object(ndarray[object, ndim=2] out,
@@ -2307,52 +1945,6 @@ def group_last_object(ndarray[object, ndim=2] out,
else:
out[i, j] = resx[i, j]
-
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def group_last_bin(ndarray[float64_t, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] bins):
- '''
- Only aggregates on axis=0
- '''
- cdef:
- Py_ssize_t i, j, N, K, ngroups, b
- float64_t val, count
- ndarray[float64_t, ndim=2] resx, nobs
-
- nobs = np.zeros_like(out)
- resx = np.empty_like(out)
-
- if bins[len(bins) - 1] == len(values):
- ngroups = len(bins)
- else:
- ngroups = len(bins) + 1
-
- N, K = (<object> values).shape
-
- b = 0
- for i in range(N):
- while b < ngroups - 1 and i >= bins[b]:
- b += 1
-
- counts[b] += 1
- for j in range(K):
- val = values[i, j]
-
- # not nan
- if val == val:
- nobs[b, j] += 1
- resx[b, j] = val
-
- for i in range(ngroups):
- for j in range(K):
- if nobs[i, j] == 0:
- out[i, j] = nan
- else:
- out[i, j] = resx[i, j]
-
@cython.boundscheck(False)
@cython.wraparound(False)
def group_last_bin_object(ndarray[object, ndim=2] out,
@@ -2400,183 +1992,15 @@ def group_last_bin_object(ndarray[object, ndim=2] out,
else:
out[i, j] = resx[i, j]
-#----------------------------------------------------------------------
-# group_min, group_max
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def group_min(ndarray[float64_t, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] labels):
- '''
- Only aggregates on axis=0
- '''
- cdef:
- Py_ssize_t i, j, N, K, lab
- float64_t val, count
- ndarray[float64_t, ndim=2] minx, nobs
-
- nobs = np.zeros_like(out)
-
- minx = np.empty_like(out)
- minx.fill(np.inf)
-
- N, K = (<object> values).shape
-
- if K > 1:
- for i in range(N):
- lab = labels[i]
- if lab < 0:
- continue
-
- counts[lab] += 1
- for j in range(K):
- val = values[i, j]
-
- # not nan
- if val == val:
- nobs[lab, j] += 1
- if val < minx[lab, j]:
- minx[lab, j] = val
- else:
- for i in range(N):
- lab = labels[i]
- if lab < 0:
- continue
-
- counts[lab] += 1
- val = values[i, 0]
-
- # not nan
- if val == val:
- nobs[lab, 0] += 1
- if val < minx[lab, 0]:
- minx[lab, 0] = val
-
- for i in range(len(counts)):
- for j in range(K):
- if nobs[i, j] == 0:
- out[i, j] = nan
- else:
- out[i, j] = minx[i, j]
-
-
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def group_max(ndarray[float64_t, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] labels):
- '''
- Only aggregates on axis=0
- '''
- cdef:
- Py_ssize_t i, j, N, K, lab
- float64_t val, count
- ndarray[float64_t, ndim=2] maxx, nobs
-
- nobs = np.zeros_like(out)
-
- maxx = np.empty_like(out)
- maxx.fill(-np.inf)
-
- N, K = (<object> values).shape
-
- if K > 1:
- for i in range(N):
- lab = labels[i]
- if lab < 0:
- continue
-
- counts[lab] += 1
- for j in range(K):
- val = values[i, j]
-
- # not nan
- if val == val:
- nobs[lab, j] += 1
- if val > maxx[lab, j]:
- maxx[lab, j] = val
- else:
- for i in range(N):
- lab = labels[i]
- if lab < 0:
- continue
-
- counts[lab] += 1
- val = values[i, 0]
-
- # not nan
- if val == val:
- nobs[lab, 0] += 1
- if val > maxx[lab, 0]:
- maxx[lab, 0] = val
-
- for i in range(len(counts)):
- for j in range(K):
- if nobs[i, j] == 0:
- out[i, j] = nan
- else:
- out[i, j] = maxx[i, j]
-
-
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def group_mean(ndarray[float64_t, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] labels):
- cdef:
- Py_ssize_t i, j, N, K, lab
- float64_t val, count
- ndarray[float64_t, ndim=2] sumx, nobs
-
- nobs = np.zeros_like(out)
- sumx = np.zeros_like(out)
-
- N, K = (<object> values).shape
-
- if K > 1:
- for i in range(N):
- lab = labels[i]
- if lab < 0:
- continue
-
- counts[lab] += 1
- for j in range(K):
- val = values[i, j]
- # not nan
- if val == val:
- nobs[lab, j] += 1
- sumx[lab, j] += val
- else:
- for i in range(N):
- lab = labels[i]
- if lab < 0:
- continue
-
- counts[lab] += 1
- val = values[i, 0]
- # not nan
- if val == val:
- nobs[lab, 0] += 1
- sumx[lab, 0] += val
-
- for i in range(len(counts)):
- for j in range(K):
- count = nobs[i, j]
- if nobs[i, j] == 0:
- out[i, j] = nan
- else:
- out[i, j] = sumx[i, j] / count
-
-
-def group_median(ndarray[float64_t, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] labels):
+#----------------------------------------------------------------------
+# median
+
+def group_median(ndarray[float64_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float64_t, ndim=2] values,
+ ndarray[int64_t] labels):
'''
Only aggregates on axis=0
'''
@@ -2642,497 +2066,5 @@ cdef inline float64_t _median_linear(float64_t* a, int n):
return result
-
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def group_var(ndarray[float64_t, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] labels):
- cdef:
- Py_ssize_t i, j, N, K, lab
- float64_t val, ct
- ndarray[float64_t, ndim=2] nobs, sumx, sumxx
-
- nobs = np.zeros_like(out)
- sumx = np.zeros_like(out)
- sumxx = np.zeros_like(out)
-
- N, K = (<object> values).shape
-
- if K > 1:
- for i in range(N):
-
- lab = labels[i]
- if lab < 0:
- continue
-
- counts[lab] += 1
-
- for j in range(K):
- val = values[i, j]
-
- # not nan
- if val == val:
- nobs[lab, j] += 1
- sumx[lab, j] += val
- sumxx[lab, j] += val * val
- else:
- for i in range(N):
-
- lab = labels[i]
- if lab < 0:
- continue
-
- counts[lab] += 1
- val = values[i, 0]
- # not nan
- if val == val:
- nobs[lab, 0] += 1
- sumx[lab, 0] += val
- sumxx[lab, 0] += val * val
-
-
- for i in range(len(counts)):
- for j in range(K):
- ct = nobs[i, j]
- if ct < 2:
- out[i, j] = nan
- else:
- out[i, j] = ((ct * sumxx[i, j] - sumx[i, j] * sumx[i, j]) /
- (ct * ct - ct))
-# add passing bin edges, instead of labels
-
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def group_add_bin(ndarray[float64_t, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] bins):
- '''
- Only aggregates on axis=0
- '''
- cdef:
- Py_ssize_t i, j, N, K, ngroups, b, nbins
- float64_t val, count
- ndarray[float64_t, ndim=2] sumx, nobs
-
- nobs = np.zeros_like(out)
- sumx = np.zeros_like(out)
-
- if bins[len(bins) - 1] == len(values):
- ngroups = len(bins)
- else:
- ngroups = len(bins) + 1
- N, K = (<object> values).shape
-
- b = 0
- if K > 1:
- for i in range(N):
- while b < ngroups - 1 and i >= bins[b]:
- b += 1
-
- counts[b] += 1
- for j in range(K):
- val = values[i, j]
-
- # not nan
- if val == val:
- nobs[b, j] += 1
- sumx[b, j] += val
- else:
- for i in range(N):
- while b < ngroups - 1 and i >= bins[b]:
- b += 1
-
- counts[b] += 1
- val = values[i, 0]
-
- # not nan
- if val == val:
- nobs[b, 0] += 1
- sumx[b, 0] += val
-
- for i in range(ngroups):
- for j in range(K):
- if nobs[i, j] == 0:
- out[i, j] = nan
- else:
- out[i, j] = sumx[i, j]
-
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def group_prod_bin(ndarray[float64_t, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] bins):
- '''
- Only aggregates on axis=0
- '''
- cdef:
- Py_ssize_t i, j, N, K, ngroups, b
- float64_t val, count
- ndarray[float64_t, ndim=2] prodx, nobs
-
- nobs = np.zeros_like(out)
- prodx = np.ones_like(out)
-
- if bins[len(bins) - 1] == len(values):
- ngroups = len(bins)
- else:
- ngroups = len(bins) + 1
- N, K = (<object> values).shape
-
- b = 0
- if K > 1:
- for i in range(N):
- while b < ngroups - 1 and i >= bins[b]:
- b += 1
-
- counts[b] += 1
- for j in range(K):
- val = values[i, j]
-
- # not nan
- if val == val:
- nobs[b, j] += 1
- prodx[b, j] *= val
- else:
- for i in range(N):
- while b < ngroups - 1 and i >= bins[b]:
- b += 1
-
- counts[b] += 1
- val = values[i, 0]
-
- # not nan
- if val == val:
- nobs[b, 0] += 1
- prodx[b, 0] *= val
-
- for i in range(ngroups):
- for j in range(K):
- if nobs[i, j] == 0:
- out[i, j] = nan
- else:
- out[i, j] = prodx[i, j]
-
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def group_min_bin(ndarray[float64_t, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] bins):
- '''
- Only aggregates on axis=0
- '''
- cdef:
- Py_ssize_t i, j, N, K, ngroups, b
- float64_t val, count
- ndarray[float64_t, ndim=2] minx, nobs
-
- nobs = np.zeros_like(out)
-
- minx = np.empty_like(out)
- minx.fill(np.inf)
-
- if bins[len(bins) - 1] == len(values):
- ngroups = len(bins)
- else:
- ngroups = len(bins) + 1
-
- N, K = (<object> values).shape
-
- b = 0
- if K > 1:
- for i in range(N):
- while b < ngroups - 1 and i >= bins[b]:
- b += 1
-
- counts[b] += 1
- for j in range(K):
- val = values[i, j]
-
- # not nan
- if val == val:
- nobs[b, j] += 1
- if val < minx[b, j]:
- minx[b, j] = val
- else:
- for i in range(N):
- while b < ngroups - 1 and i >= bins[b]:
- b += 1
-
- counts[b] += 1
- val = values[i, 0]
-
- # not nan
- if val == val:
- nobs[b, 0] += 1
- if val < minx[b, 0]:
- minx[b, 0] = val
-
- for i in range(ngroups):
- for j in range(K):
- if nobs[i, j] == 0:
- out[i, j] = nan
- else:
- out[i, j] = minx[i, j]
-
-
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def group_max_bin(ndarray[float64_t, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] bins):
- '''
- Only aggregates on axis=0
- '''
- cdef:
- Py_ssize_t i, j, N, K, ngroups, b
- float64_t val, count
- ndarray[float64_t, ndim=2] maxx, nobs
-
- nobs = np.zeros_like(out)
- maxx = np.empty_like(out)
- maxx.fill(-np.inf)
-
- if bins[len(bins) - 1] == len(values):
- ngroups = len(bins)
- else:
- ngroups = len(bins) + 1
-
- N, K = (<object> values).shape
-
- b = 0
- if K > 1:
- for i in range(N):
- while b < ngroups - 1 and i >= bins[b]:
- b += 1
-
- counts[b] += 1
- for j in range(K):
- val = values[i, j]
-
- # not nan
- if val == val:
- nobs[b, j] += 1
- if val > maxx[b, j]:
- maxx[b, j] = val
- else:
- for i in range(N):
- while b < ngroups - 1 and i >= bins[b]:
- b += 1
-
- counts[b] += 1
- val = values[i, 0]
-
- # not nan
- if val == val:
- nobs[b, 0] += 1
- if val > maxx[b, 0]:
- maxx[b, 0] = val
-
- for i in range(ngroups):
- for j in range(K):
- if nobs[i, j] == 0:
- out[i, j] = nan
- else:
- out[i, j] = maxx[i, j]
-
-
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def group_ohlc(ndarray[float64_t, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] bins):
- '''
- Only aggregates on axis=0
- '''
- cdef:
- Py_ssize_t i, j, N, K, ngroups, b
- float64_t val, count
- float64_t vopen, vhigh, vlow, vclose, NA
- bint got_first = 0
-
- if bins[len(bins) - 1] == len(values):
- ngroups = len(bins)
- else:
- ngroups = len(bins) + 1
-
- N, K = (<object> values).shape
-
- if out.shape[1] != 4:
- raise ValueError('Output array must have 4 columns')
-
- NA = np.nan
-
- b = 0
- if K > 1:
- raise NotImplementedError
- else:
- for i in range(N):
- while b < ngroups - 1 and i >= bins[b]:
- if not got_first:
- out[b, 0] = NA
- out[b, 1] = NA
- out[b, 2] = NA
- out[b, 3] = NA
- else:
- out[b, 0] = vopen
- out[b, 1] = vhigh
- out[b, 2] = vlow
- out[b, 3] = vclose
- b += 1
- got_first = 0
-
- counts[b] += 1
- val = values[i, 0]
-
- # not nan
- if val == val:
- if not got_first:
- got_first = 1
- vopen = val
- vlow = val
- vhigh = val
- else:
- if val < vlow:
- vlow = val
- if val > vhigh:
- vhigh = val
- vclose = val
-
- if not got_first:
- out[b, 0] = NA
- out[b, 1] = NA
- out[b, 2] = NA
- out[b, 3] = NA
- else:
- out[b, 0] = vopen
- out[b, 1] = vhigh
- out[b, 2] = vlow
- out[b, 3] = vclose
-
-
-# @cython.boundscheck(False)
-# @cython.wraparound(False)
-def group_mean_bin(ndarray[float64_t, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] bins):
- cdef:
- Py_ssize_t i, j, N, K, ngroups, b
- float64_t val, count
- ndarray[float64_t, ndim=2] sumx, nobs
-
- nobs = np.zeros_like(out)
- sumx = np.zeros_like(out)
-
- N, K = (<object> values).shape
- if bins[len(bins) - 1] == len(values):
- ngroups = len(bins)
- else:
- ngroups = len(bins) + 1
-
- b = 0
- if K > 1:
- for i in range(N):
- while b < ngroups - 1 and i >= bins[b]:
- b += 1
-
- counts[b] += 1
- for j in range(K):
- val = values[i, j]
-
- # not nan
- if val == val:
- nobs[b, j] += 1
- sumx[b, j] += val
- else:
- for i in range(N):
- while b < ngroups - 1 and i >= bins[b]:
- b += 1
-
- counts[b] += 1
- val = values[i, 0]
-
- # not nan
- if val == val:
- nobs[b, 0] += 1
- sumx[b, 0] += val
-
- for i in range(ngroups):
- for j in range(K):
- count = nobs[i, j]
- if nobs[i, j] == 0:
- out[i, j] = nan
- else:
- out[i, j] = sumx[i, j] / count
-
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def group_var_bin(ndarray[float64_t, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] bins):
-
- cdef:
- Py_ssize_t i, j, N, K, ngroups, b
- float64_t val, ct
- ndarray[float64_t, ndim=2] nobs, sumx, sumxx
-
- nobs = np.zeros_like(out)
- sumx = np.zeros_like(out)
- sumxx = np.zeros_like(out)
-
- if bins[len(bins) - 1] == len(values):
- ngroups = len(bins)
- else:
- ngroups = len(bins) + 1
-
- N, K = (<object> values).shape
-
- b = 0
- if K > 1:
- for i in range(N):
- while b < ngroups - 1 and i >= bins[b]:
- b += 1
-
- counts[b] += 1
-
- for j in range(K):
- val = values[i, j]
-
- # not nan
- if val == val:
- nobs[b, j] += 1
- sumx[b, j] += val
- sumxx[b, j] += val * val
- else:
- for i in range(N):
- while b < ngroups - 1 and i >= bins[b]:
- b += 1
-
- counts[b] += 1
- val = values[i, 0]
-
- # not nan
- if val == val:
- nobs[b, 0] += 1
- sumx[b, 0] += val
- sumxx[b, 0] += val * val
-
- for i in range(ngroups):
- for j in range(K):
- ct = nobs[i, j]
- if ct < 2:
- out[i, j] = nan
- else:
- out[i, j] = ((ct * sumxx[i, j] - sumx[i, j] * sumx[i, j]) /
- (ct * ct - ct))
-
include "join.pyx"
include "generated.pyx"
diff --git a/pandas/core/common.py b/pandas/core/common.py
index b3d996ffd0606..c99fd87f7a643 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -256,6 +256,9 @@ def wrapper(arr, indexer, out, fill_value=np.nan):
_take1d_dict = {
'float64': algos.take_1d_float64,
+ 'float32': algos.take_1d_float32,
+ 'int8': algos.take_1d_int8,
+ 'int16': algos.take_1d_int16,
'int32': algos.take_1d_int32,
'int64': algos.take_1d_int64,
'object': algos.take_1d_object,
@@ -266,6 +269,9 @@ def wrapper(arr, indexer, out, fill_value=np.nan):
_take2d_axis0_dict = {
'float64': algos.take_2d_axis0_float64,
+ 'float32': algos.take_2d_axis0_float32,
+ 'int8': algos.take_2d_axis0_int8,
+ 'int16': algos.take_2d_axis0_int16,
'int32': algos.take_2d_axis0_int32,
'int64': algos.take_2d_axis0_int64,
'object': algos.take_2d_axis0_object,
@@ -276,6 +282,9 @@ def wrapper(arr, indexer, out, fill_value=np.nan):
_take2d_axis1_dict = {
'float64': algos.take_2d_axis1_float64,
+ 'float32': algos.take_2d_axis1_float32,
+ 'int8': algos.take_2d_axis1_int8,
+ 'int16': algos.take_2d_axis1_int16,
'int32': algos.take_2d_axis1_int32,
'int64': algos.take_2d_axis1_int64,
'object': algos.take_2d_axis1_object,
@@ -286,6 +295,9 @@ def wrapper(arr, indexer, out, fill_value=np.nan):
_take2d_multi_dict = {
'float64': algos.take_2d_multi_float64,
+ 'float32': algos.take_2d_multi_float32,
+ 'int8': algos.take_2d_multi_int8,
+ 'int16': algos.take_2d_multi_int16,
'int32': algos.take_2d_multi_int32,
'int64': algos.take_2d_multi_int64,
'object': algos.take_2d_multi_object,
@@ -294,6 +306,8 @@ def wrapper(arr, indexer, out, fill_value=np.nan):
na_override=tslib.iNaT),
}
+_dtypes_no_na = set(['int8','int16','int32', 'int64', 'bool'])
+_dtypes_na = set(['float32', 'float64', 'object', 'datetime64[ns]'])
def _get_take2d_function(dtype_str, axis=0):
if axis == 0:
@@ -319,7 +333,7 @@ def take_1d(arr, indexer, out=None, fill_value=np.nan):
out_passed = out is not None
take_f = _take1d_dict.get(dtype_str)
- if dtype_str in ('int32', 'int64', 'bool'):
+ if dtype_str in _dtypes_no_na:
try:
if out is None:
out = np.empty(n, dtype=arr.dtype)
@@ -337,7 +351,7 @@ def take_1d(arr, indexer, out=None, fill_value=np.nan):
out.dtype)
out = _maybe_upcast(out)
np.putmask(out, mask, fill_value)
- elif dtype_str in ('float64', 'object', 'datetime64[ns]'):
+ elif dtype_str in _dtypes_na:
if out is None:
out = np.empty(n, dtype=arr.dtype)
take_f(arr, _ensure_int64(indexer), out=out, fill_value=fill_value)
@@ -360,7 +374,7 @@ def take_2d_multi(arr, row_idx, col_idx, fill_value=np.nan, out=None):
out_shape = len(row_idx), len(col_idx)
- if dtype_str in ('int32', 'int64', 'bool'):
+ if dtype_str in _dtypes_no_na:
row_mask = row_idx == -1
col_mask = col_idx == -1
needs_masking = row_mask.any() or col_mask.any()
@@ -376,7 +390,7 @@ def take_2d_multi(arr, row_idx, col_idx, fill_value=np.nan, out=None):
_ensure_int64(col_idx), out=out,
fill_value=fill_value)
return out
- elif dtype_str in ('float64', 'object', 'datetime64[ns]'):
+ elif dtype_str in _dtypes_na:
if out is None:
out = np.empty(out_shape, dtype=arr.dtype)
take_f = _get_take2d_function(dtype_str, axis='multi')
@@ -405,7 +419,7 @@ def take_2d(arr, indexer, out=None, mask=None, needs_masking=None, axis=0,
if not isinstance(indexer, np.ndarray):
indexer = np.array(indexer, dtype=np.int64)
- if dtype_str in ('int32', 'int64', 'bool'):
+ if dtype_str in _dtypes_no_na:
if mask is None:
mask = indexer == -1
needs_masking = mask.any()
@@ -423,7 +437,7 @@ def take_2d(arr, indexer, out=None, mask=None, needs_masking=None, axis=0,
take_f = _get_take2d_function(dtype_str, axis=axis)
take_f(arr, _ensure_int64(indexer), out=out, fill_value=fill_value)
return out
- elif dtype_str in ('float64', 'object', 'datetime64[ns]'):
+ elif dtype_str in _dtypes_na:
if out is None:
out = np.empty(out_shape, dtype=arr.dtype)
take_f = _get_take2d_function(dtype_str, axis=axis)
@@ -457,8 +471,11 @@ def mask_out_axis(arr, mask, axis, fill_value=np.nan):
_diff_special = {
'float64': algos.diff_2d_float64,
+ 'float32': algos.diff_2d_float32,
'int64': algos.diff_2d_int64,
- 'int32': algos.diff_2d_int32
+ 'int32': algos.diff_2d_int32,
+ 'int16': algos.diff_2d_int16,
+ 'int8': algos.diff_2d_int8,
}
@@ -548,14 +565,18 @@ def wrapper(arr, mask, limit=None):
def pad_1d(values, limit=None, mask=None):
+
+ dtype = values.dtype.name
+ _method = None
if is_float_dtype(values):
- _method = algos.pad_inplace_float64
+ _method = getattr(algos,'pad_inplace_%s' % dtype,None)
elif is_datetime64_dtype(values):
_method = _pad_1d_datetime
elif values.dtype == np.object_:
_method = algos.pad_inplace_object
- else: # pragma: no cover
- raise ValueError('Invalid dtype for padding')
+
+ if _method is None:
+ raise ValueError('Invalid dtype for pad_1d [%s]' % dtype)
if mask is None:
mask = isnull(values)
@@ -564,14 +585,18 @@ def pad_1d(values, limit=None, mask=None):
def backfill_1d(values, limit=None, mask=None):
+
+ dtype = values.dtype.name
+ _method = None
if is_float_dtype(values):
- _method = algos.backfill_inplace_float64
+ _method = getattr(algos,'backfill_inplace_%s' % dtype,None)
elif is_datetime64_dtype(values):
_method = _backfill_1d_datetime
elif values.dtype == np.object_:
_method = algos.backfill_inplace_object
- else: # pragma: no cover
- raise ValueError('Invalid dtype for padding')
+
+ if _method is None:
+ raise ValueError('Invalid dtype for backfill_1d [%s]' % dtype)
if mask is None:
mask = isnull(values)
@@ -581,14 +606,18 @@ def backfill_1d(values, limit=None, mask=None):
def pad_2d(values, limit=None, mask=None):
+
+ dtype = values.dtype.name
+ _method = None
if is_float_dtype(values):
- _method = algos.pad_2d_inplace_float64
+ _method = getattr(algos,'pad_2d_inplace_%s' % dtype,None)
elif is_datetime64_dtype(values):
_method = _pad_2d_datetime
elif values.dtype == np.object_:
_method = algos.pad_2d_inplace_object
- else: # pragma: no cover
- raise ValueError('Invalid dtype for padding')
+
+ if _method is None:
+ raise ValueError('Invalid dtype for pad_2d [%s]' % dtype)
if mask is None:
mask = isnull(values)
@@ -602,14 +631,18 @@ def pad_2d(values, limit=None, mask=None):
def backfill_2d(values, limit=None, mask=None):
+
+ dtype = values.dtype.name
+ _method = None
if is_float_dtype(values):
- _method = algos.backfill_2d_inplace_float64
+ _method = getattr(algos,'backfill_2d_inplace_%s' % dtype,None)
elif is_datetime64_dtype(values):
_method = _backfill_2d_datetime
elif values.dtype == np.object_:
_method = algos.backfill_2d_inplace_object
- else: # pragma: no cover
- raise ValueError('Invalid dtype for padding')
+
+ if _method is None:
+ raise ValueError('Invalid dtype for backfill_2d [%s]' % dtype)
if mask is None:
mask = isnull(values)
@@ -633,10 +666,43 @@ def _consensus_name_attr(objs):
# Lots of little utilities
-def _possibly_cast_to_datetime(value, dtype):
+def _possibly_convert_objects(values, convert_dates=True, convert_numeric=True):
+ """ if we have an object dtype, try to coerce dates and/or numbers """
+
+ if values.dtype == np.object_ and convert_dates:
+
+ # we take an aggressive stance and convert to datetime64[ns]
+ if convert_dates == 'coerce':
+ new_values = _possibly_cast_to_datetime(values, 'M8[ns]', coerce = True)
+
+ # if all values are NaN, keep the original values
+ if not isnull(new_values).all():
+ values = new_values
+
+ else:
+ values = lib.maybe_convert_objects(values, convert_datetime=convert_dates)
+
+ if values.dtype == np.object_ and convert_numeric:
+ try:
+ new_values = lib.maybe_convert_numeric(values,set(),coerce_numeric=True)
+
+ # if all values are NaN, keep the original values
+ if not isnull(new_values).all():
+ values = new_values
+
+ except:
+ pass
+
+ return values
+
+
+def _possibly_cast_to_datetime(value, dtype, coerce = False):
""" try to cast the array/value to a datetimelike dtype, converting float nan to iNaT """
- if dtype == 'M8[ns]':
+ if isinstance(dtype, basestring):
+ dtype = np.dtype(dtype)
+
+ if dtype is not None and is_datetime64_dtype(dtype):
if np.isscalar(value):
if value == tslib.iNaT or isnull(value):
value = tslib.iNaT
@@ -650,7 +716,7 @@ def _possibly_cast_to_datetime(value, dtype):
# we have an array of datetime & nulls
elif np.prod(value.shape):
try:
- value = tslib.array_to_datetime(value)
+ value = tslib.array_to_datetime(value, coerce = coerce)
except:
pass
@@ -1001,6 +1067,8 @@ def _is_int_or_datetime_dtype(arr_or_dtype):
def is_datetime64_dtype(arr_or_dtype):
if isinstance(arr_or_dtype, np.dtype):
tipo = arr_or_dtype.type
+ elif isinstance(arr_or_dtype, type):
+ tipo = np.dtype(arr_or_dtype).type
else:
tipo = arr_or_dtype.dtype.type
return issubclass(tipo, np.datetime64)
@@ -1026,13 +1094,17 @@ def _is_sequence(x):
return False
_ensure_float64 = algos.ensure_float64
+_ensure_float32 = algos.ensure_float32
_ensure_int64 = algos.ensure_int64
_ensure_int32 = algos.ensure_int32
+_ensure_int16 = algos.ensure_int16
+_ensure_int8 = algos.ensure_int8
_ensure_platform_int = algos.ensure_platform_int
_ensure_object = algos.ensure_object
-def _astype_nansafe(arr, dtype):
+def _astype_nansafe(arr, dtype, copy = True):
+ """ cast the array to dtype; return a view rather than a copy if copy is False """
if not isinstance(dtype, np.dtype):
dtype = np.dtype(dtype)
@@ -1048,7 +1120,9 @@ def _astype_nansafe(arr, dtype):
# work around NumPy brokenness, #1987
return lib.astype_intsafe(arr.ravel(), dtype).reshape(arr.shape)
- return arr.astype(dtype)
+ if copy:
+ return arr.astype(dtype)
+ return arr.view(dtype)
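The `copy=False` path above returns a view rather than a converted copy. In plain numpy terms, `astype` allocates a new array and converts values, while `view` reinterprets the existing buffer under the new dtype with no copy at all, so writes propagate and same-width dtypes just see the raw bytes:

```python
import numpy as np

arr = np.arange(3, dtype="float64")

# astype always converts values into a freshly allocated array
copied = arr.astype("float64")
copied[0] = -1.0
assert arr[0] == 0.0            # the original is untouched

# view shares the same buffer: no copy is made
shared = arr.view("float64")
shared[1] = 99.0
assert arr[1] == 99.0           # writes are visible through the original

# for a dtype of the same itemsize, view reinterprets the raw bytes
bits = arr.view("int64")
assert bits[0] == 0             # float 0.0 and integer 0 share a bit pattern
```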
def _clean_fill_method(method):
diff --git a/pandas/core/format.py b/pandas/core/format.py
index 7fc9fbccced04..88b729349ca60 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -66,10 +66,11 @@
class SeriesFormatter(object):
def __init__(self, series, buf=None, header=True, length=True,
- na_rep='NaN', name=False, float_format=None):
+ na_rep='NaN', name=False, float_format=None, dtype=True):
self.series = series
self.buf = buf if buf is not None else StringIO(u"")
self.name = name
+ self.dtype = dtype
self.na_rep = na_rep
self.length = length
self.header = header
@@ -98,6 +99,12 @@ def _get_footer(self):
footer += ', '
footer += 'Length: %d' % len(self.series)
+ if self.dtype:
+ if getattr(self.series.dtype,'name',None):
+ if footer:
+ footer += ', '
+ footer += 'Dtype: %s' % com.pprint_thing(self.series.dtype.name)
+
return unicode(footer)
def _get_formatted_index(self):
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 301ea9d28d001..efb478df014ae 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -312,7 +312,10 @@ def f(self, other):
elif isinstance(other, Series):
return self._combine_series_infer(other, func)
else:
- return self._combine_const(other, func)
+
+            # straight boolean comparisons we want to allow all columns
+ # (regardless of dtype to pass thru)
+ return self._combine_const(other, func, raise_on_error = False).fillna(True).astype(bool)
f.__name__ = name
@@ -327,6 +330,7 @@ class DataFrame(NDFrame):
_auto_consolidate = True
_verbose_info = True
_het_axis = 1
+ _info_axis = 'columns'
_col_klass = Series
_AXIS_NUMBERS = {
@@ -1004,6 +1008,12 @@ def from_records(cls, data, index=None, exclude=None, columns=None,
arr_columns.append(k)
arrays.append(v)
+ # reorder according to the columns
+ if len(columns) and len(arr_columns):
+ indexer = _ensure_index(arr_columns).get_indexer(columns)
+ arr_columns = _ensure_index([ arr_columns[i] for i in indexer ])
+ arrays = [ arrays[i] for i in indexer ]
+
elif isinstance(data, (np.ndarray, DataFrame)):
arrays, columns = _to_arrays(data, columns)
if columns is not None:
@@ -1650,38 +1660,25 @@ def info(self, verbose=True, buf=None, max_cols=None):
def dtypes(self):
return self.apply(lambda x: x.dtype)
- def convert_objects(self, convert_dates=True):
+ def convert_objects(self, convert_dates=True, convert_numeric=True):
"""
Attempt to infer better dtype for object columns
+ Always returns a copy (even if no object columns)
+
+ Parameters
+ ----------
+    convert_dates : if True, attempt to soft-convert dates; if 'coerce', force conversion (and non-convertibles become NaT)
+    convert_numeric : if True, attempt to coerce to numbers (including strings); non-convertibles become NaN
Returns
-------
converted : DataFrame
"""
- new_data = {}
- convert_f = lambda x: lib.maybe_convert_objects(
- x, convert_datetime=convert_dates)
-
- # TODO: could be more efficient taking advantage of the block
- for col, s in self.iteritems():
- if s.dtype == np.object_:
- new_data[col] = convert_f(s)
- else:
- new_data[col] = s
-
- return DataFrame(new_data, index=self.index, columns=self.columns)
+ return self._constructor(self._data.convert(convert_dates=convert_dates, convert_numeric=convert_numeric))
def get_dtype_counts(self):
- counts = {}
- for i in range(len(self.columns)):
- series = self.icol(i)
- # endianness can cause dtypes to look different
- dtype_str = str(series.dtype)
- if dtype_str in counts:
- counts[dtype_str] += 1
- else:
- counts[dtype_str] = 1
- return Series(counts)
+ """ return the counts of dtypes in this frame """
+ return Series(dict([ (dtype, len(df.columns)) for dtype, df in self.blocks.iteritems() ]))
#----------------------------------------------------------------------
# properties for index and columns
@@ -1695,6 +1692,14 @@ def as_matrix(self, columns=None):
are presented in sorted order unless a specific list of columns is
provided.
+    NOTE: the dtype will be a lowest-common-denominator dtype (implicit upcasting);
+    that is to say, if the dtypes (even of numeric types) are mixed, the one that accommodates all of them will be chosen.
+    Use this with care if you are not dealing with the blocks.
+
+ e.g. if the dtypes are float16,float32 -> float32
+ float16,float32,float64 -> float64
+ int32,uint8 -> int32
+
Parameters
----------
columns : array-like
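The upcasting examples in the docstring above match numpy's own type promotion. As an illustration (using `np.result_type`, which implements the same lowest-common-denominator logic, though `as_matrix` reaches it through the block manager rather than this call):

```python
import numpy as np

# mixed float widths promote to the widest float present
assert np.result_type(np.float16, np.float32) == np.float32
assert np.result_type(np.float16, np.float32, np.float64) == np.float64

# a signed/unsigned mix promotes to a signed type wide enough for both
assert np.result_type(np.int32, np.uint8) == np.int32
```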
@@ -1711,6 +1716,33 @@ def as_matrix(self, columns=None):
values = property(fget=as_matrix)
+ def as_blocks(self, columns=None):
+ """
+        Convert the frame to a dict of dtype -> DataFrame, where each DataFrame has a homogeneous dtype.
+        Columns are presented in sorted order unless a specific list of columns is
+        provided.
+
+ NOTE: the dtypes of the blocks WILL BE PRESERVED HERE (unlike in as_matrix)
+
+ Parameters
+ ----------
+ columns : array-like
+ Specific column order
+
+ Returns
+ -------
+        values : a dict of dtype -> DataFrame
+ """
+ self._consolidate_inplace()
+
+ bd = dict()
+ for b in self._data.blocks:
+ b = b.reindex_items_from(columns or b.items)
+ bd[str(b.dtype)] = DataFrame(BlockManager([ b ], [ b.items, self.index ]))
+ return bd
+
+ blocks = property(fget=as_blocks)
+
def transpose(self):
"""
Returns a DataFrame with the rows/columns switched. If the DataFrame is
@@ -1964,7 +1996,7 @@ def __getitem__(self, key):
return self._getitem_multilevel(key)
elif isinstance(key, DataFrame):
if key.values.dtype == bool:
- return self.where(key)
+ return self.where(key, try_cast = False)
else:
raise ValueError('Cannot index using non-boolean DataFrame')
else:
@@ -3330,17 +3362,12 @@ def fillna(self, value=None, method=None, axis=0, inplace=False,
raise NotImplementedError()
return self.T.fillna(method=method, limit=limit).T
- new_blocks = []
method = com._clean_fill_method(method)
- for block in self._data.blocks:
- if block._can_hold_na:
- newb = block.interpolate(method, axis=axis,
- limit=limit, inplace=inplace)
- else:
- newb = block if inplace else block.copy()
- new_blocks.append(newb)
-
- new_data = BlockManager(new_blocks, self._data.axes)
+ new_data = self._data.interpolate(method = method,
+ axis = axis,
+ limit = limit,
+ inplace = inplace,
+ coerce = True)
else:
if method is not None:
raise ValueError('cannot specify both a fill method and value')
@@ -3443,8 +3470,8 @@ def replace(self, to_replace, value=None, method='pad', axis=0,
'in length. Expecting %d got %d ' %
(len(to_replace), len(value)))
- new_data = self._data if inplace else self.copy()._data
- new_data._replace_list(to_replace, value)
+ new_data = self._data.replace_list(to_replace, value,
+ inplace=inplace)
else: # [NA, ''] -> 0
new_data = self._data.replace(to_replace, value,
@@ -3489,13 +3516,13 @@ def _interpolate(self, to_replace, method, axis, inplace, limit):
return rs if not inplace else None
else:
- new_blocks = []
- for block in self._data.blocks:
- newb = block.interpolate(method, axis=axis,
- limit=limit, inplace=inplace,
- missing=to_replace)
- new_blocks.append(newb)
- new_data = BlockManager(new_blocks, self._data.axes)
+
+ new_data = self._data.interpolate(method = method,
+ axis = axis,
+ limit = limit,
+ inplace = inplace,
+ missing = to_replace,
+ coerce = False)
if inplace:
self._data = new_data
@@ -3668,22 +3695,15 @@ def _combine_match_columns(self, other, func, fill_value=None):
if fill_value is not None:
raise NotImplementedError
- return self._constructor(func(left.values, right.values),
- index=self.index,
- columns=left.columns, copy=False)
+ new_data = left._data.where(func, right, axes = [left.columns, self.index])
+ return self._constructor(new_data)
- def _combine_const(self, other, func):
+ def _combine_const(self, other, func, raise_on_error = True):
if self.empty:
return self
- result_values = func(self.values, other)
-
- if not isinstance(result_values, np.ndarray):
- raise TypeError('Could not compare %s with DataFrame values'
- % repr(other))
-
- return self._constructor(result_values, index=self.index,
- columns=self.columns, copy=False)
+ new_data = self._data.where(func, other, raise_on_error=raise_on_error)
+ return self._constructor(new_data)
def _compare_frame(self, other, func):
if not self._indexed_same(other):
@@ -4012,8 +4032,7 @@ def diff(self, periods=1):
-------
diffed : DataFrame
"""
- new_blocks = [b.diff(periods) for b in self._data.blocks]
- new_data = BlockManager(new_blocks, [self.columns, self.index])
+ new_data = self._data.diff(periods)
return self._constructor(new_data)
def shift(self, periods=1, freq=None, **kwds):
@@ -4047,21 +4066,9 @@ def shift(self, periods=1, freq=None, **kwds):
if isinstance(offset, basestring):
offset = datetools.to_offset(offset)
- def _shift_block(blk, indexer):
- new_values = blk.values.take(indexer, axis=1)
- # convert integer to float if necessary. need to do a lot more than
- # that, handle boolean etc also
- new_values = com._maybe_upcast(new_values)
- if periods > 0:
- new_values[:, :periods] = NA
- else:
- new_values[:, periods:] = NA
- return make_block(new_values, blk.items, blk.ref_items)
-
if offset is None:
indexer = com._shift_indexer(len(self), periods)
- new_blocks = [_shift_block(b, indexer) for b in self._data.blocks]
- new_data = BlockManager(new_blocks, [self.columns, self.index])
+ new_data = self._data.shift(indexer, periods)
elif isinstance(self.index, PeriodIndex):
orig_offset = datetools.to_offset(self.index.freq)
if offset == orig_offset:
@@ -5211,7 +5218,7 @@ def combineMult(self, other):
"""
return self.mul(other, fill_value=1.)
- def where(self, cond, other=NA, inplace=False):
+ def where(self, cond, other=NA, inplace=False, try_cast=False, raise_on_error=True):
"""
Return a DataFrame with the same shape as self and whose corresponding
entries are from self where cond is True and otherwise are from other.
@@ -5220,6 +5227,10 @@ def where(self, cond, other=NA, inplace=False):
----------
cond: boolean DataFrame or array
other: scalar or DataFrame
+ inplace: perform the operation in place on the data
+ try_cast: try to cast the result back to the input type (if possible), defaults to False
+ raise_on_error: should I raise on invalid data types (e.g. trying to where on strings),
+ defaults to True
Returns
-------
@@ -5231,7 +5242,7 @@ def where(self, cond, other=NA, inplace=False):
if isinstance(cond, np.ndarray):
if cond.shape != self.shape:
- raise ValueError('Array onditional must be same shape as self')
+ raise ValueError('Array conditional must be same shape as self')
cond = self._constructor(cond, index=self.index,
columns=self.columns)
@@ -5247,12 +5258,23 @@ def where(self, cond, other=NA, inplace=False):
if isinstance(other, DataFrame):
_, other = self.align(other, join='left', fill_value=NA)
+ elif isinstance(other,np.ndarray):
+
+ if other.shape[0] != len(self.index) or other.shape[1] != len(self.columns):
+ raise ValueError('other must be the same shape as self when an ndarray')
+ other = DataFrame(other,self.index,self.columns)
if inplace:
- np.putmask(self.values, cond, other)
+
+ # we may have different type blocks come out of putmask, so reconstruct the block manager
+ self._data = self._data.putmask(cond,other,inplace=True)
+
else:
- rs = np.where(cond, self, other)
- return self._constructor(rs, self.index, self.columns)
+
+ func = lambda values, others, conds: np.where(conds, values, others)
+ new_data = self._data.where(func, other, cond, raise_on_error=raise_on_error, try_cast=try_cast)
+
+ return self._constructor(new_data)
def mask(self, cond):
"""
@@ -5609,7 +5631,6 @@ def _homogenize(data, index, dtype=None):
return homogenized
-
def _from_nested_dict(data):
# TODO: this should be seriously cythonized
new_data = OrderedDict()
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 69bde62fdae20..0f78ddb3ca48f 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -486,19 +486,23 @@ def __init__(self, data, axes=None, copy=False, dtype=None):
object.__setattr__(self, '_data', data)
object.__setattr__(self, '_item_cache', {})
- def astype(self, dtype):
+ def astype(self, dtype, copy = True, raise_on_error = True):
"""
Cast object to input numpy.dtype
+ Return a copy when copy = True (be really careful with this!)
Parameters
----------
dtype : numpy.dtype or Python type
+ raise_on_error : raise on invalid input
Returns
-------
casted : type of caller
"""
- return self._constructor(self._data, dtype=dtype)
+
+ mgr = self._data.astype(dtype, copy = copy, raise_on_error = raise_on_error)
+ return self._constructor(mgr)
@property
def _constructor(self):
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index 705e1574efe06..7fedb8011122a 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -425,6 +425,36 @@ def picker(arr):
return np.nan
return self.agg(picker)
+ def _try_cast(self, result, obj):
+ """ try to cast the result to our obj original type,
+ we may have roundtripped thru object in the mean-time """
+ try:
+ if obj.ndim > 1:
+ dtype = obj.values.dtype
+ else:
+ dtype = obj.dtype
+
+ if _is_numeric_dtype(dtype):
+
+ # need to respect a non-number here (e.g. Decimal)
+ if len(result) and issubclass(type(result[0]),(np.number,float,int)):
+ if issubclass(dtype.type, (np.integer, np.bool_)):
+
+ # castable back to an int/bool as we don't have nans
+ if com.notnull(result).all():
+ result = result.astype(dtype)
+ else:
+
+ result = result.astype(dtype)
+
+ elif issubclass(dtype.type, np.datetime64):
+ if is_datetime64_dtype(obj.dtype):
+ result = result.astype(obj.dtype)
+ except:
+ pass
+
+ return result
+
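The cast-back rule in `_try_cast` above, for the integer/bool case, only returns to the original dtype when the conversion is lossless: no NaNs appeared and the values survive the round trip. A pure-numpy sketch of that rule (`restore_int_dtype` is a hypothetical name used here for illustration, not part of the patch):

```python
import numpy as np

def restore_int_dtype(result, dtype=np.int64):
    """Cast a float aggregation result back to an integer dtype,
    but only when it is lossless: no NaNs and values round-trip."""
    if np.isnan(result).any():
        return result                 # NaNs force us to stay float
    cast = result.astype(dtype)
    if (cast == result).all():        # reject lossy casts, e.g. 1.5 -> 1
        return cast
    return result

clean = restore_int_dtype(np.array([1.0, 2.0, 3.0]))
dirty = restore_int_dtype(np.array([1.0, np.nan]))
# clean comes back as int64; dirty stays float64
```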
def _cython_agg_general(self, how, numeric_only=True):
output = {}
for name, obj in self._iterate_slices():
@@ -449,7 +479,7 @@ def _python_agg_general(self, func, *args, **kwargs):
for name, obj in self._iterate_slices():
try:
result, counts = self.grouper.agg_series(obj, f)
- output[name] = result
+ output[name] = self._try_cast(result, obj)
except TypeError:
continue
@@ -457,9 +487,16 @@ def _python_agg_general(self, func, *args, **kwargs):
return self._python_apply_general(f)
if self.grouper._filter_empty_groups:
+
mask = counts.ravel() > 0
for name, result in output.iteritems():
- output[name] = result[mask]
+
+ # since we are masking, make sure that we have a float object
+ values = result
+ if _is_numeric_dtype(values.dtype):
+ values = com.ensure_float(values)
+
+ output[name] = self._try_cast(values[mask],result)
return self._wrap_aggregated_output(output)
@@ -708,21 +745,16 @@ def get_group_levels(self):
# Aggregation functions
_cython_functions = {
- 'add': _algos.group_add,
- 'prod': _algos.group_prod,
- 'min': _algos.group_min,
- 'max': _algos.group_max,
- 'mean': _algos.group_mean,
- 'median': _algos.group_median,
- 'var': _algos.group_var,
- 'std': _algos.group_var,
- 'first': lambda a, b, c, d: _algos.group_nth(a, b, c, d, 1),
- 'last': _algos.group_last
- }
-
- _cython_object_functions = {
- 'first': lambda a, b, c, d: _algos.group_nth_object(a, b, c, d, 1),
- 'last': _algos.group_last_object
+ 'add' : 'group_add',
+ 'prod' : 'group_prod',
+ 'min' : 'group_min',
+ 'max' : 'group_max',
+ 'mean' : 'group_mean',
+ 'median': dict(name = 'group_median'),
+ 'var' : 'group_var',
+ 'std' : 'group_var',
+ 'first': dict(name = 'group_nth', f = lambda func, a, b, c, d: func(a, b, c, d, 1)),
+ 'last' : 'group_last',
}
_cython_transforms = {
@@ -737,6 +769,40 @@ def get_group_levels(self):
_filter_empty_groups = True
+ def _get_aggregate_function(self, how, values):
+
+ dtype_str = values.dtype.name
+ def get_func(fname):
+
+ # find the function, or use the object function, or return a generic
+ for dt in [dtype_str,'object']:
+            f = getattr(_algos,"%s_%s" % (fname,dt),None)
+ if f is not None:
+ return f
+ return getattr(_algos,fname,None)
+
+ ftype = self._cython_functions[how]
+
+ if isinstance(ftype,dict):
+ func = afunc = get_func(ftype['name'])
+
+ # a sub-function
+ f = ftype.get('f')
+ if f is not None:
+
+ def wrapper(*args, **kwargs):
+ return f(afunc, *args, **kwargs)
+
+ # need to curry our sub-function
+ func = wrapper
+
+ else:
+ func = get_func(ftype)
+
+ if func is None:
+ raise NotImplementedError("function is not implemented for this dtype: [how->%s,dtype->%s]" % (how,dtype_str))
+ return func, dtype_str
+
def aggregate(self, values, how, axis=0):
arity = self._cython_arity.get(how, 1)
@@ -796,12 +862,8 @@ def aggregate(self, values, how, axis=0):
return result, names
def _aggregate(self, result, counts, values, how, is_numeric):
- if not is_numeric:
- agg_func = self._cython_object_functions[how]
- else:
- agg_func = self._cython_functions[how]
-
- trans_func = self._cython_transforms.get(how, lambda x: x)
+ agg_func,dtype = self._get_aggregate_function(how, values)
+ trans_func = self._cython_transforms.get(how, lambda x: x)
comp_ids, _, ngroups = self.group_info
if values.ndim > 3:
@@ -809,8 +871,9 @@ def _aggregate(self, result, counts, values, how, is_numeric):
raise NotImplementedError
elif values.ndim > 2:
for i, chunk in enumerate(values.transpose(2, 0, 1)):
- agg_func(result[:, :, i], counts, chunk.squeeze(),
- comp_ids)
+
+ chunk = chunk.squeeze()
+ agg_func(result[:, :, i], counts, chunk, comp_ids)
else:
agg_func(result, counts, values, comp_ids)
@@ -1000,21 +1063,16 @@ def names(self):
# cython aggregation
_cython_functions = {
- 'add': _algos.group_add_bin,
- 'prod': _algos.group_prod_bin,
- 'mean': _algos.group_mean_bin,
- 'min': _algos.group_min_bin,
- 'max': _algos.group_max_bin,
- 'var': _algos.group_var_bin,
- 'std': _algos.group_var_bin,
- 'ohlc': _algos.group_ohlc,
- 'first': lambda a, b, c, d: _algos.group_nth_bin(a, b, c, d, 1),
- 'last': _algos.group_last_bin
- }
-
- _cython_object_functions = {
- 'first': lambda a, b, c, d: _algos.group_nth_bin_object(a, b, c, d, 1),
- 'last': _algos.group_last_bin_object
+ 'add' : 'group_add_bin',
+ 'prod' : 'group_prod_bin',
+ 'mean' : 'group_mean_bin',
+ 'min' : 'group_min_bin',
+ 'max' : 'group_max_bin',
+ 'var' : 'group_var_bin',
+ 'std' : 'group_var_bin',
+ 'ohlc' : 'group_ohlc',
+ 'first': dict(name = 'group_nth_bin', f = lambda func, a, b, c, d: func(a, b, c, d, 1)),
+ 'last' : 'group_last_bin',
}
_name_functions = {
@@ -1024,11 +1082,9 @@ def names(self):
_filter_empty_groups = True
def _aggregate(self, result, counts, values, how, is_numeric=True):
- fdict = self._cython_functions
- if not is_numeric:
- fdict = self._cython_object_functions
- agg_func = fdict[how]
- trans_func = self._cython_transforms.get(how, lambda x: x)
+
+ agg_func,dtype = self._get_aggregate_function(how, values)
+ trans_func = self._cython_transforms.get(how, lambda x: x)
if values.ndim > 3:
# punting for now
@@ -1439,7 +1495,7 @@ def _aggregate_named(self, func, *args, **kwargs):
output = func(group, *args, **kwargs)
if isinstance(output, np.ndarray):
raise Exception('Must produce aggregated value')
- result[name] = output
+ result[name] = self._try_cast(output, group)
return result
@@ -1676,14 +1732,14 @@ def _aggregate_generic(self, func, *args, **kwargs):
for name, data in self:
# for name in self.indices:
# data = self.get_group(name, obj=obj)
- result[name] = func(data, *args, **kwargs)
+ result[name] = self._try_cast(func(data, *args, **kwargs),data)
except Exception:
return self._aggregate_item_by_item(func, *args, **kwargs)
else:
for name in self.indices:
try:
data = self.get_group(name, obj=obj)
- result[name] = func(data, *args, **kwargs)
+ result[name] = self._try_cast(func(data, *args, **kwargs), data)
except Exception:
wrapper = lambda x: func(x, *args, **kwargs)
result[name] = data.apply(wrapper, axis=axis)
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index e3031b58ff286..58d193a956491 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -20,6 +20,10 @@ class Block(object):
Index-ignorant; let the container take care of that
"""
__slots__ = ['items', 'ref_items', '_ref_locs', 'values', 'ndim']
+ is_numeric = False
+ is_bool = False
+ is_object = False
+ _can_hold_na = False
def __init__(self, values, items, ref_items, ndim=2):
if issubclass(values.dtype.type, basestring):
@@ -93,6 +97,10 @@ def __setstate__(self, state):
def shape(self):
return self.values.shape
+ @property
+ def itemsize(self):
+ return self.values.itemsize
+
@property
def dtype(self):
return self.values.dtype
@@ -206,8 +214,13 @@ def split_block_at(self, item):
self.ref_items)
def fillna(self, value, inplace=False):
- new_values = self.values if inplace else self.values.copy()
+ if not self._can_hold_na:
+ if inplace:
+ return self
+ else:
+ return self.copy()
+ new_values = self.values if inplace else self.values.copy()
mask = com.isnull(new_values)
np.putmask(new_values, mask, value)
@@ -216,12 +229,43 @@ def fillna(self, value, inplace=False):
else:
return make_block(new_values, self.items, self.ref_items)
+ def astype(self, dtype, copy = True, raise_on_error = True):
+ """ coerce to the new type (if copy=True, return a new copy) raise on an except if raise == True """
+ try:
+ newb = make_block(com._astype_nansafe(self.values, dtype, copy = copy),
+ self.items, self.ref_items)
+ except:
+ if raise_on_error is True:
+ raise
+ newb = self.copy() if copy else self
+
+ if newb.is_numeric and self.is_numeric:
+ if newb.shape != self.shape or (not copy and newb.itemsize < self.itemsize):
+ raise TypeError("cannot set astype for copy = [%s] for dtype (%s [%s]) with smaller itemsize that current (%s [%s])" % (copy,
+ self.dtype.name,
+ self.itemsize,
+ newb.dtype.name,
+ newb.itemsize))
+ return newb
+
+ def convert(self, copy = True, **kwargs):
+ """ attempt to coerce any object types to better types
+ return a copy of the block (if copy = True)
+ by definition we are not an ObjectBlock here! """
+
+ return self.copy() if copy else self
+
def _can_hold_element(self, value):
raise NotImplementedError()
def _try_cast(self, value):
raise NotImplementedError()
+ def _try_cast_result(self, result):
+ """ try to cast the result to our original type,
+ we may have roundtripped thru object in the mean-time """
+ return result
+
def replace(self, to_replace, value, inplace=False):
new_values = self.values if inplace else self.values.copy()
if self._can_hold_element(value):
@@ -251,17 +295,58 @@ def replace(self, to_replace, value, inplace=False):
return make_block(new_values, self.items, self.ref_items)
def putmask(self, mask, new, inplace=False):
+ """ putmask the data to the block; it is possible that we may create a new dtype of block
+ return the resulting block(s) """
+
new_values = self.values if inplace else self.values.copy()
+
+ # may need to align the new
+ if hasattr(new,'reindex_axis'):
+ axis = getattr(new,'_het_axis',0)
+ new = new.reindex_axis(self.items, axis=axis, copy=False).values.T
+
+ # may need to align the mask
+ if hasattr(mask,'reindex_axis'):
+ axis = getattr(mask,'_het_axis',0)
+ mask = mask.reindex_axis(self.items, axis=axis, copy=False).values.T
+
if self._can_hold_element(new):
new = self._try_cast(new)
np.putmask(new_values, mask, new)
- if inplace:
- return self
+
+ # upcast me
else:
- return make_block(new_values, self.items, self.ref_items)
+
+ # type of the new block
+            if (isinstance(new,np.ndarray) and issubclass(new.dtype.type,np.number)) or issubclass(type(new),float):
+ typ = float
+ else:
+ typ = object
+
+            # we need to explicitly astype here to make a copy
+ new_values = new_values.astype(typ)
+
+ # we create a new block type
+ np.putmask(new_values, mask, new)
+ return [ make_block(new_values, self.items, self.ref_items) ]
+
+ if inplace:
+ return [ self ]
+
+ return [ make_block(new_values, self.items, self.ref_items) ]
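The upcast branch above exists because `np.putmask` writes into the array in place and cannot widen its dtype, so an incompatible value would be silently truncated. A plain-numpy sketch of why the explicit `astype` is needed (this is an illustration, not the block code itself):

```python
import numpy as np

vals = np.array([1, 2, 3], dtype="int64")
mask = np.array([True, False, True])

# putmask into the int array truncates: an IntBlock cannot hold 0.5
truncated = vals.copy()
np.putmask(truncated, mask, 0.5)
assert truncated.tolist() == [0, 2, 0]

# upcasting to float first preserves the new value
upcast = vals.astype("float64")
np.putmask(upcast, mask, 0.5)
assert upcast.tolist() == [0.5, 2.0, 0.5]
```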
def interpolate(self, method='pad', axis=0, inplace=False,
- limit=None, missing=None):
+ limit=None, missing=None, coerce=False):
+
+ # if we are coercing, then don't force the conversion
+ # if the block can't hold the type
+ if coerce:
+ if not self._can_hold_na:
+ if inplace:
+ return self
+ else:
+ return self.copy()
+
values = self.values if inplace else self.values.copy()
if values.ndim != 2:
@@ -293,9 +378,96 @@ def get_values(self, dtype):
return self.values
def diff(self, n):
+ """ return block for the diff of the values """
new_values = com.diff(self.values, n, axis=1)
return make_block(new_values, self.items, self.ref_items)
+ def shift(self, indexer, periods):
+ """ shift the block by periods, possibly upcast """
+
+ new_values = self.values.take(indexer, axis=1)
+ # convert integer to float if necessary. need to do a lot more than
+ # that, handle boolean etc also
+ new_values = com._maybe_upcast(new_values)
+ if periods > 0:
+ new_values[:, :periods] = np.nan
+ else:
+ new_values[:, periods:] = np.nan
+ return make_block(new_values, self.items, self.ref_items)
+
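The take-then-fill pattern used by `shift` above can be shown standalone; `shift_values` is a hypothetical 1-d helper mirroring the block logic, with the `com._shift_indexer` call approximated by `np.roll`:

```python
import numpy as np

def shift_values(values, periods):
    """Shift 1-d values by `periods`, upcasting so vacated slots can hold NaN."""
    indexer = np.roll(np.arange(len(values)), periods)
    out = values.take(indexer).astype("float64")  # int -> float upcast
    if periods > 0:
        out[:periods] = np.nan                    # slots vacated at the front
    else:
        out[periods:] = np.nan                    # slots vacated at the back
    return out

shifted = shift_values(np.array([1, 2, 3, 4]), 1)
# shifted is [nan, 1.0, 2.0, 3.0]
```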
+ def where(self, func, other, cond = None, raise_on_error = True, try_cast = False):
+ """
+ evaluate the block; return result block(s) from the result
+
+ Parameters
+ ----------
+ func : how to combine self,other
+ other : a ndarray/object
+ cond : the condition to respect, optional
+        raise_on_error : if True (the default), raise when the function cannot be performed;
+            otherwise just return the data that we had coming in
+
+ Returns
+ -------
+ a new block, the result of the func
+ """
+
+ values = self.values
+
+ # see if we can align other
+ if hasattr(other,'reindex_axis'):
+ axis = getattr(other,'_het_axis',0)
+ other = other.reindex_axis(self.items, axis=axis, copy=True).values
+
+ # make sure that we can broadcast
+ is_transposed = False
+ if hasattr(other, 'ndim') and hasattr(values, 'ndim'):
+ if values.ndim != other.ndim or values.shape == other.shape[::-1]:
+ values = values.T
+ is_transposed = True
+
+ # see if we can align cond
+ if cond is not None:
+ if not hasattr(cond,'shape'):
+ raise ValueError("where must have a condition that is ndarray like")
+ if hasattr(cond,'reindex_axis'):
+ axis = getattr(cond,'_het_axis',0)
+ cond = cond.reindex_axis(self.items, axis=axis, copy=True).values
+ else:
+ cond = cond.values
+
+ # may need to undo transpose of values
+ if hasattr(values, 'ndim'):
+ if values.ndim != cond.ndim or values.shape == cond.shape[::-1]:
+ values = values.T
+ is_transposed = not is_transposed
+
+ args = [ values, other ]
+ if cond is not None:
+ args.append(cond)
+ try:
+ result = func(*args)
+ except:
+ if raise_on_error:
+                raise TypeError('Could not operate %s with block values'
+ % repr(other))
+ else:
+ # return the values
+ result = np.empty(values.shape,dtype='O')
+ result.fill(np.nan)
+
+ if not isinstance(result, np.ndarray):
+ raise TypeError('Could not compare %s with block values'
+ % repr(other))
+
+ if is_transposed:
+ result = result.T
+
+ # try to cast if requested
+ if try_cast:
+ result = self._try_cast_result(result)
+
+ return [ make_block(result, self.items, self.ref_items) ]
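For the `DataFrame.where` case, the `func(values, other, cond)` passed into the block-level `where` above boils down to `np.where(cond, values, other)`. A minimal sketch of that semantics:

```python
import numpy as np

values = np.array([[1.0, 2.0], [3.0, 4.0]])
cond = values > 2          # keep entries where the condition holds
other = np.nan             # replacement for masked-out entries

result = np.where(cond, values, other)
assert result[1, 1] == 4.0        # cond True: original value kept
assert np.isnan(result[0, 0])     # cond False: replaced with other
```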
def _mask_missing(array, missing_values):
if not isinstance(missing_values, (list, np.ndarray)):
@@ -314,11 +486,15 @@ def _mask_missing(array, missing_values):
mask |= array == missing_values
return mask
-
-class FloatBlock(Block):
+class NumericBlock(Block):
+ is_numeric = True
_can_hold_na = True
+class FloatBlock(NumericBlock):
+
def _can_hold_element(self, element):
+ if isinstance(element, np.ndarray):
+ return issubclass(element.dtype.type, (np.floating,np.integer))
return isinstance(element, (float, int))
def _try_cast(self, element):
@@ -330,11 +506,10 @@ def _try_cast(self, element):
def should_store(self, value):
# when inserting a column should not coerce integers to floats
# unnecessarily
- return issubclass(value.dtype.type, np.floating)
+ return issubclass(value.dtype.type, np.floating) and value.dtype == self.dtype
-class ComplexBlock(Block):
- _can_hold_na = True
+class ComplexBlock(NumericBlock):
def _can_hold_element(self, element):
return isinstance(element, complex)
@@ -349,10 +524,12 @@ def should_store(self, value):
return issubclass(value.dtype.type, np.complexfloating)
-class IntBlock(Block):
+class IntBlock(NumericBlock):
_can_hold_na = False
def _can_hold_element(self, element):
+ if isinstance(element, np.ndarray):
+ return issubclass(element.dtype.type, np.integer)
return com.is_integer(element)
def _try_cast(self, element):
@@ -361,11 +538,25 @@ def _try_cast(self, element):
except: # pragma: no cover
return element
+ def _try_cast_result(self, result):
+ # this is quite restrictive to convert
+ try:
+ if isinstance(result, np.ndarray) and issubclass(result.dtype.type, np.floating):
+ if com.notnull(result).all():
+ new_result = result.astype(self.dtype)
+ if (new_result == result).all():
+ return new_result
+ except:
+ pass
+
+ return result
+
def should_store(self, value):
- return com.is_integer_dtype(value)
+ return com.is_integer_dtype(value) and value.dtype == self.dtype
class BoolBlock(Block):
+ is_bool = True
_can_hold_na = False
def _can_hold_element(self, element):
@@ -382,8 +573,35 @@ def should_store(self, value):
class ObjectBlock(Block):
+ is_object = True
_can_hold_na = True
+ @property
+ def is_bool(self):
+ """ we can be a bool if we have only bool values but are of type object """
+ return lib.is_bool_array(self.values.flatten())
+
+ def convert(self, convert_dates = True, convert_numeric = True, copy = True):
+ """ attempt to coerce any object types to better types
+ return a copy of the block (if copy = True)
+ by definition we ARE an ObjectBlock!!!!!
+
+ can return multiple blocks!
+ """
+
+ # attempt to create new type blocks
+ blocks = []
+ for i, c in enumerate(self.items):
+ values = self.get(c)
+
+ values = com._possibly_convert_objects(values, convert_dates=convert_dates, convert_numeric=convert_numeric)
+ values = values.reshape(((1,) + values.shape))
+ items = self.items.take([i])
+ newb = make_block(values, items, self.ref_items)
+ blocks.append(newb)
+
+ return blocks
+
def _can_hold_element(self, element):
return True
@@ -457,8 +675,6 @@ def make_block(values, items, ref_items):
elif issubclass(vtype, np.datetime64):
klass = DatetimeBlock
elif issubclass(vtype, np.integer):
- if vtype != np.int64:
- values = values.astype('i8')
klass = IntBlock
elif dtype == np.bool_:
klass = BoolBlock
@@ -611,15 +827,70 @@ def _verify_integrity(self):
raise AssertionError('Number of manager items must equal union of '
'block items')
- def astype(self, dtype):
- new_blocks = []
- for block in self.blocks:
- newb = make_block(com._astype_nansafe(block.values, dtype),
- block.items, block.ref_items)
- new_blocks.append(newb)
+ def apply(self, f, *args, **kwargs):
+ """ iterate over the blocks, collect and create a new block manager """
+ axes = kwargs.pop('axes',None)
+ result_blocks = []
+ for blk in self.blocks:
+ if callable(f):
+ applied = f(blk, *args, **kwargs)
+ else:
+ applied = getattr(blk,f)(*args, **kwargs)
+
+ if isinstance(applied,list):
+ result_blocks.extend(applied)
+ else:
+ result_blocks.append(applied)
+ bm = self.__class__(result_blocks, axes or self.axes)
+ bm._consolidate_inplace()
+ return bm
+
+ def where(self, *args, **kwargs):
+ return self.apply('where', *args, **kwargs)
+
+ def putmask(self, *args, **kwargs):
+ return self.apply('putmask', *args, **kwargs)
+
+ def diff(self, *args, **kwargs):
+ return self.apply('diff', *args, **kwargs)
+
+ def interpolate(self, *args, **kwargs):
+ return self.apply('interpolate', *args, **kwargs)
+
+ def shift(self, *args, **kwargs):
+ return self.apply('shift', *args, **kwargs)
+
+ def fillna(self, *args, **kwargs):
+ return self.apply('fillna', *args, **kwargs)
+
+ def astype(self, *args, **kwargs):
+ return self.apply('astype', *args, **kwargs)
+
+ def convert(self, *args, **kwargs):
+ return self.apply('convert', *args, **kwargs)
+
+ def replace(self, *args, **kwargs):
+ return self.apply('replace', *args, **kwargs)
- new_mgr = BlockManager(new_blocks, self.axes)
- return new_mgr.consolidate()
+ def replace_list(self, src_lst, dest_lst, inplace=False):
+ """ do a list replace """
+ if not inplace:
+ self = self.copy()
+
+ sset = set(src_lst)
+ if any([k in sset for k in dest_lst]):
+ masks = {}
+ for s in src_lst:
+ masks[s] = [b.values == s for b in self.blocks]
+
+ for s, d in zip(src_lst, dest_lst):
+ [b.putmask(masks[s][i], d, inplace=True) for i, b in
+ enumerate(self.blocks)]
+ else:
+ for s, d in zip(src_lst, dest_lst):
+ self.replace(s, d, inplace=True)
+
+ return self
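The new `apply` method centralizes per-block dispatch that the removed `astype`/`fillna`/`replace` bodies previously each re-implemented. A minimal standalone sketch of the same pattern; `Block` and `Manager` here are hypothetical stand-ins, not the pandas classes:

```python
# Sketch of the BlockManager.apply dispatch pattern: walk homogeneous
# blocks, call either a passed callable or a named block method, and
# flatten list results (an operation may split one block into several).
class Block(object):
    def __init__(self, values):
        self.values = values

    def double(self):
        return Block([v * 2 for v in self.values])


class Manager(object):
    def __init__(self, blocks):
        self.blocks = blocks

    def apply(self, f, *args, **kwargs):
        result_blocks = []
        for blk in self.blocks:
            if callable(f):
                applied = f(blk, *args, **kwargs)
            else:
                applied = getattr(blk, f)(*args, **kwargs)
            if isinstance(applied, list):
                result_blocks.extend(applied)
            else:
                result_blocks.append(applied)
        return Manager(result_blocks)

    # thin delegate, mirroring where/putmask/fillna/astype in the diff
    def double(self, *args, **kwargs):
        return self.apply('double', *args, **kwargs)


m = Manager([Block([1, 2]), Block([3])])
doubled = m.double()
```

Each public method then reduces to a one-line delegate, so adding a new block-wise operation only requires implementing it on the block classes.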
def is_consolidated(self):
"""
@@ -634,7 +905,7 @@ def _consolidate_check(self):
self._is_consolidated = len(dtypes) == len(set(dtypes))
self._known_consolidated = True
- def get_numeric_data(self, copy=False, type_list=None):
+ def get_numeric_data(self, copy=False, type_list=None, as_blocks=False):
"""
Parameters
----------
@@ -644,15 +915,15 @@ def get_numeric_data(self, copy=False, type_list=None):
Numeric types by default (Float/Complex/Int but not Datetime)
"""
if type_list is None:
- def filter_blocks(block):
- return (isinstance(block, (IntBlock, FloatBlock, ComplexBlock))
- and not isinstance(block, DatetimeBlock))
+ filter_blocks = lambda block: block.is_numeric
else:
type_list = self._get_clean_block_types(type_list)
filter_blocks = lambda block: isinstance(block, type_list)
maybe_copy = lambda b: b.copy() if copy else b
num_blocks = [maybe_copy(b) for b in self.blocks if filter_blocks(b)]
+ if as_blocks:
+ return num_blocks
if len(num_blocks) == 0:
return BlockManager.make_empty()
@@ -686,8 +957,8 @@ def _get_clean_block_types(self, type_list):
type_list = tuple([type_map.get(t, t) for t in type_list])
return type_list
- def get_bool_data(self, copy=False):
- return self.get_numeric_data(copy=copy, type_list=(BoolBlock,))
+ def get_bool_data(self, copy=False, as_blocks=False):
+ return self.get_numeric_data(copy=copy, type_list=(BoolBlock,), as_blocks=as_blocks)
def get_slice(self, slobj, axis=0):
new_axes = list(self.axes)
@@ -1255,37 +1526,6 @@ def add_suffix(self, suffix):
f = ('%s' + ('%s' % suffix)).__mod__
return self.rename_items(f)
- def fillna(self, value, inplace=False):
- new_blocks = [b.fillna(value, inplace=inplace)
- if b._can_hold_na else b
- for b in self.blocks]
- if inplace:
- return self
- return BlockManager(new_blocks, self.axes)
-
- def replace(self, to_replace, value, inplace=False):
- new_blocks = [b.replace(to_replace, value, inplace=inplace)
- for b in self.blocks]
- if inplace:
- return self
- return BlockManager(new_blocks, self.axes)
-
- def _replace_list(self, src_lst, dest_lst):
- sset = set(src_lst)
- if any([k in sset for k in dest_lst]):
- masks = {}
- for s in src_lst:
- masks[s] = [b.values == s for b in self.blocks]
-
- for s, d in zip(src_lst, dest_lst):
- [b.putmask(masks[s][i], d, inplace=True) for i, b in
- enumerate(self.blocks)]
- else:
- for s, d in zip(src_lst, dest_lst):
- self.replace(s, d, inplace=True)
-
- return self
-
@property
def block_id_vector(self):
# TODO
@@ -1359,28 +1599,28 @@ def form_blocks(arrays, names, axes):
blocks = []
if len(float_items):
- float_block = _simple_blockify(float_items, items, np.float64)
- blocks.append(float_block)
+ float_blocks = _multi_blockify(float_items, items)
+ blocks.extend(float_blocks)
if len(complex_items):
- complex_block = _simple_blockify(complex_items, items, np.complex128)
- blocks.append(complex_block)
+ complex_blocks = _simple_blockify(complex_items, items, np.complex128)
+ blocks.extend(complex_blocks)
if len(int_items):
- int_block = _simple_blockify(int_items, items, np.int64)
- blocks.append(int_block)
+ int_blocks = _multi_blockify(int_items, items)
+ blocks.extend(int_blocks)
if len(datetime_items):
- datetime_block = _simple_blockify(datetime_items, items, _NS_DTYPE)
- blocks.append(datetime_block)
+ datetime_blocks = _simple_blockify(datetime_items, items, _NS_DTYPE)
+ blocks.extend(datetime_blocks)
if len(bool_items):
- bool_block = _simple_blockify(bool_items, items, np.bool_)
- blocks.append(bool_block)
+ bool_blocks = _simple_blockify(bool_items, items, np.bool_)
+ blocks.extend(bool_blocks)
if len(object_items) > 0:
- object_block = _simple_blockify(object_items, items, np.object_)
- blocks.append(object_block)
+ object_blocks = _simple_blockify(object_items, items, np.object_)
+ blocks.extend(object_blocks)
if len(extra_items):
shape = (len(extra_items),) + tuple(len(x) for x in axes[1:])
@@ -1398,14 +1638,31 @@ def form_blocks(arrays, names, axes):
def _simple_blockify(tuples, ref_items, dtype):
+ """ return a single array of a block that has a single dtype; if dtype is not None, coerce to this dtype """
block_items, values = _stack_arrays(tuples, ref_items, dtype)
+
# CHECK DTYPE?
- if values.dtype != dtype: # pragma: no cover
+ if dtype is not None and values.dtype != dtype: # pragma: no cover
values = values.astype(dtype)
- return make_block(values, block_items, ref_items)
+ return [make_block(values, block_items, ref_items)]
+def _multi_blockify(tuples, ref_items, dtype=None):
+ """ return a list of blocks that potentially have different dtypes """
+
+ # group by dtype
+ grouper = itertools.groupby(tuples, lambda x: x[1].dtype)
+
+ new_blocks = []
+ for dtype, tup_block in grouper:
+
+ block_items, values = _stack_arrays(list(tup_block), ref_items, dtype)
+ block = make_block(values, block_items, ref_items)
+ new_blocks.append(block)
+
+ return new_blocks
+
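`_multi_blockify` groups the arrays with `itertools.groupby`, which only merges *adjacent* equal keys. A small sketch (illustrative data, not the pandas internals) showing that unsorted input produces one group per dtype *run* rather than per dtype:

```python
# itertools.groupby groups consecutive equal keys only; grouping by
# dtype therefore depends on the input ordering.
import itertools
import numpy as np

tuples = [
    ('a', np.array([1.0], dtype='float64')),
    ('b', np.array([2.0], dtype='float32')),
    ('c', np.array([3.0], dtype='float64')),
]

def group_by_dtype(tups):
    return [(dt, [name for name, _ in grp])
            for dt, grp in itertools.groupby(tups, lambda x: x[1].dtype)]

unsorted_groups = group_by_dtype(tuples)   # float64 split into two runs
sorted_groups = group_by_dtype(
    sorted(tuples, key=lambda x: str(x[1].dtype)))  # one group per dtype
```

With unsorted input the float64 items land in two separate blocks, which the later `_consolidate_inplace` pass would then merge.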
def _stack_arrays(tuples, ref_items, dtype):
from pandas.core.series import Series
@@ -1451,17 +1708,27 @@ def _blocks_to_series_dict(blocks, index=None):
def _interleaved_dtype(blocks):
+ if not len(blocks): return None
+
from collections import defaultdict
- counts = defaultdict(lambda: 0)
+ counts = defaultdict(lambda: [])
for x in blocks:
- counts[type(x)] += 1
-
- have_int = counts[IntBlock] > 0
- have_bool = counts[BoolBlock] > 0
- have_object = counts[ObjectBlock] > 0
- have_float = counts[FloatBlock] > 0
- have_complex = counts[ComplexBlock] > 0
- have_dt64 = counts[DatetimeBlock] > 0
+ counts[type(x)].append(x)
+
+ def _lcd_dtype(l):
+ """ find the lowest dtype that can accomodate the given types """
+ m = l[0].dtype
+ for x in l[1:]:
+ if x.dtype.itemsize > m.itemsize:
+ m = x.dtype
+ return m
+
+ have_int = len(counts[IntBlock]) > 0
+ have_bool = len(counts[BoolBlock]) > 0
+ have_object = len(counts[ObjectBlock]) > 0
+ have_float = len(counts[FloatBlock]) > 0
+ have_complex = len(counts[ComplexBlock]) > 0
+ have_dt64 = len(counts[DatetimeBlock]) > 0
have_numeric = have_float or have_complex or have_int
if (have_object or
@@ -1471,13 +1738,13 @@ def _interleaved_dtype(blocks):
elif have_bool:
return np.dtype(bool)
elif have_int and not have_float and not have_complex:
- return np.dtype('i8')
+ return _lcd_dtype(counts[IntBlock])
elif have_dt64 and not have_float and not have_complex:
return np.dtype('M8[ns]')
elif have_complex:
return np.dtype('c16')
else:
- return np.dtype('f8')
+ return _lcd_dtype(counts[FloatBlock])
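The `_lcd_dtype` rule above (among same-kind blocks, keep the widest dtype by itemsize) can be exercised standalone; a minimal sketch of that comparison:

```python
# Mirror of the _lcd_dtype rule: pick the dtype with the largest
# itemsize, i.e. the narrowest type that still holds every block.
import numpy as np

def lcd_dtype(dtypes):
    m = dtypes[0]
    for d in dtypes[1:]:
        if d.itemsize > m.itemsize:
            m = d
    return m

print(lcd_dtype([np.dtype('int32'), np.dtype('int64')]))  # int64
```

This is what lets an all-int32 frame interleave to int32 instead of being forced up to int64 as before.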
def _consolidate(blocks, items):
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 06281e288021a..76c91ad726868 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -779,18 +779,23 @@ def astype(self, dtype):
casted = com._astype_nansafe(self.values, dtype)
return self._constructor(casted, index=self.index, name=self.name)
- def convert_objects(self, convert_dates=True):
+ def convert_objects(self, convert_dates=True, convert_numeric=True):
"""
Attempt to infer better dtype
+ Always returns a copy
+
+ Parameters
+ ----------
+ convert_dates : if True, attempt a soft conversion to datetimes; if 'coerce', force conversion (non-convertibles become NaT)
+ convert_numeric : if True, attempt to coerce to numbers (including strings); non-convertibles become NaN
Returns
-------
converted : Series
"""
if self.dtype == np.object_:
- return Series(lib.maybe_convert_objects(
- self, convert_datetime=convert_dates), self.index)
- return self
+ return Series(com._possibly_convert_objects(self.values, convert_dates=convert_dates, convert_numeric=convert_numeric), index=self.index, name=self.name)
+ return self.copy()
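The soft-vs-'coerce' contract documented in the `convert_objects` signature can be illustrated with a tiny hypothetical helper (not the pandas implementation):

```python
# Hypothetical helper illustrating "soft" vs "coerce" conversion:
# soft leaves non-convertibles alone; coerce replaces them with NaN.
import math

def to_number(x, coerce=False):
    try:
        return float(x)
    except (TypeError, ValueError):
        return float('nan') if coerce else x
```

The datetime path follows the same shape, with NaT playing the role of NaN.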
def repeat(self, reps):
"""
@@ -1027,7 +1032,8 @@ def _tidy_repr(self, max_vals=20):
def _repr_footer(self):
namestr = u"Name: %s, " % com.pprint_thing(
self.name) if self.name is not None else ""
- return u'%sLength: %d' % (namestr, len(self))
+ return u'%sLength: %d, Dtype: %s' % (namestr, len(self),
+ com.pprint_thing(self.dtype.name))
def to_string(self, buf=None, na_rep='NaN', float_format=None,
nanRep=None, length=False, name=False):
@@ -2402,6 +2408,12 @@ def reindex(self, index=None, method=None, level=None, fill_value=np.nan,
new_values = com.take_1d(self.values, indexer, fill_value=fill_value)
return Series(new_values, index=new_index, name=self.name)
+ def reindex_axis(self, labels, axis=0, **kwargs):
+ """ for compatibility with higher dims """
+ if axis != 0:
+ raise ValueError("cannot reindex series on non-zero axis!")
+ return self.reindex(index=labels, **kwargs)
+
def reindex_like(self, other, method=None, limit=None):
"""
Reindex Series to match index of another Series, optionally with
diff --git a/pandas/io/tests/test_excel.py b/pandas/io/tests/test_excel.py
index 6801b197fa849..556bcdb93477f 100644
--- a/pandas/io/tests/test_excel.py
+++ b/pandas/io/tests/test_excel.py
@@ -362,11 +362,11 @@ def _check_extension_int64(self, ext):
self.frame.to_excel(path, 'test1', index=False)
# Test np.int64, values read come back as float
- frame = DataFrame(np.random.randint(-10, 10, size=(10, 2)))
+ frame = DataFrame(np.random.randint(-10, 10, size=(10, 2)), dtype=np.int64)
frame.to_excel(path, 'test1')
reader = ExcelFile(path)
recons = reader.parse('test1').astype(np.int64)
- tm.assert_frame_equal(frame, recons)
+ tm.assert_frame_equal(frame, recons, check_dtype=False)
os.remove(path)
diff --git a/pandas/src/generate_code.py b/pandas/src/generate_code.py
index c7897e7def4d3..9cc749d23a3a9 100644
--- a/pandas/src/generate_code.py
+++ b/pandas/src/generate_code.py
@@ -388,6 +388,10 @@ def pad_inplace_%(name)s(ndarray[%(c_type)s] values,
N = len(values)
+ # GH 2778
+ if N == 0:
+ return
+
if limit is None:
lim = N
else:
@@ -419,6 +423,10 @@ def pad_2d_inplace_%(name)s(ndarray[%(c_type)s, ndim=2] values,
K, N = (<object> values).shape
+ # GH 2778
+ if N == 0:
+ return
+
if limit is None:
lim = N
else:
@@ -451,6 +459,10 @@ def backfill_2d_inplace_%(name)s(ndarray[%(c_type)s, ndim=2] values,
K, N = (<object> values).shape
+ # GH 2778
+ if N == 0:
+ return
+
if limit is None:
lim = N
else:
@@ -483,6 +495,10 @@ def backfill_inplace_%(name)s(ndarray[%(c_type)s] values,
N = len(values)
+ # GH 2778
+ if N == 0:
+ return
+
if limit is None:
lim = N
else:
@@ -502,6 +518,52 @@ def backfill_inplace_%(name)s(ndarray[%(c_type)s] values,
val = values[i]
"""
+
+diff_2d_template = """@cython.boundscheck(False)
+@cython.wraparound(False)
+def diff_2d_%(name)s(ndarray[%(c_type)s, ndim=2] arr,
+ ndarray[%(dest_type2)s, ndim=2] out,
+ Py_ssize_t periods, int axis):
+ cdef:
+ Py_ssize_t i, j, sx, sy
+
+ sx, sy = (<object> arr).shape
+ if arr.flags.f_contiguous:
+ if axis == 0:
+ if periods >= 0:
+ start, stop = periods, sx
+ else:
+ start, stop = 0, sx + periods
+ for j in range(sy):
+ for i in range(start, stop):
+ out[i, j] = arr[i, j] - arr[i - periods, j]
+ else:
+ if periods >= 0:
+ start, stop = periods, sy
+ else:
+ start, stop = 0, sy + periods
+ for j in range(start, stop):
+ for i in range(sx):
+ out[i, j] = arr[i, j] - arr[i, j - periods]
+ else:
+ if axis == 0:
+ if periods >= 0:
+ start, stop = periods, sx
+ else:
+ start, stop = 0, sx + periods
+ for i in range(start, stop):
+ for j in range(sy):
+ out[i, j] = arr[i, j] - arr[i - periods, j]
+ else:
+ if periods >= 0:
+ start, stop = periods, sy
+ else:
+ start, stop = 0, sy + periods
+ for i in range(sx):
+ for j in range(start, stop):
+ out[i, j] = arr[i, j] - arr[i, j - periods]
+"""
+
is_monotonic_template = """@cython.boundscheck(False)
@cython.wraparound(False)
def is_monotonic_%(name)s(ndarray[%(c_type)s] arr):
@@ -582,6 +644,965 @@ def groupby_%(name)s(ndarray[%(c_type)s] index, ndarray labels):
"""
+group_last_template = """@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_last_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[%(c_type)s, ndim=2] values,
+ ndarray[int64_t] labels):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ %(dest_type2)s val, count
+ ndarray[%(dest_type2)s, ndim=2] resx
+ ndarray[int64_t, ndim=2] nobs
+
+ nobs = np.zeros((<object> out).shape, dtype=np.int64)
+ resx = np.empty_like(out)
+
+ N, K = (<object> values).shape
+
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ resx[lab, j] = val
+
+ for i in range(len(counts)):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = resx[i, j]
+"""
+
+group_last_bin_template = """@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_last_bin_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[%(c_type)s, ndim=2] values,
+ ndarray[int64_t] bins):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+ %(dest_type2)s val, count
+ ndarray[%(dest_type2)s, ndim=2] resx, nobs
+
+ nobs = np.zeros_like(out)
+ resx = np.empty_like(out)
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+
+ N, K = (<object> values).shape
+
+ b = 0
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ resx[b, j] = val
+
+ for i in range(ngroups):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = resx[i, j]
+"""
+
+group_nth_bin_template = """@cython.boundscheck(False)
+@cython.wraparound(False)
+def group_nth_bin_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[%(c_type)s, ndim=2] values,
+ ndarray[int64_t] bins, int64_t rank):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+ %(dest_type2)s val, count
+ ndarray[%(dest_type2)s, ndim=2] resx, nobs
+
+ nobs = np.zeros_like(out)
+ resx = np.empty_like(out)
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+
+ N, K = (<object> values).shape
+
+ b = 0
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ if nobs[b, j] == rank:
+ resx[b, j] = val
+
+ for i in range(ngroups):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = resx[i, j]
+"""
+
+group_nth_template = """@cython.boundscheck(False)
+@cython.wraparound(False)
+def group_nth_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[%(c_type)s, ndim=2] values,
+ ndarray[int64_t] labels, int64_t rank):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ %(dest_type2)s val, count
+ ndarray[%(dest_type2)s, ndim=2] resx
+ ndarray[int64_t, ndim=2] nobs
+
+ nobs = np.zeros((<object> out).shape, dtype=np.int64)
+ resx = np.empty_like(out)
+
+ N, K = (<object> values).shape
+
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ if nobs[lab, j] == rank:
+ resx[lab, j] = val
+
+ for i in range(len(counts)):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = resx[i, j]
+"""
+
+group_add_template = """@cython.boundscheck(False)
+@cython.wraparound(False)
+def group_add_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[%(c_type)s, ndim=2] values,
+ ndarray[int64_t] labels):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ %(dest_type2)s val, count
+ ndarray[%(dest_type2)s, ndim=2] sumx, nobs
+
+ nobs = np.zeros_like(out)
+ sumx = np.zeros_like(out)
+
+ N, K = (<object> values).shape
+
+ if K > 1:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ sumx[lab, j] += val
+ else:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[lab, 0] += 1
+ sumx[lab, 0] += val
+
+ for i in range(len(counts)):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = sumx[i, j]
+"""
+
+group_add_bin_template = """@cython.boundscheck(False)
+@cython.wraparound(False)
+def group_add_bin_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[%(dest_type2)s, ndim=2] values,
+ ndarray[int64_t] bins):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b, nbins
+ %(dest_type2)s val, count
+ ndarray[%(dest_type2)s, ndim=2] sumx, nobs
+
+ nobs = np.zeros_like(out)
+ sumx = np.zeros_like(out)
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+ N, K = (<object> values).shape
+
+ b = 0
+ if K > 1:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ sumx[b, j] += val
+ else:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[b, 0] += 1
+ sumx[b, 0] += val
+
+ for i in range(ngroups):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = sumx[i, j]
+"""
+
+group_prod_template = """@cython.boundscheck(False)
+@cython.wraparound(False)
+def group_prod_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[%(c_type)s, ndim=2] values,
+ ndarray[int64_t] labels):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ %(dest_type2)s val, count
+ ndarray[%(dest_type2)s, ndim=2] prodx, nobs
+
+ nobs = np.zeros_like(out)
+ prodx = np.ones_like(out)
+
+ N, K = (<object> values).shape
+
+ if K > 1:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ prodx[lab, j] *= val
+ else:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[lab, 0] += 1
+ prodx[lab, 0] *= val
+
+ for i in range(len(counts)):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = prodx[i, j]
+"""
+
+group_prod_bin_template = """@cython.boundscheck(False)
+@cython.wraparound(False)
+def group_prod_bin_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[%(dest_type2)s, ndim=2] values,
+ ndarray[int64_t] bins):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+ %(dest_type2)s val, count
+ ndarray[%(dest_type2)s, ndim=2] prodx, nobs
+
+ nobs = np.zeros_like(out)
+ prodx = np.ones_like(out)
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+ N, K = (<object> values).shape
+
+ b = 0
+ if K > 1:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ prodx[b, j] *= val
+ else:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[b, 0] += 1
+ prodx[b, 0] *= val
+
+ for i in range(ngroups):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = prodx[i, j]
+"""
+
+group_var_template = """@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_var_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[%(dest_type2)s, ndim=2] values,
+ ndarray[int64_t] labels):
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ %(dest_type2)s val, ct
+ ndarray[%(dest_type2)s, ndim=2] nobs, sumx, sumxx
+
+ nobs = np.zeros_like(out)
+ sumx = np.zeros_like(out)
+ sumxx = np.zeros_like(out)
+
+ N, K = (<object> values).shape
+
+ if K > 1:
+ for i in range(N):
+
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ sumx[lab, j] += val
+ sumxx[lab, j] += val * val
+ else:
+ for i in range(N):
+
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ val = values[i, 0]
+ # not nan
+ if val == val:
+ nobs[lab, 0] += 1
+ sumx[lab, 0] += val
+ sumxx[lab, 0] += val * val
+
+
+ for i in range(len(counts)):
+ for j in range(K):
+ ct = nobs[i, j]
+ if ct < 2:
+ out[i, j] = nan
+ else:
+ out[i, j] = ((ct * sumxx[i, j] - sumx[i, j] * sumx[i, j]) /
+ (ct * ct - ct))
+"""
+
+group_var_bin_template = """@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_var_bin_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[%(dest_type2)s, ndim=2] values,
+ ndarray[int64_t] bins):
+
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+ %(dest_type2)s val, ct
+ ndarray[%(dest_type2)s, ndim=2] nobs, sumx, sumxx
+
+ nobs = np.zeros_like(out)
+ sumx = np.zeros_like(out)
+ sumxx = np.zeros_like(out)
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+
+ N, K = (<object> values).shape
+
+ b = 0
+ if K > 1:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ sumx[b, j] += val
+ sumxx[b, j] += val * val
+ else:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[b, 0] += 1
+ sumx[b, 0] += val
+ sumxx[b, 0] += val * val
+
+ for i in range(ngroups):
+ for j in range(K):
+ ct = nobs[i, j]
+ if ct < 2:
+ out[i, j] = nan
+ else:
+ out[i, j] = ((ct * sumxx[i, j] - sumx[i, j] * sumx[i, j]) /
+ (ct * ct - ct))
+"""
+
+# add passing bin edges, instead of labels
+
+
+#----------------------------------------------------------------------
+# group_min, group_max
+
+group_min_bin_template = """@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_min_bin_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[%(dest_type2)s, ndim=2] values,
+ ndarray[int64_t] bins):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+ %(dest_type2)s val, count
+ ndarray[%(dest_type2)s, ndim=2] minx, nobs
+
+ nobs = np.zeros_like(out)
+
+ minx = np.empty_like(out)
+ minx.fill(np.inf)
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+
+ N, K = (<object> values).shape
+
+ b = 0
+ if K > 1:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ if val < minx[b, j]:
+ minx[b, j] = val
+ else:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[b, 0] += 1
+ if val < minx[b, 0]:
+ minx[b, 0] = val
+
+ for i in range(ngroups):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = minx[i, j]
+"""
+
+group_max_template = """@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_max_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[%(dest_type2)s, ndim=2] values,
+ ndarray[int64_t] labels):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ %(dest_type2)s val, count
+ ndarray[%(dest_type2)s, ndim=2] maxx, nobs
+
+ nobs = np.zeros_like(out)
+
+ maxx = np.empty_like(out)
+ maxx.fill(-np.inf)
+
+ N, K = (<object> values).shape
+
+ if K > 1:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ if val > maxx[lab, j]:
+ maxx[lab, j] = val
+ else:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[lab, 0] += 1
+ if val > maxx[lab, 0]:
+ maxx[lab, 0] = val
+
+ for i in range(len(counts)):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = maxx[i, j]
+"""
+
+group_max_bin_template = """@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_max_bin_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[%(dest_type2)s, ndim=2] values,
+ ndarray[int64_t] bins):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+ %(dest_type2)s val, count
+ ndarray[%(dest_type2)s, ndim=2] maxx, nobs
+
+ nobs = np.zeros_like(out)
+ maxx = np.empty_like(out)
+ maxx.fill(-np.inf)
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+
+ N, K = (<object> values).shape
+
+ b = 0
+ if K > 1:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ if val > maxx[b, j]:
+ maxx[b, j] = val
+ else:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[b, 0] += 1
+ if val > maxx[b, 0]:
+ maxx[b, 0] = val
+
+ for i in range(ngroups):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = maxx[i, j]
+"""
+
+
+group_min_template = """@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_min_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[%(dest_type2)s, ndim=2] values,
+ ndarray[int64_t] labels):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ %(dest_type2)s val, count
+ ndarray[%(dest_type2)s, ndim=2] minx, nobs
+
+ nobs = np.zeros_like(out)
+
+ minx = np.empty_like(out)
+ minx.fill(np.inf)
+
+ N, K = (<object> values).shape
+
+ if K > 1:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ if val < minx[lab, j]:
+ minx[lab, j] = val
+ else:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[lab, 0] += 1
+ if val < minx[lab, 0]:
+ minx[lab, 0] = val
+
+ for i in range(len(counts)):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = minx[i, j]
+"""
+
+
+group_mean_template = """@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_mean_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[%(dest_type2)s, ndim=2] values,
+ ndarray[int64_t] labels):
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ %(dest_type2)s val, count
+ ndarray[%(dest_type2)s, ndim=2] sumx, nobs
+
+ nobs = np.zeros_like(out)
+ sumx = np.zeros_like(out)
+
+ N, K = (<object> values).shape
+
+ if K > 1:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ sumx[lab, j] += val
+ else:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ val = values[i, 0]
+ # not nan
+ if val == val:
+ nobs[lab, 0] += 1
+ sumx[lab, 0] += val
+
+ for i in range(len(counts)):
+ for j in range(K):
+ count = nobs[i, j]
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = sumx[i, j] / count
+"""
+
+group_mean_bin_template = """
+def group_mean_bin_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[%(dest_type2)s, ndim=2] values,
+ ndarray[int64_t] bins):
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+ %(dest_type2)s val, count
+ ndarray[%(dest_type2)s, ndim=2] sumx, nobs
+
+ nobs = np.zeros_like(out)
+ sumx = np.zeros_like(out)
+
+ N, K = (<object> values).shape
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+
+ b = 0
+ if K > 1:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ sumx[b, j] += val
+ else:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[b, 0] += 1
+ sumx[b, 0] += val
+
+ for i in range(ngroups):
+ for j in range(K):
+ count = nobs[i, j]
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = sumx[i, j] / count
+"""
+
+group_ohlc_template = """@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_ohlc_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[%(dest_type2)s, ndim=2] values,
+ ndarray[int64_t] bins):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+ %(dest_type2)s val, count
+ %(dest_type2)s vopen, vhigh, vlow, vclose, NA
+ bint got_first = 0
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+
+ N, K = (<object> values).shape
+
+ if out.shape[1] != 4:
+ raise ValueError('Output array must have 4 columns')
+
+ NA = np.nan
+
+ b = 0
+ if K > 1:
+ raise NotImplementedError
+ else:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ if not got_first:
+ out[b, 0] = NA
+ out[b, 1] = NA
+ out[b, 2] = NA
+ out[b, 3] = NA
+ else:
+ out[b, 0] = vopen
+ out[b, 1] = vhigh
+ out[b, 2] = vlow
+ out[b, 3] = vclose
+ b += 1
+ got_first = 0
+
+ counts[b] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ if not got_first:
+ got_first = 1
+ vopen = val
+ vlow = val
+ vhigh = val
+ else:
+ if val < vlow:
+ vlow = val
+ if val > vhigh:
+ vhigh = val
+ vclose = val
+
+ if not got_first:
+ out[b, 0] = NA
+ out[b, 1] = NA
+ out[b, 2] = NA
+ out[b, 3] = NA
+ else:
+ out[b, 0] = vopen
+ out[b, 1] = vhigh
+ out[b, 2] = vlow
+ out[b, 3] = vclose
+"""
+
arrmap_template = """@cython.wraparound(False)
@cython.boundscheck(False)
def arrmap_%(name)s(ndarray[%(c_type)s] index, object func):
@@ -1100,6 +2121,9 @@ def outer_join_indexer_%(name)s(ndarray[%(c_type)s] left,
ensure_functions = [
('float64', 'FLOAT64', 'float64'),
+ ('float32', 'FLOAT32', 'float32'),
+ ('int8', 'INT8', 'int8'),
+ ('int16', 'INT16', 'int16'),
('int32', 'INT32', 'int32'),
('int64', 'INT64', 'int64'),
# ('platform_int', 'INT', 'int_'),
@@ -1133,26 +2157,30 @@ def put2d_%(name)s_%(dest_type)s(ndarray[%(c_type)s, ndim=2, cast=True] values,
#-------------------------------------------------------------------------
# Generators
-def generate_put_functions():
- function_list = [
- ('float64', 'float64_t', 'object'),
- ('float64', 'float64_t', 'float64_t'),
- ('object', 'object', 'object'),
- ('int32', 'int32_t', 'int64_t'),
- ('int32', 'int32_t', 'float64_t'),
- ('int32', 'int32_t', 'object'),
- ('int64', 'int64_t', 'int64_t'),
- ('int64', 'int64_t', 'float64_t'),
- ('int64', 'int64_t', 'object'),
- ('bool', 'uint8_t', 'uint8_t'),
- ('bool', 'uint8_t', 'object')
- ]
+def generate_put_template(template, use_ints=True, use_floats=True):
+ floats_list = [
+ ('float64', 'float64_t', 'float64_t', 'np.float64'),
+ ('float32', 'float32_t', 'float32_t', 'np.float32'),
+ ]
+ ints_list = [
+ ('int8', 'int8_t', 'float32_t', 'np.float32'),
+ ('int16', 'int16_t', 'float32_t', 'np.float32'),
+ ('int32', 'int32_t', 'float64_t', 'np.float64'),
+ ('int64', 'int64_t', 'float64_t', 'np.float64'),
+ ]
+ function_list = []
+ if use_floats:
+ function_list.extend(floats_list)
+ if use_ints:
+ function_list.extend(ints_list)
output = StringIO()
- for name, c_type, dest_type in function_list:
- func = put2d_template % {'name' : name, 'c_type' : c_type,
- 'dest_type' : dest_type.replace('_t', ''),
- 'dest_type2' : dest_type}
+ for name, c_type, dest_type, dest_dtype in function_list:
+ func = template % {'name' : name,
+ 'c_type' : c_type,
+ 'dest_type' : dest_type.replace('_t', ''),
+ 'dest_type2' : dest_type,
+ 'dest_dtype' : dest_dtype}
output.write(func)
return output.getvalue()
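For context, the template expansion above is plain `%`-formatting: each template string is rendered once per dtype row, with extra dict keys silently ignored by templates that do not use them. A minimal standalone sketch (the template and rows here are illustrative, not the real put2d/groupby templates):

```python
from io import StringIO

# Hypothetical one-line template; the real templates span whole functions.
template = """cpdef ensure_%(name)s(object arr):
    return np.asarray(arr, dtype=%(dest_dtype)s)
"""

function_list = [
    ('float32', 'float32_t', 'float32_t', 'np.float32'),
    ('int8', 'int8_t', 'float32_t', 'np.float32'),
]

output = StringIO()
for name, c_type, dest_type, dest_dtype in function_list:
    # Keys not referenced by this particular template are harmless extras.
    output.write(template % {'name': name,
                             'c_type': c_type,
                             'dest_type': dest_type.replace('_t', ''),
                             'dest_type2': dest_type,
                             'dest_dtype': dest_dtype})

generated = output.getvalue()
print(generated)
```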
@@ -1160,10 +2188,13 @@ def generate_put_functions():
# name, ctype, capable of holding NA
function_list = [
('float64', 'float64_t', 'np.float64', True),
- ('object', 'object', 'object', True),
+ ('float32', 'float32_t', 'np.float32', True),
+    ('object', 'object', 'object', True),
+ ('int8', 'int8_t', 'np.int8', False),
+ ('int16', 'int16_t', 'np.int16', False),
('int32', 'int32_t', 'np.int32', False),
('int64', 'int64_t', 'np.int64', False),
- ('bool', 'uint8_t', 'np.bool', False)
+ ('bool', 'uint8_t', 'np.bool', False)
]
def generate_from_template(template, ndim=1, exclude=None):
@@ -1178,6 +2209,25 @@ def generate_from_template(template, ndim=1, exclude=None):
output.write(func)
return output.getvalue()
+put_2d = [diff_2d_template]
+groupbys = [group_last_template,
+ group_last_bin_template,
+ group_nth_template,
+ group_nth_bin_template,
+ group_add_template,
+ group_add_bin_template,
+ group_prod_template,
+ group_prod_bin_template,
+ group_var_template,
+ group_var_bin_template,
+ group_mean_template,
+ group_mean_bin_template,
+ group_min_template,
+ group_min_bin_template,
+ group_max_template,
+ group_max_bin_template,
+ group_ohlc_template]
+
templates_1d = [map_indices_template,
pad_template,
backfill_template,
@@ -1211,6 +2261,12 @@ def generate_take_cython_file(path='generated.pyx'):
for template in templates_2d:
print >> f, generate_from_template(template, ndim=2)
+ for template in put_2d:
+ print >> f, generate_put_template(template)
+
+ for template in groupbys:
+        print >> f, generate_put_template(template, use_ints=False)
+
# for template in templates_1d_datetime:
# print >> f, generate_from_template_datetime(template)
diff --git a/pandas/src/generated.pyx b/pandas/src/generated.pyx
index 5ecd8439a13ec..a20fb5668aec9 100644
--- a/pandas/src/generated.pyx
+++ b/pandas/src/generated.pyx
@@ -57,6 +57,36 @@ cpdef ensure_float64(object arr):
return np.array(arr, dtype=np.float64)
+cpdef ensure_float32(object arr):
+ if util.is_array(arr):
+ if (<ndarray> arr).descr.type_num == NPY_FLOAT32:
+ return arr
+ else:
+ return arr.astype(np.float32)
+ else:
+ return np.array(arr, dtype=np.float32)
+
+
+cpdef ensure_int8(object arr):
+ if util.is_array(arr):
+ if (<ndarray> arr).descr.type_num == NPY_INT8:
+ return arr
+ else:
+ return arr.astype(np.int8)
+ else:
+ return np.array(arr, dtype=np.int8)
+
+
+cpdef ensure_int16(object arr):
+ if util.is_array(arr):
+ if (<ndarray> arr).descr.type_num == NPY_INT16:
+ return arr
+ else:
+ return arr.astype(np.int16)
+ else:
+ return np.array(arr, dtype=np.int16)
+
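The generated `ensure_*` helpers all follow one contract: return the input untouched when it already has the target dtype, otherwise convert. A pure-NumPy sketch of that contract (the name `ensure_dtype` is hypothetical, not part of the patch):

```python
import numpy as np

def ensure_dtype(arr, dtype):
    # Mirrors the generated ensure_* helpers: no copy when the dtype
    # already matches, astype/convert otherwise.
    if isinstance(arr, np.ndarray):
        if arr.dtype == dtype:
            return arr
        return arr.astype(dtype)
    return np.array(arr, dtype=dtype)

a = np.arange(3, dtype=np.int16)
assert ensure_dtype(a, np.int16) is a                      # same object, no copy
assert ensure_dtype([1, 2], np.float32).dtype == np.float32
```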
+
cpdef ensure_int32(object arr):
if util.is_array(arr):
if (<ndarray> arr).descr.type_num == NPY_INT32:
@@ -109,6 +139,28 @@ cpdef map_indices_float64(ndarray[float64_t] index):
return result
+@cython.wraparound(False)
+@cython.boundscheck(False)
+cpdef map_indices_float32(ndarray[float32_t] index):
+ '''
+ Produce a dict mapping the values of the input array to their respective
+ locations.
+
+ Example:
+ array(['hi', 'there']) --> {'hi' : 0 , 'there' : 1}
+
+ Better to do this with Cython because of the enormous speed boost.
+ '''
+ cdef Py_ssize_t i, length
+ cdef dict result = {}
+
+ length = len(index)
+
+ for i in range(length):
+ result[index[i]] = i
+
+ return result
+
@cython.wraparound(False)
@cython.boundscheck(False)
cpdef map_indices_object(ndarray[object] index):
@@ -131,6 +183,50 @@ cpdef map_indices_object(ndarray[object] index):
return result
+@cython.wraparound(False)
+@cython.boundscheck(False)
+cpdef map_indices_int8(ndarray[int8_t] index):
+ '''
+ Produce a dict mapping the values of the input array to their respective
+ locations.
+
+ Example:
+ array(['hi', 'there']) --> {'hi' : 0 , 'there' : 1}
+
+ Better to do this with Cython because of the enormous speed boost.
+ '''
+ cdef Py_ssize_t i, length
+ cdef dict result = {}
+
+ length = len(index)
+
+ for i in range(length):
+ result[index[i]] = i
+
+ return result
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+cpdef map_indices_int16(ndarray[int16_t] index):
+ '''
+ Produce a dict mapping the values of the input array to their respective
+ locations.
+
+ Example:
+ array(['hi', 'there']) --> {'hi' : 0 , 'there' : 1}
+
+ Better to do this with Cython because of the enormous speed boost.
+ '''
+ cdef Py_ssize_t i, length
+ cdef dict result = {}
+
+ length = len(index)
+
+ for i in range(length):
+ result[index[i]] = i
+
+ return result
+
@cython.wraparound(False)
@cython.boundscheck(False)
cpdef map_indices_int32(ndarray[int32_t] index):
@@ -259,6 +355,67 @@ def pad_float64(ndarray[float64_t] old, ndarray[float64_t] new,
return indexer
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def pad_float32(ndarray[float32_t] old, ndarray[float32_t] new,
+ limit=None):
+ cdef Py_ssize_t i, j, nleft, nright
+ cdef ndarray[int64_t, ndim=1] indexer
+ cdef float32_t cur, next
+ cdef int lim, fill_count = 0
+
+ nleft = len(old)
+ nright = len(new)
+ indexer = np.empty(nright, dtype=np.int64)
+ indexer.fill(-1)
+
+ if limit is None:
+ lim = nright
+ else:
+ if limit < 0:
+ raise ValueError('Limit must be non-negative')
+ lim = limit
+
+ if nleft == 0 or nright == 0 or new[nright - 1] < old[0]:
+ return indexer
+
+ i = j = 0
+
+ cur = old[0]
+
+ while j <= nright - 1 and new[j] < cur:
+ j += 1
+
+ while True:
+ if j == nright:
+ break
+
+ if i == nleft - 1:
+ while j < nright:
+ if new[j] == cur:
+ indexer[j] = i
+ elif new[j] > cur and fill_count < lim:
+ indexer[j] = i
+ fill_count += 1
+ j += 1
+ break
+
+ next = old[i + 1]
+
+ while j < nright and cur <= new[j] < next:
+ if new[j] == cur:
+ indexer[j] = i
+ elif fill_count < lim:
+ indexer[j] = i
+ fill_count += 1
+ j += 1
+
+ fill_count = 0
+ i += 1
+ cur = next
+
+ return indexer
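Semantically, the `pad_*` indexer functions map each element of a sorted `new` array to the index of the last element of sorted `old` that is less than or equal to it (forward fill), with `-1` marking no match. An unoptimized NumPy sketch of that mapping (the `limit` argument, which caps consecutive fills, is not modeled here):

```python
import numpy as np

def pad_indexer(old, new):
    # Reference version of the Cython pad_* functions above: for each
    # element of `new`, the index of the last element of `old` <= it,
    # or -1 if none exists. Both inputs must be sorted ascending.
    old = np.asarray(old)
    return np.searchsorted(old, np.asarray(new), side='right') - 1

pad_indexer([1.0, 5.0, 10.0], [0.0, 1.0, 6.0, 12.0])  # -> array([-1,  0,  1,  2])
```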
+
@cython.boundscheck(False)
@cython.wraparound(False)
def pad_object(ndarray[object] old, ndarray[object] new,
@@ -322,11 +479,11 @@ def pad_object(ndarray[object] old, ndarray[object] new,
@cython.boundscheck(False)
@cython.wraparound(False)
-def pad_int32(ndarray[int32_t] old, ndarray[int32_t] new,
+def pad_int8(ndarray[int8_t] old, ndarray[int8_t] new,
limit=None):
cdef Py_ssize_t i, j, nleft, nright
cdef ndarray[int64_t, ndim=1] indexer
- cdef int32_t cur, next
+ cdef int8_t cur, next
cdef int lim, fill_count = 0
nleft = len(old)
@@ -383,11 +540,11 @@ def pad_int32(ndarray[int32_t] old, ndarray[int32_t] new,
@cython.boundscheck(False)
@cython.wraparound(False)
-def pad_int64(ndarray[int64_t] old, ndarray[int64_t] new,
+def pad_int16(ndarray[int16_t] old, ndarray[int16_t] new,
limit=None):
cdef Py_ssize_t i, j, nleft, nright
cdef ndarray[int64_t, ndim=1] indexer
- cdef int64_t cur, next
+ cdef int16_t cur, next
cdef int lim, fill_count = 0
nleft = len(old)
@@ -444,11 +601,11 @@ def pad_int64(ndarray[int64_t] old, ndarray[int64_t] new,
@cython.boundscheck(False)
@cython.wraparound(False)
-def pad_bool(ndarray[uint8_t] old, ndarray[uint8_t] new,
+def pad_int32(ndarray[int32_t] old, ndarray[int32_t] new,
limit=None):
cdef Py_ssize_t i, j, nleft, nright
cdef ndarray[int64_t, ndim=1] indexer
- cdef uint8_t cur, next
+ cdef int32_t cur, next
cdef int lim, fill_count = 0
nleft = len(old)
@@ -503,14 +660,13 @@ def pad_bool(ndarray[uint8_t] old, ndarray[uint8_t] new,
return indexer
-
@cython.boundscheck(False)
@cython.wraparound(False)
-def backfill_float64(ndarray[float64_t] old, ndarray[float64_t] new,
- limit=None):
+def pad_int64(ndarray[int64_t] old, ndarray[int64_t] new,
+ limit=None):
cdef Py_ssize_t i, j, nleft, nright
cdef ndarray[int64_t, ndim=1] indexer
- cdef float64_t cur, prev
+ cdef int64_t cur, next
cdef int lim, fill_count = 0
nleft = len(old)
@@ -525,54 +681,53 @@ def backfill_float64(ndarray[float64_t] old, ndarray[float64_t] new,
raise ValueError('Limit must be non-negative')
lim = limit
- if nleft == 0 or nright == 0 or new[0] > old[nleft - 1]:
+ if nleft == 0 or nright == 0 or new[nright - 1] < old[0]:
return indexer
- i = nleft - 1
- j = nright - 1
+ i = j = 0
- cur = old[nleft - 1]
+ cur = old[0]
- while j >= 0 and new[j] > cur:
- j -= 1
+ while j <= nright - 1 and new[j] < cur:
+ j += 1
while True:
- if j < 0:
+ if j == nright:
break
- if i == 0:
- while j >= 0:
+ if i == nleft - 1:
+ while j < nright:
if new[j] == cur:
indexer[j] = i
- elif new[j] < cur and fill_count < lim:
+ elif new[j] > cur and fill_count < lim:
indexer[j] = i
fill_count += 1
- j -= 1
+ j += 1
break
- prev = old[i - 1]
+ next = old[i + 1]
- while j >= 0 and prev < new[j] <= cur:
+ while j < nright and cur <= new[j] < next:
if new[j] == cur:
indexer[j] = i
- elif new[j] < cur and fill_count < lim:
+ elif fill_count < lim:
indexer[j] = i
fill_count += 1
- j -= 1
+ j += 1
fill_count = 0
- i -= 1
- cur = prev
+ i += 1
+ cur = next
return indexer
@cython.boundscheck(False)
@cython.wraparound(False)
-def backfill_object(ndarray[object] old, ndarray[object] new,
- limit=None):
+def pad_bool(ndarray[uint8_t] old, ndarray[uint8_t] new,
+ limit=None):
cdef Py_ssize_t i, j, nleft, nright
cdef ndarray[int64_t, ndim=1] indexer
- cdef object cur, prev
+ cdef uint8_t cur, next
cdef int lim, fill_count = 0
nleft = len(old)
@@ -587,54 +742,54 @@ def backfill_object(ndarray[object] old, ndarray[object] new,
raise ValueError('Limit must be non-negative')
lim = limit
- if nleft == 0 or nright == 0 or new[0] > old[nleft - 1]:
+ if nleft == 0 or nright == 0 or new[nright - 1] < old[0]:
return indexer
- i = nleft - 1
- j = nright - 1
+ i = j = 0
- cur = old[nleft - 1]
+ cur = old[0]
- while j >= 0 and new[j] > cur:
- j -= 1
+ while j <= nright - 1 and new[j] < cur:
+ j += 1
while True:
- if j < 0:
+ if j == nright:
break
- if i == 0:
- while j >= 0:
+ if i == nleft - 1:
+ while j < nright:
if new[j] == cur:
indexer[j] = i
- elif new[j] < cur and fill_count < lim:
+ elif new[j] > cur and fill_count < lim:
indexer[j] = i
fill_count += 1
- j -= 1
+ j += 1
break
- prev = old[i - 1]
+ next = old[i + 1]
- while j >= 0 and prev < new[j] <= cur:
+ while j < nright and cur <= new[j] < next:
if new[j] == cur:
indexer[j] = i
- elif new[j] < cur and fill_count < lim:
+ elif fill_count < lim:
indexer[j] = i
fill_count += 1
- j -= 1
+ j += 1
fill_count = 0
- i -= 1
- cur = prev
+ i += 1
+ cur = next
return indexer
+
@cython.boundscheck(False)
@cython.wraparound(False)
-def backfill_int32(ndarray[int32_t] old, ndarray[int32_t] new,
+def backfill_float64(ndarray[float64_t] old, ndarray[float64_t] new,
limit=None):
cdef Py_ssize_t i, j, nleft, nright
cdef ndarray[int64_t, ndim=1] indexer
- cdef int32_t cur, prev
+ cdef float64_t cur, prev
cdef int lim, fill_count = 0
nleft = len(old)
@@ -692,11 +847,11 @@ def backfill_int32(ndarray[int32_t] old, ndarray[int32_t] new,
@cython.boundscheck(False)
@cython.wraparound(False)
-def backfill_int64(ndarray[int64_t] old, ndarray[int64_t] new,
+def backfill_float32(ndarray[float32_t] old, ndarray[float32_t] new,
limit=None):
cdef Py_ssize_t i, j, nleft, nright
cdef ndarray[int64_t, ndim=1] indexer
- cdef int64_t cur, prev
+ cdef float32_t cur, prev
cdef int lim, fill_count = 0
nleft = len(old)
@@ -754,11 +909,11 @@ def backfill_int64(ndarray[int64_t] old, ndarray[int64_t] new,
@cython.boundscheck(False)
@cython.wraparound(False)
-def backfill_bool(ndarray[uint8_t] old, ndarray[uint8_t] new,
+def backfill_object(ndarray[object] old, ndarray[object] new,
limit=None):
cdef Py_ssize_t i, j, nleft, nright
cdef ndarray[int64_t, ndim=1] indexer
- cdef uint8_t cur, prev
+ cdef object cur, prev
cdef int lim, fill_count = 0
nleft = len(old)
@@ -814,192 +969,332 @@ def backfill_bool(ndarray[uint8_t] old, ndarray[uint8_t] new,
return indexer
-
@cython.boundscheck(False)
@cython.wraparound(False)
-def pad_inplace_float64(ndarray[float64_t] values,
- ndarray[uint8_t, cast=True] mask,
- limit=None):
- cdef Py_ssize_t i, N
- cdef float64_t val
+def backfill_int8(ndarray[int8_t] old, ndarray[int8_t] new,
+ limit=None):
+ cdef Py_ssize_t i, j, nleft, nright
+ cdef ndarray[int64_t, ndim=1] indexer
+ cdef int8_t cur, prev
cdef int lim, fill_count = 0
- N = len(values)
+ nleft = len(old)
+ nright = len(new)
+ indexer = np.empty(nright, dtype=np.int64)
+ indexer.fill(-1)
if limit is None:
- lim = N
+ lim = nright
else:
if limit < 0:
raise ValueError('Limit must be non-negative')
lim = limit
- val = values[0]
- for i in range(N):
- if mask[i]:
- if fill_count >= lim:
- continue
- fill_count += 1
- values[i] = val
- else:
- fill_count = 0
- val = values[i]
+ if nleft == 0 or nright == 0 or new[0] > old[nleft - 1]:
+ return indexer
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def pad_inplace_object(ndarray[object] values,
- ndarray[uint8_t, cast=True] mask,
- limit=None):
- cdef Py_ssize_t i, N
- cdef object val
- cdef int lim, fill_count = 0
+ i = nleft - 1
+ j = nright - 1
- N = len(values)
+ cur = old[nleft - 1]
- if limit is None:
- lim = N
- else:
- if limit < 0:
- raise ValueError('Limit must be non-negative')
- lim = limit
+ while j >= 0 and new[j] > cur:
+ j -= 1
- val = values[0]
- for i in range(N):
- if mask[i]:
- if fill_count >= lim:
- continue
- fill_count += 1
- values[i] = val
- else:
- fill_count = 0
- val = values[i]
+ while True:
+ if j < 0:
+ break
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def pad_inplace_int32(ndarray[int32_t] values,
- ndarray[uint8_t, cast=True] mask,
- limit=None):
- cdef Py_ssize_t i, N
- cdef int32_t val
- cdef int lim, fill_count = 0
+ if i == 0:
+ while j >= 0:
+ if new[j] == cur:
+ indexer[j] = i
+ elif new[j] < cur and fill_count < lim:
+ indexer[j] = i
+ fill_count += 1
+ j -= 1
+ break
- N = len(values)
+ prev = old[i - 1]
- if limit is None:
- lim = N
- else:
- if limit < 0:
- raise ValueError('Limit must be non-negative')
- lim = limit
+ while j >= 0 and prev < new[j] <= cur:
+ if new[j] == cur:
+ indexer[j] = i
+ elif new[j] < cur and fill_count < lim:
+ indexer[j] = i
+ fill_count += 1
+ j -= 1
- val = values[0]
- for i in range(N):
- if mask[i]:
- if fill_count >= lim:
- continue
- fill_count += 1
- values[i] = val
- else:
- fill_count = 0
- val = values[i]
+ fill_count = 0
+ i -= 1
+ cur = prev
+
+ return indexer
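The `backfill_*` indexers are the mirror image: each element of `new` maps to the first element of `old` greater than or equal to it. A reference sketch under the same assumptions (sorted inputs, `limit` not modeled):

```python
import numpy as np

def backfill_indexer(old, new):
    # Reference version of the Cython backfill_* functions above: for each
    # element of `new`, the index of the first element of `old` >= it, or
    # -1 if every element of `old` is smaller.
    old = np.asarray(old)
    idx = np.searchsorted(old, np.asarray(new), side='left').astype(np.int64)
    idx[idx == len(old)] = -1
    return idx

backfill_indexer([1, 5, 10], [0, 1, 6, 12])  # -> array([ 0,  0,  2, -1])
```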
@cython.boundscheck(False)
@cython.wraparound(False)
-def pad_inplace_int64(ndarray[int64_t] values,
- ndarray[uint8_t, cast=True] mask,
- limit=None):
- cdef Py_ssize_t i, N
- cdef int64_t val
+def backfill_int16(ndarray[int16_t] old, ndarray[int16_t] new,
+ limit=None):
+ cdef Py_ssize_t i, j, nleft, nright
+ cdef ndarray[int64_t, ndim=1] indexer
+ cdef int16_t cur, prev
cdef int lim, fill_count = 0
- N = len(values)
+ nleft = len(old)
+ nright = len(new)
+ indexer = np.empty(nright, dtype=np.int64)
+ indexer.fill(-1)
if limit is None:
- lim = N
+ lim = nright
else:
if limit < 0:
raise ValueError('Limit must be non-negative')
lim = limit
- val = values[0]
- for i in range(N):
- if mask[i]:
- if fill_count >= lim:
- continue
- fill_count += 1
- values[i] = val
- else:
- fill_count = 0
- val = values[i]
+ if nleft == 0 or nright == 0 or new[0] > old[nleft - 1]:
+ return indexer
+
+ i = nleft - 1
+ j = nright - 1
+
+ cur = old[nleft - 1]
+
+ while j >= 0 and new[j] > cur:
+ j -= 1
+
+ while True:
+ if j < 0:
+ break
+
+ if i == 0:
+ while j >= 0:
+ if new[j] == cur:
+ indexer[j] = i
+ elif new[j] < cur and fill_count < lim:
+ indexer[j] = i
+ fill_count += 1
+ j -= 1
+ break
+
+ prev = old[i - 1]
+
+ while j >= 0 and prev < new[j] <= cur:
+ if new[j] == cur:
+ indexer[j] = i
+ elif new[j] < cur and fill_count < lim:
+ indexer[j] = i
+ fill_count += 1
+ j -= 1
+
+ fill_count = 0
+ i -= 1
+ cur = prev
+
+ return indexer
@cython.boundscheck(False)
@cython.wraparound(False)
-def pad_inplace_bool(ndarray[uint8_t] values,
- ndarray[uint8_t, cast=True] mask,
- limit=None):
- cdef Py_ssize_t i, N
- cdef uint8_t val
+def backfill_int32(ndarray[int32_t] old, ndarray[int32_t] new,
+ limit=None):
+ cdef Py_ssize_t i, j, nleft, nright
+ cdef ndarray[int64_t, ndim=1] indexer
+ cdef int32_t cur, prev
cdef int lim, fill_count = 0
- N = len(values)
+ nleft = len(old)
+ nright = len(new)
+ indexer = np.empty(nright, dtype=np.int64)
+ indexer.fill(-1)
if limit is None:
- lim = N
+ lim = nright
else:
if limit < 0:
raise ValueError('Limit must be non-negative')
lim = limit
- val = values[0]
- for i in range(N):
- if mask[i]:
- if fill_count >= lim:
- continue
- fill_count += 1
- values[i] = val
- else:
- fill_count = 0
- val = values[i]
+ if nleft == 0 or nright == 0 or new[0] > old[nleft - 1]:
+ return indexer
+
+ i = nleft - 1
+ j = nright - 1
+
+ cur = old[nleft - 1]
+
+ while j >= 0 and new[j] > cur:
+ j -= 1
+ while True:
+ if j < 0:
+ break
+
+ if i == 0:
+ while j >= 0:
+ if new[j] == cur:
+ indexer[j] = i
+ elif new[j] < cur and fill_count < lim:
+ indexer[j] = i
+ fill_count += 1
+ j -= 1
+ break
+
+ prev = old[i - 1]
+
+ while j >= 0 and prev < new[j] <= cur:
+ if new[j] == cur:
+ indexer[j] = i
+ elif new[j] < cur and fill_count < lim:
+ indexer[j] = i
+ fill_count += 1
+ j -= 1
+
+ fill_count = 0
+ i -= 1
+ cur = prev
+
+ return indexer
@cython.boundscheck(False)
@cython.wraparound(False)
-def backfill_inplace_float64(ndarray[float64_t] values,
- ndarray[uint8_t, cast=True] mask,
- limit=None):
- cdef Py_ssize_t i, N
- cdef float64_t val
+def backfill_int64(ndarray[int64_t] old, ndarray[int64_t] new,
+ limit=None):
+ cdef Py_ssize_t i, j, nleft, nright
+ cdef ndarray[int64_t, ndim=1] indexer
+ cdef int64_t cur, prev
cdef int lim, fill_count = 0
- N = len(values)
+ nleft = len(old)
+ nright = len(new)
+ indexer = np.empty(nright, dtype=np.int64)
+ indexer.fill(-1)
if limit is None:
- lim = N
+ lim = nright
else:
if limit < 0:
raise ValueError('Limit must be non-negative')
lim = limit
- val = values[N - 1]
- for i in range(N - 1, -1 , -1):
- if mask[i]:
- if fill_count >= lim:
- continue
- fill_count += 1
- values[i] = val
- else:
- fill_count = 0
- val = values[i]
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def backfill_inplace_object(ndarray[object] values,
- ndarray[uint8_t, cast=True] mask,
- limit=None):
- cdef Py_ssize_t i, N
- cdef object val
- cdef int lim, fill_count = 0
+ if nleft == 0 or nright == 0 or new[0] > old[nleft - 1]:
+ return indexer
+
+ i = nleft - 1
+ j = nright - 1
+
+ cur = old[nleft - 1]
+
+ while j >= 0 and new[j] > cur:
+ j -= 1
+
+ while True:
+ if j < 0:
+ break
+
+ if i == 0:
+ while j >= 0:
+ if new[j] == cur:
+ indexer[j] = i
+ elif new[j] < cur and fill_count < lim:
+ indexer[j] = i
+ fill_count += 1
+ j -= 1
+ break
+
+ prev = old[i - 1]
+
+ while j >= 0 and prev < new[j] <= cur:
+ if new[j] == cur:
+ indexer[j] = i
+ elif new[j] < cur and fill_count < lim:
+ indexer[j] = i
+ fill_count += 1
+ j -= 1
+
+ fill_count = 0
+ i -= 1
+ cur = prev
+
+ return indexer
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def backfill_bool(ndarray[uint8_t] old, ndarray[uint8_t] new,
+ limit=None):
+ cdef Py_ssize_t i, j, nleft, nright
+ cdef ndarray[int64_t, ndim=1] indexer
+ cdef uint8_t cur, prev
+ cdef int lim, fill_count = 0
+
+ nleft = len(old)
+ nright = len(new)
+ indexer = np.empty(nright, dtype=np.int64)
+ indexer.fill(-1)
+
+ if limit is None:
+ lim = nright
+ else:
+ if limit < 0:
+ raise ValueError('Limit must be non-negative')
+ lim = limit
+
+ if nleft == 0 or nright == 0 or new[0] > old[nleft - 1]:
+ return indexer
+
+ i = nleft - 1
+ j = nright - 1
+
+ cur = old[nleft - 1]
+
+ while j >= 0 and new[j] > cur:
+ j -= 1
+
+ while True:
+ if j < 0:
+ break
+
+ if i == 0:
+ while j >= 0:
+ if new[j] == cur:
+ indexer[j] = i
+ elif new[j] < cur and fill_count < lim:
+ indexer[j] = i
+ fill_count += 1
+ j -= 1
+ break
+
+ prev = old[i - 1]
+
+ while j >= 0 and prev < new[j] <= cur:
+ if new[j] == cur:
+ indexer[j] = i
+ elif new[j] < cur and fill_count < lim:
+ indexer[j] = i
+ fill_count += 1
+ j -= 1
+
+ fill_count = 0
+ i -= 1
+ cur = prev
+
+ return indexer
+
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def pad_inplace_float64(ndarray[float64_t] values,
+ ndarray[uint8_t, cast=True] mask,
+ limit=None):
+ cdef Py_ssize_t i, N
+ cdef float64_t val
+ cdef int lim, fill_count = 0
N = len(values)
+ # GH 2778
+ if N == 0:
+ return
+
if limit is None:
lim = N
else:
@@ -1007,8 +1302,8 @@ def backfill_inplace_object(ndarray[object] values,
raise ValueError('Limit must be non-negative')
lim = limit
- val = values[N - 1]
- for i in range(N - 1, -1 , -1):
+ val = values[0]
+ for i in range(N):
if mask[i]:
if fill_count >= lim:
continue
@@ -1017,17 +1312,22 @@ def backfill_inplace_object(ndarray[object] values,
else:
fill_count = 0
val = values[i]
+
@cython.boundscheck(False)
@cython.wraparound(False)
-def backfill_inplace_int32(ndarray[int32_t] values,
- ndarray[uint8_t, cast=True] mask,
- limit=None):
+def pad_inplace_float32(ndarray[float32_t] values,
+ ndarray[uint8_t, cast=True] mask,
+ limit=None):
cdef Py_ssize_t i, N
- cdef int32_t val
+ cdef float32_t val
cdef int lim, fill_count = 0
N = len(values)
+ # GH 2778
+ if N == 0:
+ return
+
if limit is None:
lim = N
else:
@@ -1035,8 +1335,8 @@ def backfill_inplace_int32(ndarray[int32_t] values,
raise ValueError('Limit must be non-negative')
lim = limit
- val = values[N - 1]
- for i in range(N - 1, -1 , -1):
+ val = values[0]
+ for i in range(N):
if mask[i]:
if fill_count >= lim:
continue
@@ -1045,17 +1345,22 @@ def backfill_inplace_int32(ndarray[int32_t] values,
else:
fill_count = 0
val = values[i]
+
@cython.boundscheck(False)
@cython.wraparound(False)
-def backfill_inplace_int64(ndarray[int64_t] values,
- ndarray[uint8_t, cast=True] mask,
- limit=None):
+def pad_inplace_object(ndarray[object] values,
+ ndarray[uint8_t, cast=True] mask,
+ limit=None):
cdef Py_ssize_t i, N
- cdef int64_t val
+ cdef object val
cdef int lim, fill_count = 0
N = len(values)
+ # GH 2778
+ if N == 0:
+ return
+
if limit is None:
lim = N
else:
@@ -1063,8 +1368,8 @@ def backfill_inplace_int64(ndarray[int64_t] values,
raise ValueError('Limit must be non-negative')
lim = limit
- val = values[N - 1]
- for i in range(N - 1, -1 , -1):
+ val = values[0]
+ for i in range(N):
if mask[i]:
if fill_count >= lim:
continue
@@ -1073,17 +1378,22 @@ def backfill_inplace_int64(ndarray[int64_t] values,
else:
fill_count = 0
val = values[i]
+
@cython.boundscheck(False)
@cython.wraparound(False)
-def backfill_inplace_bool(ndarray[uint8_t] values,
- ndarray[uint8_t, cast=True] mask,
- limit=None):
+def pad_inplace_int8(ndarray[int8_t] values,
+ ndarray[uint8_t, cast=True] mask,
+ limit=None):
cdef Py_ssize_t i, N
- cdef uint8_t val
+ cdef int8_t val
cdef int lim, fill_count = 0
N = len(values)
+ # GH 2778
+ if N == 0:
+ return
+
if limit is None:
lim = N
else:
@@ -1091,8 +1401,8 @@ def backfill_inplace_bool(ndarray[uint8_t] values,
raise ValueError('Limit must be non-negative')
lim = limit
- val = values[N - 1]
- for i in range(N - 1, -1 , -1):
+ val = values[0]
+ for i in range(N):
if mask[i]:
if fill_count >= lim:
continue
@@ -1104,14 +1414,18 @@ def backfill_inplace_bool(ndarray[uint8_t] values,
@cython.boundscheck(False)
@cython.wraparound(False)
-def pad_2d_inplace_float64(ndarray[float64_t, ndim=2] values,
- ndarray[uint8_t, ndim=2] mask,
- limit=None):
- cdef Py_ssize_t i, j, N, K
- cdef float64_t val
+def pad_inplace_int16(ndarray[int16_t] values,
+ ndarray[uint8_t, cast=True] mask,
+ limit=None):
+ cdef Py_ssize_t i, N
+ cdef int16_t val
cdef int lim, fill_count = 0
- K, N = (<object> values).shape
+ N = len(values)
+
+ # GH 2778
+ if N == 0:
+ return
if limit is None:
lim = N
@@ -1120,28 +1434,31 @@ def pad_2d_inplace_float64(ndarray[float64_t, ndim=2] values,
raise ValueError('Limit must be non-negative')
lim = limit
- for j in range(K):
- fill_count = 0
- val = values[j, 0]
- for i in range(N):
- if mask[j, i]:
- if fill_count >= lim:
- continue
- fill_count += 1
- values[j, i] = val
- else:
- fill_count = 0
- val = values[j, i]
+ val = values[0]
+ for i in range(N):
+ if mask[i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[i] = val
+ else:
+ fill_count = 0
+ val = values[i]
+
@cython.boundscheck(False)
@cython.wraparound(False)
-def pad_2d_inplace_object(ndarray[object, ndim=2] values,
- ndarray[uint8_t, ndim=2] mask,
- limit=None):
- cdef Py_ssize_t i, j, N, K
- cdef object val
+def pad_inplace_int32(ndarray[int32_t] values,
+ ndarray[uint8_t, cast=True] mask,
+ limit=None):
+ cdef Py_ssize_t i, N
+ cdef int32_t val
cdef int lim, fill_count = 0
- K, N = (<object> values).shape
+ N = len(values)
+
+ # GH 2778
+ if N == 0:
+ return
if limit is None:
lim = N
@@ -1150,28 +1467,31 @@ def pad_2d_inplace_object(ndarray[object, ndim=2] values,
raise ValueError('Limit must be non-negative')
lim = limit
- for j in range(K):
- fill_count = 0
- val = values[j, 0]
- for i in range(N):
- if mask[j, i]:
- if fill_count >= lim:
- continue
- fill_count += 1
- values[j, i] = val
- else:
- fill_count = 0
- val = values[j, i]
+ val = values[0]
+ for i in range(N):
+ if mask[i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[i] = val
+ else:
+ fill_count = 0
+ val = values[i]
+
@cython.boundscheck(False)
@cython.wraparound(False)
-def pad_2d_inplace_int32(ndarray[int32_t, ndim=2] values,
- ndarray[uint8_t, ndim=2] mask,
- limit=None):
- cdef Py_ssize_t i, j, N, K
- cdef int32_t val
+def pad_inplace_int64(ndarray[int64_t] values,
+ ndarray[uint8_t, cast=True] mask,
+ limit=None):
+ cdef Py_ssize_t i, N
+ cdef int64_t val
cdef int lim, fill_count = 0
- K, N = (<object> values).shape
+ N = len(values)
+
+ # GH 2778
+ if N == 0:
+ return
if limit is None:
lim = N
@@ -1180,28 +1500,31 @@ def pad_2d_inplace_int32(ndarray[int32_t, ndim=2] values,
raise ValueError('Limit must be non-negative')
lim = limit
- for j in range(K):
- fill_count = 0
- val = values[j, 0]
- for i in range(N):
- if mask[j, i]:
- if fill_count >= lim:
- continue
- fill_count += 1
- values[j, i] = val
- else:
- fill_count = 0
- val = values[j, i]
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def pad_2d_inplace_int64(ndarray[int64_t, ndim=2] values,
- ndarray[uint8_t, ndim=2] mask,
- limit=None):
- cdef Py_ssize_t i, j, N, K
- cdef int64_t val
+ val = values[0]
+ for i in range(N):
+ if mask[i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[i] = val
+ else:
+ fill_count = 0
+ val = values[i]
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def pad_inplace_bool(ndarray[uint8_t] values,
+ ndarray[uint8_t, cast=True] mask,
+ limit=None):
+ cdef Py_ssize_t i, N
+ cdef uint8_t val
cdef int lim, fill_count = 0
- K, N = (<object> values).shape
+ N = len(values)
+
+ # GH 2778
+ if N == 0:
+ return
if limit is None:
lim = N
@@ -1210,28 +1533,32 @@ def pad_2d_inplace_int64(ndarray[int64_t, ndim=2] values,
raise ValueError('Limit must be non-negative')
lim = limit
- for j in range(K):
- fill_count = 0
- val = values[j, 0]
- for i in range(N):
- if mask[j, i]:
- if fill_count >= lim:
- continue
- fill_count += 1
- values[j, i] = val
- else:
- fill_count = 0
- val = values[j, i]
+ val = values[0]
+ for i in range(N):
+ if mask[i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[i] = val
+ else:
+ fill_count = 0
+ val = values[i]
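The `pad_inplace_*` family (including the new GH 2778 guard for zero-length input) forward-fills masked entries in place, allowing at most `limit` consecutive fills. A pure-Python sketch of the same loop:

```python
import numpy as np

def pad_inplace(values, mask, limit=None):
    # Reference version of the pad_inplace_* functions above: forward-fill
    # entries where mask is True, at most `limit` in a row.
    n = len(values)
    # GH 2778: empty input, nothing to fill
    if n == 0:
        return
    if limit is None:
        lim = n
    else:
        if limit < 0:
            raise ValueError('Limit must be non-negative')
        lim = limit
    fill_count = 0
    val = values[0]
    for i in range(n):
        if mask[i]:
            if fill_count >= lim:
                continue
            fill_count += 1
            values[i] = val
        else:
            fill_count = 0
            val = values[i]

vals = np.array([1.0, np.nan, np.nan, 4.0, np.nan])
pad_inplace(vals, np.isnan(vals), limit=1)
# vals is now [1.0, 1.0, nan, 4.0, 4.0]: only one consecutive fill allowed
```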
+
+
@cython.boundscheck(False)
@cython.wraparound(False)
-def pad_2d_inplace_bool(ndarray[uint8_t, ndim=2] values,
- ndarray[uint8_t, ndim=2] mask,
- limit=None):
- cdef Py_ssize_t i, j, N, K
- cdef uint8_t val
+def backfill_inplace_float64(ndarray[float64_t] values,
+ ndarray[uint8_t, cast=True] mask,
+ limit=None):
+ cdef Py_ssize_t i, N
+ cdef float64_t val
cdef int lim, fill_count = 0
- K, N = (<object> values).shape
+ N = len(values)
+
+ # GH 2778
+ if N == 0:
+ return
if limit is None:
lim = N
@@ -1240,29 +1567,30 @@ def pad_2d_inplace_bool(ndarray[uint8_t, ndim=2] values,
raise ValueError('Limit must be non-negative')
lim = limit
- for j in range(K):
- fill_count = 0
- val = values[j, 0]
- for i in range(N):
- if mask[j, i]:
- if fill_count >= lim:
- continue
- fill_count += 1
- values[j, i] = val
- else:
- fill_count = 0
- val = values[j, i]
-
+ val = values[N - 1]
+ for i in range(N - 1, -1 , -1):
+ if mask[i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[i] = val
+ else:
+ fill_count = 0
+ val = values[i]
@cython.boundscheck(False)
@cython.wraparound(False)
-def backfill_2d_inplace_float64(ndarray[float64_t, ndim=2] values,
- ndarray[uint8_t, ndim=2] mask,
- limit=None):
- cdef Py_ssize_t i, j, N, K
- cdef float64_t val
+def backfill_inplace_float32(ndarray[float32_t] values,
+ ndarray[uint8_t, cast=True] mask,
+ limit=None):
+ cdef Py_ssize_t i, N
+ cdef float32_t val
cdef int lim, fill_count = 0
- K, N = (<object> values).shape
+ N = len(values)
+
+ # GH 2778
+ if N == 0:
+ return
if limit is None:
lim = N
@@ -1271,28 +1599,30 @@ def backfill_2d_inplace_float64(ndarray[float64_t, ndim=2] values,
raise ValueError('Limit must be non-negative')
lim = limit
- for j in range(K):
- fill_count = 0
- val = values[j, N - 1]
- for i in range(N - 1, -1 , -1):
- if mask[j, i]:
- if fill_count >= lim:
- continue
- fill_count += 1
- values[j, i] = val
- else:
- fill_count = 0
- val = values[j, i]
+ val = values[N - 1]
+ for i in range(N - 1, -1 , -1):
+ if mask[i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[i] = val
+ else:
+ fill_count = 0
+ val = values[i]
@cython.boundscheck(False)
@cython.wraparound(False)
-def backfill_2d_inplace_object(ndarray[object, ndim=2] values,
- ndarray[uint8_t, ndim=2] mask,
- limit=None):
- cdef Py_ssize_t i, j, N, K
+def backfill_inplace_object(ndarray[object] values,
+ ndarray[uint8_t, cast=True] mask,
+ limit=None):
+ cdef Py_ssize_t i, N
cdef object val
cdef int lim, fill_count = 0
- K, N = (<object> values).shape
+ N = len(values)
+
+ # GH 2778
+ if N == 0:
+ return
if limit is None:
lim = N
@@ -1301,28 +1631,30 @@ def backfill_2d_inplace_object(ndarray[object, ndim=2] values,
raise ValueError('Limit must be non-negative')
lim = limit
- for j in range(K):
- fill_count = 0
- val = values[j, N - 1]
- for i in range(N - 1, -1 , -1):
- if mask[j, i]:
- if fill_count >= lim:
- continue
- fill_count += 1
- values[j, i] = val
- else:
- fill_count = 0
- val = values[j, i]
+ val = values[N - 1]
+ for i in range(N - 1, -1 , -1):
+ if mask[i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[i] = val
+ else:
+ fill_count = 0
+ val = values[i]
@cython.boundscheck(False)
@cython.wraparound(False)
-def backfill_2d_inplace_int32(ndarray[int32_t, ndim=2] values,
- ndarray[uint8_t, ndim=2] mask,
- limit=None):
- cdef Py_ssize_t i, j, N, K
- cdef int32_t val
+def backfill_inplace_int8(ndarray[int8_t] values,
+ ndarray[uint8_t, cast=True] mask,
+ limit=None):
+ cdef Py_ssize_t i, N
+ cdef int8_t val
cdef int lim, fill_count = 0
- K, N = (<object> values).shape
+ N = len(values)
+
+ # GH 2778
+ if N == 0:
+ return
if limit is None:
lim = N
@@ -1331,28 +1663,30 @@ def backfill_2d_inplace_int32(ndarray[int32_t, ndim=2] values,
raise ValueError('Limit must be non-negative')
lim = limit
- for j in range(K):
- fill_count = 0
- val = values[j, N - 1]
- for i in range(N - 1, -1 , -1):
- if mask[j, i]:
- if fill_count >= lim:
- continue
- fill_count += 1
- values[j, i] = val
- else:
- fill_count = 0
- val = values[j, i]
+ val = values[N - 1]
+ for i in range(N - 1, -1 , -1):
+ if mask[i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[i] = val
+ else:
+ fill_count = 0
+ val = values[i]
@cython.boundscheck(False)
@cython.wraparound(False)
-def backfill_2d_inplace_int64(ndarray[int64_t, ndim=2] values,
- ndarray[uint8_t, ndim=2] mask,
- limit=None):
- cdef Py_ssize_t i, j, N, K
- cdef int64_t val
+def backfill_inplace_int16(ndarray[int16_t] values,
+ ndarray[uint8_t, cast=True] mask,
+ limit=None):
+ cdef Py_ssize_t i, N
+ cdef int16_t val
cdef int lim, fill_count = 0
- K, N = (<object> values).shape
+ N = len(values)
+
+ # GH 2778
+ if N == 0:
+ return
if limit is None:
lim = N
@@ -1361,28 +1695,30 @@ def backfill_2d_inplace_int64(ndarray[int64_t, ndim=2] values,
raise ValueError('Limit must be non-negative')
lim = limit
- for j in range(K):
- fill_count = 0
- val = values[j, N - 1]
- for i in range(N - 1, -1 , -1):
- if mask[j, i]:
- if fill_count >= lim:
- continue
- fill_count += 1
- values[j, i] = val
- else:
- fill_count = 0
- val = values[j, i]
+ val = values[N - 1]
+ for i in range(N - 1, -1 , -1):
+ if mask[i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[i] = val
+ else:
+ fill_count = 0
+ val = values[i]
@cython.boundscheck(False)
@cython.wraparound(False)
-def backfill_2d_inplace_bool(ndarray[uint8_t, ndim=2] values,
- ndarray[uint8_t, ndim=2] mask,
- limit=None):
- cdef Py_ssize_t i, j, N, K
- cdef uint8_t val
+def backfill_inplace_int32(ndarray[int32_t] values,
+ ndarray[uint8_t, cast=True] mask,
+ limit=None):
+ cdef Py_ssize_t i, N
+ cdef int32_t val
cdef int lim, fill_count = 0
- K, N = (<object> values).shape
+ N = len(values)
+
+ # GH 2778
+ if N == 0:
+ return
if limit is None:
lim = N
@@ -1391,533 +1727,640 @@ def backfill_2d_inplace_bool(ndarray[uint8_t, ndim=2] values,
raise ValueError('Limit must be non-negative')
lim = limit
- for j in range(K):
- fill_count = 0
- val = values[j, N - 1]
- for i in range(N - 1, -1 , -1):
- if mask[j, i]:
- if fill_count >= lim:
- continue
- fill_count += 1
- values[j, i] = val
- else:
- fill_count = 0
- val = values[j, i]
-
+ val = values[N - 1]
+ for i in range(N - 1, -1 , -1):
+ if mask[i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[i] = val
+ else:
+ fill_count = 0
+ val = values[i]
+@cython.boundscheck(False)
@cython.wraparound(False)
-def take_1d_float64(ndarray[float64_t] values,
- ndarray[int64_t] indexer,
- out=None, fill_value=np.nan):
- cdef:
- Py_ssize_t i, n, idx
- ndarray[float64_t] outbuf
- float64_t fv
+def backfill_inplace_int64(ndarray[int64_t] values,
+ ndarray[uint8_t, cast=True] mask,
+ limit=None):
+ cdef Py_ssize_t i, N
+ cdef int64_t val
+ cdef int lim, fill_count = 0
- n = len(indexer)
+ N = len(values)
- if out is None:
- outbuf = np.empty(n, dtype=values.dtype)
- else:
- outbuf = out
+ # GH 2778
+ if N == 0:
+ return
- if False and _checknan(fill_value):
- for i in range(n):
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
+ if limit is None:
+ lim = N
else:
- fv = fill_value
- for i in range(n):
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ if limit < 0:
+ raise ValueError('Limit must be non-negative')
+ lim = limit
+ val = values[N - 1]
+ for i in range(N - 1, -1 , -1):
+ if mask[i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[i] = val
+ else:
+ fill_count = 0
+ val = values[i]
+@cython.boundscheck(False)
@cython.wraparound(False)
-def take_1d_object(ndarray[object] values,
- ndarray[int64_t] indexer,
- out=None, fill_value=np.nan):
- cdef:
- Py_ssize_t i, n, idx
- ndarray[object] outbuf
- object fv
+def backfill_inplace_bool(ndarray[uint8_t] values,
+ ndarray[uint8_t, cast=True] mask,
+ limit=None):
+ cdef Py_ssize_t i, N
+ cdef uint8_t val
+ cdef int lim, fill_count = 0
- n = len(indexer)
+ N = len(values)
- if out is None:
- outbuf = np.empty(n, dtype=values.dtype)
- else:
- outbuf = out
+ # GH 2778
+ if N == 0:
+ return
- if False and _checknan(fill_value):
- for i in range(n):
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
+ if limit is None:
+ lim = N
else:
- fv = fill_value
- for i in range(n):
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ if limit < 0:
+ raise ValueError('Limit must be non-negative')
+ lim = limit
+
+ val = values[N - 1]
+ for i in range(N - 1, -1 , -1):
+ if mask[i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[i] = val
+ else:
+ fill_count = 0
+ val = values[i]
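The 1-D `backfill_inplace_*` variants above all share the same backwards-walking loop, now guarded against empty arrays (GH 2778, which would otherwise read `values[N - 1]` on a zero-length input). A pure-Python mirror of that loop, with a hypothetical name for illustration:

```python
import numpy as np

def backfill_inplace(values, mask, limit=None):
    # Mirror of the generated Cython backfill loop: walk backwards,
    # propagating the next valid value into masked slots, stopping
    # after `limit` consecutive fills.
    n = len(values)
    # GH 2778: bail out early so values[n - 1] is never read on empty input
    if n == 0:
        return
    lim = n if limit is None else limit
    if lim < 0:
        raise ValueError('Limit must be non-negative')
    fill_count = 0
    val = values[n - 1]
    for i in range(n - 1, -1, -1):
        if mask[i]:
            if fill_count >= lim:
                continue
            fill_count += 1
            values[i] = val
        else:
            # a valid value resets the consecutive-fill counter
            fill_count = 0
            val = values[i]

a = np.array([1.0, np.nan, np.nan, 4.0, np.nan])
backfill_inplace(a, np.isnan(a), limit=1)
# a is now [1.0, nan, 4.0, 4.0, nan]: only the gap adjacent to a
# valid value is filled, and the trailing NaN has nothing behind it.
```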
+@cython.boundscheck(False)
@cython.wraparound(False)
-def take_1d_int32(ndarray[int32_t] values,
- ndarray[int64_t] indexer,
- out=None, fill_value=np.nan):
- cdef:
- Py_ssize_t i, n, idx
- ndarray[int32_t] outbuf
- int32_t fv
+def pad_2d_inplace_float64(ndarray[float64_t, ndim=2] values,
+ ndarray[uint8_t, ndim=2] mask,
+ limit=None):
+ cdef Py_ssize_t i, j, N, K
+ cdef float64_t val
+ cdef int lim, fill_count = 0
- n = len(indexer)
+ K, N = (<object> values).shape
- if out is None:
- outbuf = np.empty(n, dtype=values.dtype)
- else:
- outbuf = out
+ # GH 2778
+ if N == 0:
+ return
- if True and _checknan(fill_value):
- for i in range(n):
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
+ if limit is None:
+ lim = N
else:
- fv = fill_value
- for i in range(n):
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ if limit < 0:
+ raise ValueError('Limit must be non-negative')
+ lim = limit
+ for j in range(K):
+ fill_count = 0
+ val = values[j, 0]
+ for i in range(N):
+ if mask[j, i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[j, i] = val
+ else:
+ fill_count = 0
+ val = values[j, i]
+@cython.boundscheck(False)
@cython.wraparound(False)
-def take_1d_int64(ndarray[int64_t] values,
- ndarray[int64_t] indexer,
- out=None, fill_value=np.nan):
- cdef:
- Py_ssize_t i, n, idx
- ndarray[int64_t] outbuf
- int64_t fv
+def pad_2d_inplace_float32(ndarray[float32_t, ndim=2] values,
+ ndarray[uint8_t, ndim=2] mask,
+ limit=None):
+ cdef Py_ssize_t i, j, N, K
+ cdef float32_t val
+ cdef int lim, fill_count = 0
- n = len(indexer)
+ K, N = (<object> values).shape
- if out is None:
- outbuf = np.empty(n, dtype=values.dtype)
- else:
- outbuf = out
+ # GH 2778
+ if N == 0:
+ return
- if True and _checknan(fill_value):
- for i in range(n):
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
+ if limit is None:
+ lim = N
else:
- fv = fill_value
- for i in range(n):
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ if limit < 0:
+ raise ValueError('Limit must be non-negative')
+ lim = limit
+ for j in range(K):
+ fill_count = 0
+ val = values[j, 0]
+ for i in range(N):
+ if mask[j, i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[j, i] = val
+ else:
+ fill_count = 0
+ val = values[j, i]
+@cython.boundscheck(False)
@cython.wraparound(False)
-def take_1d_bool(ndarray[uint8_t] values,
- ndarray[int64_t] indexer,
- out=None, fill_value=np.nan):
- cdef:
- Py_ssize_t i, n, idx
- ndarray[uint8_t] outbuf
- uint8_t fv
+def pad_2d_inplace_object(ndarray[object, ndim=2] values,
+ ndarray[uint8_t, ndim=2] mask,
+ limit=None):
+ cdef Py_ssize_t i, j, N, K
+ cdef object val
+ cdef int lim, fill_count = 0
- n = len(indexer)
+ K, N = (<object> values).shape
- if out is None:
- outbuf = np.empty(n, dtype=values.dtype)
- else:
- outbuf = out
+ # GH 2778
+ if N == 0:
+ return
- if True and _checknan(fill_value):
- for i in range(n):
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
+ if limit is None:
+ lim = N
else:
- fv = fill_value
- for i in range(n):
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
-
+ if limit < 0:
+ raise ValueError('Limit must be non-negative')
+ lim = limit
+ for j in range(K):
+ fill_count = 0
+ val = values[j, 0]
+ for i in range(N):
+ if mask[j, i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[j, i] = val
+ else:
+ fill_count = 0
+ val = values[j, i]
@cython.boundscheck(False)
@cython.wraparound(False)
-def is_monotonic_float64(ndarray[float64_t] arr):
- '''
- Returns
- -------
- is_monotonic, is_unique
- '''
- cdef:
- Py_ssize_t i, n
- float64_t prev, cur
- bint is_unique = 1
-
- n = len(arr)
+def pad_2d_inplace_int8(ndarray[int8_t, ndim=2] values,
+ ndarray[uint8_t, ndim=2] mask,
+ limit=None):
+ cdef Py_ssize_t i, j, N, K
+ cdef int8_t val
+ cdef int lim, fill_count = 0
- if n < 2:
- return True, True
+ K, N = (<object> values).shape
- prev = arr[0]
- for i in range(1, n):
- cur = arr[i]
- if cur < prev:
- return False, None
- elif cur == prev:
- is_unique = 0
- prev = cur
- return True, is_unique
+ # GH 2778
+ if N == 0:
+ return
+
+ if limit is None:
+ lim = N
+ else:
+ if limit < 0:
+ raise ValueError('Limit must be non-negative')
+ lim = limit
+
+ for j in range(K):
+ fill_count = 0
+ val = values[j, 0]
+ for i in range(N):
+ if mask[j, i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[j, i] = val
+ else:
+ fill_count = 0
+ val = values[j, i]
@cython.boundscheck(False)
@cython.wraparound(False)
-def is_monotonic_object(ndarray[object] arr):
- '''
- Returns
- -------
- is_monotonic, is_unique
- '''
- cdef:
- Py_ssize_t i, n
- object prev, cur
- bint is_unique = 1
+def pad_2d_inplace_int16(ndarray[int16_t, ndim=2] values,
+ ndarray[uint8_t, ndim=2] mask,
+ limit=None):
+ cdef Py_ssize_t i, j, N, K
+ cdef int16_t val
+ cdef int lim, fill_count = 0
- n = len(arr)
+ K, N = (<object> values).shape
- if n < 2:
- return True, True
+ # GH 2778
+ if N == 0:
+ return
- prev = arr[0]
- for i in range(1, n):
- cur = arr[i]
- if cur < prev:
- return False, None
- elif cur == prev:
- is_unique = 0
- prev = cur
- return True, is_unique
+ if limit is None:
+ lim = N
+ else:
+ if limit < 0:
+ raise ValueError('Limit must be non-negative')
+ lim = limit
+
+ for j in range(K):
+ fill_count = 0
+ val = values[j, 0]
+ for i in range(N):
+ if mask[j, i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[j, i] = val
+ else:
+ fill_count = 0
+ val = values[j, i]
@cython.boundscheck(False)
@cython.wraparound(False)
-def is_monotonic_int32(ndarray[int32_t] arr):
- '''
- Returns
- -------
- is_monotonic, is_unique
- '''
- cdef:
- Py_ssize_t i, n
- int32_t prev, cur
- bint is_unique = 1
+def pad_2d_inplace_int32(ndarray[int32_t, ndim=2] values,
+ ndarray[uint8_t, ndim=2] mask,
+ limit=None):
+ cdef Py_ssize_t i, j, N, K
+ cdef int32_t val
+ cdef int lim, fill_count = 0
- n = len(arr)
+ K, N = (<object> values).shape
- if n < 2:
- return True, True
+ # GH 2778
+ if N == 0:
+ return
- prev = arr[0]
- for i in range(1, n):
- cur = arr[i]
- if cur < prev:
- return False, None
- elif cur == prev:
- is_unique = 0
- prev = cur
- return True, is_unique
+ if limit is None:
+ lim = N
+ else:
+ if limit < 0:
+ raise ValueError('Limit must be non-negative')
+ lim = limit
+
+ for j in range(K):
+ fill_count = 0
+ val = values[j, 0]
+ for i in range(N):
+ if mask[j, i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[j, i] = val
+ else:
+ fill_count = 0
+ val = values[j, i]
@cython.boundscheck(False)
@cython.wraparound(False)
-def is_monotonic_int64(ndarray[int64_t] arr):
- '''
- Returns
- -------
- is_monotonic, is_unique
- '''
- cdef:
- Py_ssize_t i, n
- int64_t prev, cur
- bint is_unique = 1
+def pad_2d_inplace_int64(ndarray[int64_t, ndim=2] values,
+ ndarray[uint8_t, ndim=2] mask,
+ limit=None):
+ cdef Py_ssize_t i, j, N, K
+ cdef int64_t val
+ cdef int lim, fill_count = 0
- n = len(arr)
+ K, N = (<object> values).shape
- if n < 2:
- return True, True
+ # GH 2778
+ if N == 0:
+ return
- prev = arr[0]
- for i in range(1, n):
- cur = arr[i]
- if cur < prev:
- return False, None
- elif cur == prev:
- is_unique = 0
- prev = cur
- return True, is_unique
+ if limit is None:
+ lim = N
+ else:
+ if limit < 0:
+ raise ValueError('Limit must be non-negative')
+ lim = limit
+
+ for j in range(K):
+ fill_count = 0
+ val = values[j, 0]
+ for i in range(N):
+ if mask[j, i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[j, i] = val
+ else:
+ fill_count = 0
+ val = values[j, i]
@cython.boundscheck(False)
@cython.wraparound(False)
-def is_monotonic_bool(ndarray[uint8_t] arr):
- '''
- Returns
- -------
- is_monotonic, is_unique
- '''
- cdef:
- Py_ssize_t i, n
- uint8_t prev, cur
- bint is_unique = 1
+def pad_2d_inplace_bool(ndarray[uint8_t, ndim=2] values,
+ ndarray[uint8_t, ndim=2] mask,
+ limit=None):
+ cdef Py_ssize_t i, j, N, K
+ cdef uint8_t val
+ cdef int lim, fill_count = 0
- n = len(arr)
+ K, N = (<object> values).shape
- if n < 2:
- return True, True
+ # GH 2778
+ if N == 0:
+ return
- prev = arr[0]
- for i in range(1, n):
- cur = arr[i]
- if cur < prev:
- return False, None
- elif cur == prev:
- is_unique = 0
- prev = cur
- return True, is_unique
+ if limit is None:
+ lim = N
+ else:
+ if limit < 0:
+ raise ValueError('Limit must be non-negative')
+ lim = limit
+
+ for j in range(K):
+ fill_count = 0
+ val = values[j, 0]
+ for i in range(N):
+ if mask[j, i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[j, i] = val
+ else:
+ fill_count = 0
+ val = values[j, i]
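The `pad_2d_inplace_*` functions run the same fill logic forwards, one row at a time, resetting the limit counter per row. A sketch of the shared loop (function name is illustrative, not the library API):

```python
import numpy as np

def pad_2d_inplace(values, mask, limit=None):
    # Forward-fill each row of a 2-D array in place, at most `limit`
    # consecutive filled positions per gap.
    K, N = values.shape
    if N == 0:  # GH 2778: nothing to fill, and values[j, 0] would be invalid
        return
    lim = N if limit is None else limit
    if lim < 0:
        raise ValueError('Limit must be non-negative')
    for j in range(K):
        fill_count = 0
        val = values[j, 0]
        for i in range(N):
            if mask[j, i]:
                if fill_count >= lim:
                    continue
                fill_count += 1
                values[j, i] = val
            else:
                fill_count = 0
                val = values[j, i]

m = np.array([[1.0, np.nan, np.nan]])
pad_2d_inplace(m, np.isnan(m), limit=1)
# m is now [[1.0, 1.0, nan]]: the second NaN exceeds the limit.
```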
-@cython.wraparound(False)
@cython.boundscheck(False)
-def groupby_float64(ndarray[float64_t] index, ndarray labels):
- cdef dict result = {}
- cdef Py_ssize_t i, length
- cdef list members
- cdef object idx, key
+@cython.wraparound(False)
+def backfill_2d_inplace_float64(ndarray[float64_t, ndim=2] values,
+ ndarray[uint8_t, ndim=2] mask,
+ limit=None):
+ cdef Py_ssize_t i, j, N, K
+ cdef float64_t val
+ cdef int lim, fill_count = 0
- length = len(index)
+ K, N = (<object> values).shape
- for i in range(length):
- key = util.get_value_1d(labels, i)
+ # GH 2778
+ if N == 0:
+ return
- if _checknull(key):
- continue
+ if limit is None:
+ lim = N
+ else:
+ if limit < 0:
+ raise ValueError('Limit must be non-negative')
+ lim = limit
- idx = index[i]
- if key in result:
- members = result[key]
- members.append(idx)
- else:
- result[key] = [idx]
-
- return result
-
-@cython.wraparound(False)
+ for j in range(K):
+ fill_count = 0
+ val = values[j, N - 1]
+ for i in range(N - 1, -1 , -1):
+ if mask[j, i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[j, i] = val
+ else:
+ fill_count = 0
+ val = values[j, i]
@cython.boundscheck(False)
-def groupby_object(ndarray[object] index, ndarray labels):
- cdef dict result = {}
- cdef Py_ssize_t i, length
- cdef list members
- cdef object idx, key
-
- length = len(index)
-
- for i in range(length):
- key = util.get_value_1d(labels, i)
+@cython.wraparound(False)
+def backfill_2d_inplace_float32(ndarray[float32_t, ndim=2] values,
+ ndarray[uint8_t, ndim=2] mask,
+ limit=None):
+ cdef Py_ssize_t i, j, N, K
+ cdef float32_t val
+ cdef int lim, fill_count = 0
- if _checknull(key):
- continue
+ K, N = (<object> values).shape
- idx = index[i]
- if key in result:
- members = result[key]
- members.append(idx)
- else:
- result[key] = [idx]
+ # GH 2778
+ if N == 0:
+ return
- return result
+ if limit is None:
+ lim = N
+ else:
+ if limit < 0:
+ raise ValueError('Limit must be non-negative')
+ lim = limit
-@cython.wraparound(False)
+ for j in range(K):
+ fill_count = 0
+ val = values[j, N - 1]
+ for i in range(N - 1, -1 , -1):
+ if mask[j, i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[j, i] = val
+ else:
+ fill_count = 0
+ val = values[j, i]
@cython.boundscheck(False)
-def groupby_int32(ndarray[int32_t] index, ndarray labels):
- cdef dict result = {}
- cdef Py_ssize_t i, length
- cdef list members
- cdef object idx, key
-
- length = len(index)
-
- for i in range(length):
- key = util.get_value_1d(labels, i)
+@cython.wraparound(False)
+def backfill_2d_inplace_object(ndarray[object, ndim=2] values,
+ ndarray[uint8_t, ndim=2] mask,
+ limit=None):
+ cdef Py_ssize_t i, j, N, K
+ cdef object val
+ cdef int lim, fill_count = 0
- if _checknull(key):
- continue
+ K, N = (<object> values).shape
- idx = index[i]
- if key in result:
- members = result[key]
- members.append(idx)
- else:
- result[key] = [idx]
+ # GH 2778
+ if N == 0:
+ return
- return result
+ if limit is None:
+ lim = N
+ else:
+ if limit < 0:
+ raise ValueError('Limit must be non-negative')
+ lim = limit
-@cython.wraparound(False)
+ for j in range(K):
+ fill_count = 0
+ val = values[j, N - 1]
+ for i in range(N - 1, -1 , -1):
+ if mask[j, i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[j, i] = val
+ else:
+ fill_count = 0
+ val = values[j, i]
@cython.boundscheck(False)
-def groupby_int64(ndarray[int64_t] index, ndarray labels):
- cdef dict result = {}
- cdef Py_ssize_t i, length
- cdef list members
- cdef object idx, key
-
- length = len(index)
-
- for i in range(length):
- key = util.get_value_1d(labels, i)
+@cython.wraparound(False)
+def backfill_2d_inplace_int8(ndarray[int8_t, ndim=2] values,
+ ndarray[uint8_t, ndim=2] mask,
+ limit=None):
+ cdef Py_ssize_t i, j, N, K
+ cdef int8_t val
+ cdef int lim, fill_count = 0
- if _checknull(key):
- continue
+ K, N = (<object> values).shape
- idx = index[i]
- if key in result:
- members = result[key]
- members.append(idx)
- else:
- result[key] = [idx]
+ # GH 2778
+ if N == 0:
+ return
- return result
+ if limit is None:
+ lim = N
+ else:
+ if limit < 0:
+ raise ValueError('Limit must be non-negative')
+ lim = limit
-@cython.wraparound(False)
+ for j in range(K):
+ fill_count = 0
+ val = values[j, N - 1]
+ for i in range(N - 1, -1 , -1):
+ if mask[j, i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[j, i] = val
+ else:
+ fill_count = 0
+ val = values[j, i]
@cython.boundscheck(False)
-def groupby_bool(ndarray[uint8_t] index, ndarray labels):
- cdef dict result = {}
- cdef Py_ssize_t i, length
- cdef list members
- cdef object idx, key
-
- length = len(index)
-
- for i in range(length):
- key = util.get_value_1d(labels, i)
-
- if _checknull(key):
- continue
+@cython.wraparound(False)
+def backfill_2d_inplace_int16(ndarray[int16_t, ndim=2] values,
+ ndarray[uint8_t, ndim=2] mask,
+ limit=None):
+ cdef Py_ssize_t i, j, N, K
+ cdef int16_t val
+ cdef int lim, fill_count = 0
- idx = index[i]
- if key in result:
- members = result[key]
- members.append(idx)
- else:
- result[key] = [idx]
+ K, N = (<object> values).shape
- return result
+ # GH 2778
+ if N == 0:
+ return
+ if limit is None:
+ lim = N
+ else:
+ if limit < 0:
+ raise ValueError('Limit must be non-negative')
+ lim = limit
-@cython.wraparound(False)
+ for j in range(K):
+ fill_count = 0
+ val = values[j, N - 1]
+ for i in range(N - 1, -1 , -1):
+ if mask[j, i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[j, i] = val
+ else:
+ fill_count = 0
+ val = values[j, i]
@cython.boundscheck(False)
-def arrmap_float64(ndarray[float64_t] index, object func):
- cdef Py_ssize_t length = index.shape[0]
- cdef Py_ssize_t i = 0
-
- cdef ndarray[object] result = np.empty(length, dtype=np.object_)
+@cython.wraparound(False)
+def backfill_2d_inplace_int32(ndarray[int32_t, ndim=2] values,
+ ndarray[uint8_t, ndim=2] mask,
+ limit=None):
+ cdef Py_ssize_t i, j, N, K
+ cdef int32_t val
+ cdef int lim, fill_count = 0
- from pandas.lib import maybe_convert_objects
+ K, N = (<object> values).shape
- for i in range(length):
- result[i] = func(index[i])
+ # GH 2778
+ if N == 0:
+ return
- return maybe_convert_objects(result)
+ if limit is None:
+ lim = N
+ else:
+ if limit < 0:
+ raise ValueError('Limit must be non-negative')
+ lim = limit
-@cython.wraparound(False)
+ for j in range(K):
+ fill_count = 0
+ val = values[j, N - 1]
+ for i in range(N - 1, -1 , -1):
+ if mask[j, i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[j, i] = val
+ else:
+ fill_count = 0
+ val = values[j, i]
@cython.boundscheck(False)
-def arrmap_object(ndarray[object] index, object func):
- cdef Py_ssize_t length = index.shape[0]
- cdef Py_ssize_t i = 0
-
- cdef ndarray[object] result = np.empty(length, dtype=np.object_)
+@cython.wraparound(False)
+def backfill_2d_inplace_int64(ndarray[int64_t, ndim=2] values,
+ ndarray[uint8_t, ndim=2] mask,
+ limit=None):
+ cdef Py_ssize_t i, j, N, K
+ cdef int64_t val
+ cdef int lim, fill_count = 0
- from pandas.lib import maybe_convert_objects
+ K, N = (<object> values).shape
- for i in range(length):
- result[i] = func(index[i])
+ # GH 2778
+ if N == 0:
+ return
- return maybe_convert_objects(result)
+ if limit is None:
+ lim = N
+ else:
+ if limit < 0:
+ raise ValueError('Limit must be non-negative')
+ lim = limit
-@cython.wraparound(False)
-@cython.boundscheck(False)
-def arrmap_int32(ndarray[int32_t] index, object func):
- cdef Py_ssize_t length = index.shape[0]
- cdef Py_ssize_t i = 0
-
- cdef ndarray[object] result = np.empty(length, dtype=np.object_)
-
- from pandas.lib import maybe_convert_objects
-
- for i in range(length):
- result[i] = func(index[i])
-
- return maybe_convert_objects(result)
-
-@cython.wraparound(False)
+ for j in range(K):
+ fill_count = 0
+ val = values[j, N - 1]
+ for i in range(N - 1, -1 , -1):
+ if mask[j, i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[j, i] = val
+ else:
+ fill_count = 0
+ val = values[j, i]
@cython.boundscheck(False)
-def arrmap_int64(ndarray[int64_t] index, object func):
- cdef Py_ssize_t length = index.shape[0]
- cdef Py_ssize_t i = 0
-
- cdef ndarray[object] result = np.empty(length, dtype=np.object_)
-
- from pandas.lib import maybe_convert_objects
-
- for i in range(length):
- result[i] = func(index[i])
-
- return maybe_convert_objects(result)
-
@cython.wraparound(False)
-@cython.boundscheck(False)
-def arrmap_bool(ndarray[uint8_t] index, object func):
- cdef Py_ssize_t length = index.shape[0]
- cdef Py_ssize_t i = 0
-
- cdef ndarray[object] result = np.empty(length, dtype=np.object_)
+def backfill_2d_inplace_bool(ndarray[uint8_t, ndim=2] values,
+ ndarray[uint8_t, ndim=2] mask,
+ limit=None):
+ cdef Py_ssize_t i, j, N, K
+ cdef uint8_t val
+ cdef int lim, fill_count = 0
- from pandas.lib import maybe_convert_objects
+ K, N = (<object> values).shape
- for i in range(length):
- result[i] = func(index[i])
+ # GH 2778
+ if N == 0:
+ return
- return maybe_convert_objects(result)
+ if limit is None:
+ lim = N
+ else:
+ if limit < 0:
+ raise ValueError('Limit must be non-negative')
+ lim = limit
+ for j in range(K):
+ fill_count = 0
+ val = values[j, N - 1]
+ for i in range(N - 1, -1 , -1):
+ if mask[j, i]:
+ if fill_count >= lim:
+ continue
+ fill_count += 1
+ values[j, i] = val
+ else:
+ fill_count = 0
+ val = values[j, i]
@cython.wraparound(False)
-@cython.boundscheck(False)
-def take_2d_axis0_float64(ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] indexer,
- out=None, fill_value=np.nan):
+def take_1d_float64(ndarray[float64_t] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
cdef:
- Py_ssize_t i, j, k, n, idx
- ndarray[float64_t, ndim=2] outbuf
+ Py_ssize_t i, n, idx
+ ndarray[float64_t] outbuf
float64_t fv
n = len(indexer)
- k = values.shape[1]
if out is None:
- outbuf = np.empty((n, k), dtype=values.dtype)
+ outbuf = np.empty(n, dtype=values.dtype)
else:
outbuf = out
@@ -1925,37 +2368,31 @@ def take_2d_axis0_float64(ndarray[float64_t, ndim=2] values,
for i in range(n):
idx = indexer[i]
if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
+ raise ValueError('No NA values allowed')
else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ outbuf[i] = values[idx]
else:
fv = fill_value
for i in range(n):
idx = indexer[i]
if idx == -1:
- for j in range(k):
- outbuf[i, j] = fv
+ outbuf[i] = fv
else:
- for j in range(k):
- outbuf[i, j] = values[idx, j]
+ outbuf[i] = values[idx]
@cython.wraparound(False)
-@cython.boundscheck(False)
-def take_2d_axis0_object(ndarray[object, ndim=2] values,
- ndarray[int64_t] indexer,
- out=None, fill_value=np.nan):
+def take_1d_float32(ndarray[float32_t] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
cdef:
- Py_ssize_t i, j, k, n, idx
- ndarray[object, ndim=2] outbuf
- object fv
+ Py_ssize_t i, n, idx
+ ndarray[float32_t] outbuf
+ float32_t fv
n = len(indexer)
- k = values.shape[1]
if out is None:
- outbuf = np.empty((n, k), dtype=values.dtype)
+ outbuf = np.empty(n, dtype=values.dtype)
else:
outbuf = out
@@ -1963,75 +2400,63 @@ def take_2d_axis0_object(ndarray[object, ndim=2] values,
for i in range(n):
idx = indexer[i]
if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
+ raise ValueError('No NA values allowed')
else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ outbuf[i] = values[idx]
else:
fv = fill_value
for i in range(n):
idx = indexer[i]
if idx == -1:
- for j in range(k):
- outbuf[i, j] = fv
+ outbuf[i] = fv
else:
- for j in range(k):
- outbuf[i, j] = values[idx, j]
+ outbuf[i] = values[idx]
@cython.wraparound(False)
-@cython.boundscheck(False)
-def take_2d_axis0_int32(ndarray[int32_t, ndim=2] values,
- ndarray[int64_t] indexer,
- out=None, fill_value=np.nan):
+def take_1d_object(ndarray[object] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
cdef:
- Py_ssize_t i, j, k, n, idx
- ndarray[int32_t, ndim=2] outbuf
- int32_t fv
+ Py_ssize_t i, n, idx
+ ndarray[object] outbuf
+ object fv
n = len(indexer)
- k = values.shape[1]
if out is None:
- outbuf = np.empty((n, k), dtype=values.dtype)
+ outbuf = np.empty(n, dtype=values.dtype)
else:
outbuf = out
- if True and _checknan(fill_value):
+ if False and _checknan(fill_value):
for i in range(n):
idx = indexer[i]
if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
+ raise ValueError('No NA values allowed')
else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ outbuf[i] = values[idx]
else:
fv = fill_value
for i in range(n):
idx = indexer[i]
if idx == -1:
- for j in range(k):
- outbuf[i, j] = fv
+ outbuf[i] = fv
else:
- for j in range(k):
- outbuf[i, j] = values[idx, j]
+ outbuf[i] = values[idx]
@cython.wraparound(False)
-@cython.boundscheck(False)
-def take_2d_axis0_int64(ndarray[int64_t, ndim=2] values,
- ndarray[int64_t] indexer,
- out=None, fill_value=np.nan):
+def take_1d_int8(ndarray[int8_t] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
cdef:
- Py_ssize_t i, j, k, n, idx
- ndarray[int64_t, ndim=2] outbuf
- int64_t fv
+ Py_ssize_t i, n, idx
+ ndarray[int8_t] outbuf
+ int8_t fv
n = len(indexer)
- k = values.shape[1]
if out is None:
- outbuf = np.empty((n, k), dtype=values.dtype)
+ outbuf = np.empty(n, dtype=values.dtype)
else:
outbuf = out
@@ -2039,37 +2464,31 @@ def take_2d_axis0_int64(ndarray[int64_t, ndim=2] values,
for i in range(n):
idx = indexer[i]
if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
+ raise ValueError('No NA values allowed')
else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ outbuf[i] = values[idx]
else:
fv = fill_value
for i in range(n):
idx = indexer[i]
if idx == -1:
- for j in range(k):
- outbuf[i, j] = fv
+ outbuf[i] = fv
else:
- for j in range(k):
- outbuf[i, j] = values[idx, j]
+ outbuf[i] = values[idx]
@cython.wraparound(False)
-@cython.boundscheck(False)
-def take_2d_axis0_bool(ndarray[uint8_t, ndim=2] values,
- ndarray[int64_t] indexer,
- out=None, fill_value=np.nan):
+def take_1d_int16(ndarray[int16_t] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
cdef:
- Py_ssize_t i, j, k, n, idx
- ndarray[uint8_t, ndim=2] outbuf
- uint8_t fv
+ Py_ssize_t i, n, idx
+ ndarray[int16_t] outbuf
+ int16_t fv
n = len(indexer)
- k = values.shape[1]
if out is None:
- outbuf = np.empty((n, k), dtype=values.dtype)
+ outbuf = np.empty(n, dtype=values.dtype)
else:
outbuf = out
@@ -2077,647 +2496,4928 @@ def take_2d_axis0_bool(ndarray[uint8_t, ndim=2] values,
for i in range(n):
idx = indexer[i]
if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
+ raise ValueError('No NA values allowed')
else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ outbuf[i] = values[idx]
else:
fv = fill_value
for i in range(n):
idx = indexer[i]
if idx == -1:
- for j in range(k):
- outbuf[i, j] = fv
+ outbuf[i] = fv
else:
- for j in range(k):
- outbuf[i, j] = values[idx, j]
-
+ outbuf[i] = values[idx]
@cython.wraparound(False)
-@cython.boundscheck(False)
-def take_2d_axis1_float64(ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] indexer,
- out=None, fill_value=np.nan):
+def take_1d_int32(ndarray[int32_t] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
cdef:
- Py_ssize_t i, j, k, n, idx
- ndarray[float64_t, ndim=2] outbuf
- float64_t fv
+ Py_ssize_t i, n, idx
+ ndarray[int32_t] outbuf
+ int32_t fv
- n = len(values)
- k = len(indexer)
+ n = len(indexer)
if out is None:
- outbuf = np.empty((n, k), dtype=values.dtype)
+ outbuf = np.empty(n, dtype=values.dtype)
else:
outbuf = out
- if False and _checknan(fill_value):
- for j in range(k):
- idx = indexer[j]
-
+ if True and _checknan(fill_value):
+ for i in range(n):
+ idx = indexer[i]
if idx == -1:
- for i in range(n):
- raise ValueError('No NA values allowed')
+ raise ValueError('No NA values allowed')
else:
- for i in range(n):
- outbuf[i, j] = values[i, idx]
+ outbuf[i] = values[idx]
else:
fv = fill_value
- for j in range(k):
- idx = indexer[j]
-
+ for i in range(n):
+ idx = indexer[i]
if idx == -1:
- for i in range(n):
- outbuf[i, j] = fv
+ outbuf[i] = fv
else:
- for i in range(n):
- outbuf[i, j] = values[i, idx]
+ outbuf[i] = values[idx]
@cython.wraparound(False)
-@cython.boundscheck(False)
-def take_2d_axis1_object(ndarray[object, ndim=2] values,
- ndarray[int64_t] indexer,
- out=None, fill_value=np.nan):
- cdef:
- Py_ssize_t i, j, k, n, idx
- ndarray[object, ndim=2] outbuf
- object fv
+def take_1d_int64(ndarray[int64_t] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, n, idx
+ ndarray[int64_t] outbuf
+ int64_t fv
- n = len(values)
- k = len(indexer)
+ n = len(indexer)
if out is None:
- outbuf = np.empty((n, k), dtype=values.dtype)
+ outbuf = np.empty(n, dtype=values.dtype)
else:
outbuf = out
- if False and _checknan(fill_value):
- for j in range(k):
- idx = indexer[j]
-
+ if True and _checknan(fill_value):
+ for i in range(n):
+ idx = indexer[i]
if idx == -1:
- for i in range(n):
- raise ValueError('No NA values allowed')
+ raise ValueError('No NA values allowed')
else:
- for i in range(n):
- outbuf[i, j] = values[i, idx]
+ outbuf[i] = values[idx]
else:
fv = fill_value
- for j in range(k):
- idx = indexer[j]
-
+ for i in range(n):
+ idx = indexer[i]
if idx == -1:
- for i in range(n):
- outbuf[i, j] = fv
+ outbuf[i] = fv
else:
- for i in range(n):
- outbuf[i, j] = values[i, idx]
+ outbuf[i] = values[idx]
@cython.wraparound(False)
-@cython.boundscheck(False)
-def take_2d_axis1_int32(ndarray[int32_t, ndim=2] values,
- ndarray[int64_t] indexer,
- out=None, fill_value=np.nan):
+def take_1d_bool(ndarray[uint8_t] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
cdef:
- Py_ssize_t i, j, k, n, idx
- ndarray[int32_t, ndim=2] outbuf
- int32_t fv
+ Py_ssize_t i, n, idx
+ ndarray[uint8_t] outbuf
+ uint8_t fv
- n = len(values)
- k = len(indexer)
+ n = len(indexer)
if out is None:
- outbuf = np.empty((n, k), dtype=values.dtype)
+ outbuf = np.empty(n, dtype=values.dtype)
else:
outbuf = out
if True and _checknan(fill_value):
- for j in range(k):
- idx = indexer[j]
-
+ for i in range(n):
+ idx = indexer[i]
if idx == -1:
- for i in range(n):
- raise ValueError('No NA values allowed')
+ raise ValueError('No NA values allowed')
else:
- for i in range(n):
- outbuf[i, j] = values[i, idx]
+ outbuf[i] = values[idx]
else:
fv = fill_value
- for j in range(k):
- idx = indexer[j]
-
+ for i in range(n):
+ idx = indexer[i]
if idx == -1:
- for i in range(n):
- outbuf[i, j] = fv
+ outbuf[i] = fv
else:
- for i in range(n):
- outbuf[i, j] = values[i, idx]
+ outbuf[i] = values[idx]
+
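The `take_1d_*` family gathers `values` at the positions in `indexer`, writing `fill_value` wherever the indexer is -1. For integer and bool dtypes the generated code instead raises (`'No NA values allowed'`) when the fill value is NaN, since NaN cannot be stored in those dtypes. A simplified float-path sketch (name is illustrative):

```python
import numpy as np

def take_1d(values, indexer, fill_value=np.nan):
    # -1 in the indexer marks a missing position to be filled.
    out = np.empty(len(indexer), dtype=values.dtype)
    for i, idx in enumerate(indexer):
        out[i] = fill_value if idx == -1 else values[idx]
    return out

r = take_1d(np.array([10.0, 20.0, 30.0]), np.array([2, -1, 0]))
# r is [30.0, nan, 10.0]
```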
-@cython.wraparound(False)
@cython.boundscheck(False)
-def take_2d_axis1_int64(ndarray[int64_t, ndim=2] values,
- ndarray[int64_t] indexer,
- out=None, fill_value=np.nan):
+@cython.wraparound(False)
+def is_monotonic_float64(ndarray[float64_t] arr):
+ '''
+ Returns
+ -------
+ is_monotonic, is_unique
+ '''
cdef:
- Py_ssize_t i, j, k, n, idx
- ndarray[int64_t, ndim=2] outbuf
- int64_t fv
+ Py_ssize_t i, n
+ float64_t prev, cur
+ bint is_unique = 1
- n = len(values)
- k = len(indexer)
+ n = len(arr)
- if out is None:
- outbuf = np.empty((n, k), dtype=values.dtype)
- else:
- outbuf = out
+ if n < 2:
+ return True, True
- if True and _checknan(fill_value):
- for j in range(k):
- idx = indexer[j]
+ prev = arr[0]
+ for i in range(1, n):
+ cur = arr[i]
+ if cur < prev:
+ return False, None
+ elif cur == prev:
+ is_unique = 0
+ prev = cur
+ return True, is_unique
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def is_monotonic_float32(ndarray[float32_t] arr):
+ '''
+ Returns
+ -------
+ is_monotonic, is_unique
+ '''
+ cdef:
+ Py_ssize_t i, n
+ float32_t prev, cur
+ bint is_unique = 1
- if idx == -1:
- for i in range(n):
- raise ValueError('No NA values allowed')
- else:
- for i in range(n):
- outbuf[i, j] = values[i, idx]
- else:
- fv = fill_value
- for j in range(k):
- idx = indexer[j]
+ n = len(arr)
- if idx == -1:
- for i in range(n):
- outbuf[i, j] = fv
- else:
- for i in range(n):
- outbuf[i, j] = values[i, idx]
+ if n < 2:
+ return True, True
-@cython.wraparound(False)
+ prev = arr[0]
+ for i in range(1, n):
+ cur = arr[i]
+ if cur < prev:
+ return False, None
+ elif cur == prev:
+ is_unique = 0
+ prev = cur
+ return True, is_unique
@cython.boundscheck(False)
-def take_2d_axis1_bool(ndarray[uint8_t, ndim=2] values,
- ndarray[int64_t] indexer,
- out=None, fill_value=np.nan):
+@cython.wraparound(False)
+def is_monotonic_object(ndarray[object] arr):
+ '''
+ Returns
+ -------
+ is_monotonic, is_unique
+ '''
cdef:
- Py_ssize_t i, j, k, n, idx
- ndarray[uint8_t, ndim=2] outbuf
- uint8_t fv
-
- n = len(values)
- k = len(indexer)
+ Py_ssize_t i, n
+ object prev, cur
+ bint is_unique = 1
- if out is None:
- outbuf = np.empty((n, k), dtype=values.dtype)
- else:
- outbuf = out
+ n = len(arr)
- if True and _checknan(fill_value):
- for j in range(k):
- idx = indexer[j]
+ if n < 2:
+ return True, True
- if idx == -1:
- for i in range(n):
- raise ValueError('No NA values allowed')
- else:
- for i in range(n):
- outbuf[i, j] = values[i, idx]
- else:
- fv = fill_value
- for j in range(k):
- idx = indexer[j]
+ prev = arr[0]
+ for i in range(1, n):
+ cur = arr[i]
+ if cur < prev:
+ return False, None
+ elif cur == prev:
+ is_unique = 0
+ prev = cur
+ return True, is_unique
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def is_monotonic_int8(ndarray[int8_t] arr):
+ '''
+ Returns
+ -------
+ is_monotonic, is_unique
+ '''
+ cdef:
+ Py_ssize_t i, n
+ int8_t prev, cur
+ bint is_unique = 1
- if idx == -1:
- for i in range(n):
- outbuf[i, j] = fv
- else:
- for i in range(n):
- outbuf[i, j] = values[i, idx]
+ n = len(arr)
+ if n < 2:
+ return True, True
-@cython.wraparound(False)
+ prev = arr[0]
+ for i in range(1, n):
+ cur = arr[i]
+ if cur < prev:
+ return False, None
+ elif cur == prev:
+ is_unique = 0
+ prev = cur
+ return True, is_unique
@cython.boundscheck(False)
-def take_2d_multi_float64(ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] idx0,
- ndarray[int64_t] idx1,
- out=None, fill_value=np.nan):
+@cython.wraparound(False)
+def is_monotonic_int16(ndarray[int16_t] arr):
+ '''
+ Returns
+ -------
+ is_monotonic, is_unique
+ '''
cdef:
- Py_ssize_t i, j, k, n, idx
- ndarray[float64_t, ndim=2] outbuf
- float64_t fv
-
- n = len(idx0)
- k = len(idx1)
+ Py_ssize_t i, n
+ int16_t prev, cur
+ bint is_unique = 1
- if out is None:
- outbuf = np.empty((n, k), dtype=values.dtype)
- else:
- outbuf = out
+ n = len(arr)
+ if n < 2:
+ return True, True
+
+ prev = arr[0]
+ for i in range(1, n):
+ cur = arr[i]
+ if cur < prev:
+ return False, None
+ elif cur == prev:
+ is_unique = 0
+ prev = cur
+ return True, is_unique
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def is_monotonic_int32(ndarray[int32_t] arr):
+ '''
+ Returns
+ -------
+ is_monotonic, is_unique
+ '''
+ cdef:
+ Py_ssize_t i, n
+ int32_t prev, cur
+ bint is_unique = 1
+
+ n = len(arr)
+
+ if n < 2:
+ return True, True
+
+ prev = arr[0]
+ for i in range(1, n):
+ cur = arr[i]
+ if cur < prev:
+ return False, None
+ elif cur == prev:
+ is_unique = 0
+ prev = cur
+ return True, is_unique
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def is_monotonic_int64(ndarray[int64_t] arr):
+ '''
+ Returns
+ -------
+ is_monotonic, is_unique
+ '''
+ cdef:
+ Py_ssize_t i, n
+ int64_t prev, cur
+ bint is_unique = 1
+
+ n = len(arr)
+
+ if n < 2:
+ return True, True
+
+ prev = arr[0]
+ for i in range(1, n):
+ cur = arr[i]
+ if cur < prev:
+ return False, None
+ elif cur == prev:
+ is_unique = 0
+ prev = cur
+ return True, is_unique
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def is_monotonic_bool(ndarray[uint8_t] arr):
+ '''
+ Returns
+ -------
+ is_monotonic, is_unique
+ '''
+ cdef:
+ Py_ssize_t i, n
+ uint8_t prev, cur
+ bint is_unique = 1
+
+ n = len(arr)
+
+ if n < 2:
+ return True, True
+
+ prev = arr[0]
+ for i in range(1, n):
+ cur = arr[i]
+ if cur < prev:
+ return False, None
+ elif cur == prev:
+ is_unique = 0
+ prev = cur
+ return True, is_unique
+
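The `is_monotonic_<dtype>` functions added above all instantiate one template. As a pure-Python sketch (not the pandas API; the real versions are typed Cython specializations per dtype), the single-pass logic returning `(is_monotonic, is_unique)` looks like this:

```python
def is_monotonic(arr):
    # One pass over the values: bail out on the first decrease; a tie
    # breaks uniqueness but not monotonicity, matching the template.
    n = len(arr)
    if n < 2:
        return True, True
    is_unique = True
    prev = arr[0]
    for cur in arr[1:]:
        if cur < prev:
            return False, None   # not non-decreasing; uniqueness unknown
        if cur == prev:
            is_unique = False
        prev = cur
    return True, is_unique
```

Note the early return `(False, None)`: once monotonicity fails, the generated code does not bother finishing the uniqueness check.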
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def groupby_float64(ndarray[float64_t] index, ndarray labels):
+ cdef dict result = {}
+ cdef Py_ssize_t i, length
+ cdef list members
+ cdef object idx, key
+
+ length = len(index)
+
+ for i in range(length):
+ key = util.get_value_1d(labels, i)
+
+ if _checknull(key):
+ continue
+
+ idx = index[i]
+ if key in result:
+ members = result[key]
+ members.append(idx)
+ else:
+ result[key] = [idx]
+
+ return result
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def groupby_float32(ndarray[float32_t] index, ndarray labels):
+ cdef dict result = {}
+ cdef Py_ssize_t i, length
+ cdef list members
+ cdef object idx, key
+
+ length = len(index)
+
+ for i in range(length):
+ key = util.get_value_1d(labels, i)
+
+ if _checknull(key):
+ continue
+
+ idx = index[i]
+ if key in result:
+ members = result[key]
+ members.append(idx)
+ else:
+ result[key] = [idx]
+
+ return result
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def groupby_object(ndarray[object] index, ndarray labels):
+ cdef dict result = {}
+ cdef Py_ssize_t i, length
+ cdef list members
+ cdef object idx, key
+
+ length = len(index)
+
+ for i in range(length):
+ key = util.get_value_1d(labels, i)
+
+ if _checknull(key):
+ continue
+
+ idx = index[i]
+ if key in result:
+ members = result[key]
+ members.append(idx)
+ else:
+ result[key] = [idx]
+
+ return result
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def groupby_int8(ndarray[int8_t] index, ndarray labels):
+ cdef dict result = {}
+ cdef Py_ssize_t i, length
+ cdef list members
+ cdef object idx, key
+
+ length = len(index)
+
+ for i in range(length):
+ key = util.get_value_1d(labels, i)
+
+ if _checknull(key):
+ continue
+
+ idx = index[i]
+ if key in result:
+ members = result[key]
+ members.append(idx)
+ else:
+ result[key] = [idx]
+
+ return result
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def groupby_int16(ndarray[int16_t] index, ndarray labels):
+ cdef dict result = {}
+ cdef Py_ssize_t i, length
+ cdef list members
+ cdef object idx, key
+
+ length = len(index)
+
+ for i in range(length):
+ key = util.get_value_1d(labels, i)
+
+ if _checknull(key):
+ continue
+
+ idx = index[i]
+ if key in result:
+ members = result[key]
+ members.append(idx)
+ else:
+ result[key] = [idx]
+
+ return result
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def groupby_int32(ndarray[int32_t] index, ndarray labels):
+ cdef dict result = {}
+ cdef Py_ssize_t i, length
+ cdef list members
+ cdef object idx, key
+
+ length = len(index)
+
+ for i in range(length):
+ key = util.get_value_1d(labels, i)
+
+ if _checknull(key):
+ continue
+
+ idx = index[i]
+ if key in result:
+ members = result[key]
+ members.append(idx)
+ else:
+ result[key] = [idx]
+
+ return result
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def groupby_int64(ndarray[int64_t] index, ndarray labels):
+ cdef dict result = {}
+ cdef Py_ssize_t i, length
+ cdef list members
+ cdef object idx, key
+
+ length = len(index)
+
+ for i in range(length):
+ key = util.get_value_1d(labels, i)
+
+ if _checknull(key):
+ continue
+
+ idx = index[i]
+ if key in result:
+ members = result[key]
+ members.append(idx)
+ else:
+ result[key] = [idx]
+
+ return result
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def groupby_bool(ndarray[uint8_t] index, ndarray labels):
+ cdef dict result = {}
+ cdef Py_ssize_t i, length
+ cdef list members
+ cdef object idx, key
+
+ length = len(index)
+
+ for i in range(length):
+ key = util.get_value_1d(labels, i)
+
+ if _checknull(key):
+ continue
+
+ idx = index[i]
+ if key in result:
+ members = result[key]
+ members.append(idx)
+ else:
+ result[key] = [idx]
+
+ return result
+
+
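The `groupby_<dtype>` family above maps each non-null label to the list of index values carrying that label. A pure-Python sketch of the same semantics (the `isnull` predicate here is a stand-in for the Cython `_checknull`):

```python
def groupby_sketch(index, labels, isnull=lambda k: k is None):
    # Build {label: [index values]}, skipping null labels entirely,
    # mirroring the `continue` on _checknull(key) in the template.
    result = {}
    for idx, key in zip(index, labels):
        if isnull(key):
            continue
        result.setdefault(key, []).append(idx)
    return result
```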
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def arrmap_float64(ndarray[float64_t] index, object func):
+ cdef Py_ssize_t length = index.shape[0]
+ cdef Py_ssize_t i = 0
+
+ cdef ndarray[object] result = np.empty(length, dtype=np.object_)
+
+ from pandas.lib import maybe_convert_objects
+
+ for i in range(length):
+ result[i] = func(index[i])
+
+ return maybe_convert_objects(result)
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def arrmap_float32(ndarray[float32_t] index, object func):
+ cdef Py_ssize_t length = index.shape[0]
+ cdef Py_ssize_t i = 0
+
+ cdef ndarray[object] result = np.empty(length, dtype=np.object_)
+
+ from pandas.lib import maybe_convert_objects
+
+ for i in range(length):
+ result[i] = func(index[i])
+
+ return maybe_convert_objects(result)
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def arrmap_object(ndarray[object] index, object func):
+ cdef Py_ssize_t length = index.shape[0]
+ cdef Py_ssize_t i = 0
+
+ cdef ndarray[object] result = np.empty(length, dtype=np.object_)
+
+ from pandas.lib import maybe_convert_objects
+
+ for i in range(length):
+ result[i] = func(index[i])
+
+ return maybe_convert_objects(result)
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def arrmap_int8(ndarray[int8_t] index, object func):
+ cdef Py_ssize_t length = index.shape[0]
+ cdef Py_ssize_t i = 0
+
+ cdef ndarray[object] result = np.empty(length, dtype=np.object_)
+
+ from pandas.lib import maybe_convert_objects
+
+ for i in range(length):
+ result[i] = func(index[i])
+
+ return maybe_convert_objects(result)
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def arrmap_int16(ndarray[int16_t] index, object func):
+ cdef Py_ssize_t length = index.shape[0]
+ cdef Py_ssize_t i = 0
+
+ cdef ndarray[object] result = np.empty(length, dtype=np.object_)
+
+ from pandas.lib import maybe_convert_objects
+
+ for i in range(length):
+ result[i] = func(index[i])
+
+ return maybe_convert_objects(result)
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def arrmap_int32(ndarray[int32_t] index, object func):
+ cdef Py_ssize_t length = index.shape[0]
+ cdef Py_ssize_t i = 0
+
+ cdef ndarray[object] result = np.empty(length, dtype=np.object_)
+
+ from pandas.lib import maybe_convert_objects
+
+ for i in range(length):
+ result[i] = func(index[i])
+
+ return maybe_convert_objects(result)
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def arrmap_int64(ndarray[int64_t] index, object func):
+ cdef Py_ssize_t length = index.shape[0]
+ cdef Py_ssize_t i = 0
+
+ cdef ndarray[object] result = np.empty(length, dtype=np.object_)
+
+ from pandas.lib import maybe_convert_objects
+
+ for i in range(length):
+ result[i] = func(index[i])
+
+ return maybe_convert_objects(result)
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def arrmap_bool(ndarray[uint8_t] index, object func):
+ cdef Py_ssize_t length = index.shape[0]
+ cdef Py_ssize_t i = 0
+
+ cdef ndarray[object] result = np.empty(length, dtype=np.object_)
+
+ from pandas.lib import maybe_convert_objects
+
+ for i in range(length):
+ result[i] = func(index[i])
+
+ return maybe_convert_objects(result)
+
+
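The `arrmap_<dtype>` functions apply a Python callable elementwise into an object array and then run it through `maybe_convert_objects` to downcast back to a native dtype where possible. A minimal sketch of the elementwise part (dropping the dtype-inference step):

```python
def arrmap_sketch(values, func):
    # Elementwise apply; the generated Cython writes into a preallocated
    # object ndarray and lets maybe_convert_objects pick the result dtype.
    return [func(v) for v in values]
```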
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_axis0_float64(ndarray[float64_t, ndim=2] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[float64_t, ndim=2] outbuf
+ float64_t fv
+
+ n = len(indexer)
+ k = values.shape[1]
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+ if False and _checknan(fill_value):
+ for i in range(n):
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ raise ValueError('No NA values allowed')
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
+ else:
+ fv = fill_value
+ for i in range(n):
+ idx = indexer[i]
+ if idx == -1:
+ for j in range(k):
+ outbuf[i, j] = fv
+ else:
+ for j in range(k):
+ outbuf[i, j] = values[idx, j]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_axis0_float32(ndarray[float32_t, ndim=2] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[float32_t, ndim=2] outbuf
+ float32_t fv
+
+ n = len(indexer)
+ k = values.shape[1]
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+ if False and _checknan(fill_value):
+ for i in range(n):
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ raise ValueError('No NA values allowed')
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
+ else:
+ fv = fill_value
+ for i in range(n):
+ idx = indexer[i]
+ if idx == -1:
+ for j in range(k):
+ outbuf[i, j] = fv
+ else:
+ for j in range(k):
+ outbuf[i, j] = values[idx, j]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_axis0_object(ndarray[object, ndim=2] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[object, ndim=2] outbuf
+ object fv
+
+ n = len(indexer)
+ k = values.shape[1]
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+ if False and _checknan(fill_value):
+ for i in range(n):
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ raise ValueError('No NA values allowed')
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
+ else:
+ fv = fill_value
+ for i in range(n):
+ idx = indexer[i]
+ if idx == -1:
+ for j in range(k):
+ outbuf[i, j] = fv
+ else:
+ for j in range(k):
+ outbuf[i, j] = values[idx, j]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_axis0_int8(ndarray[int8_t, ndim=2] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[int8_t, ndim=2] outbuf
+ int8_t fv
+
+ n = len(indexer)
+ k = values.shape[1]
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+ if True and _checknan(fill_value):
+ for i in range(n):
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ raise ValueError('No NA values allowed')
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
+ else:
+ fv = fill_value
+ for i in range(n):
+ idx = indexer[i]
+ if idx == -1:
+ for j in range(k):
+ outbuf[i, j] = fv
+ else:
+ for j in range(k):
+ outbuf[i, j] = values[idx, j]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_axis0_int16(ndarray[int16_t, ndim=2] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[int16_t, ndim=2] outbuf
+ int16_t fv
+
+ n = len(indexer)
+ k = values.shape[1]
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+ if True and _checknan(fill_value):
+ for i in range(n):
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ raise ValueError('No NA values allowed')
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
+ else:
+ fv = fill_value
+ for i in range(n):
+ idx = indexer[i]
+ if idx == -1:
+ for j in range(k):
+ outbuf[i, j] = fv
+ else:
+ for j in range(k):
+ outbuf[i, j] = values[idx, j]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_axis0_int32(ndarray[int32_t, ndim=2] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[int32_t, ndim=2] outbuf
+ int32_t fv
+
+ n = len(indexer)
+ k = values.shape[1]
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+ if True and _checknan(fill_value):
+ for i in range(n):
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ raise ValueError('No NA values allowed')
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
+ else:
+ fv = fill_value
+ for i in range(n):
+ idx = indexer[i]
+ if idx == -1:
+ for j in range(k):
+ outbuf[i, j] = fv
+ else:
+ for j in range(k):
+ outbuf[i, j] = values[idx, j]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_axis0_int64(ndarray[int64_t, ndim=2] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[int64_t, ndim=2] outbuf
+ int64_t fv
+
+ n = len(indexer)
+ k = values.shape[1]
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+ if True and _checknan(fill_value):
+ for i in range(n):
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ raise ValueError('No NA values allowed')
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
+ else:
+ fv = fill_value
+ for i in range(n):
+ idx = indexer[i]
+ if idx == -1:
+ for j in range(k):
+ outbuf[i, j] = fv
+ else:
+ for j in range(k):
+ outbuf[i, j] = values[idx, j]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_axis0_bool(ndarray[uint8_t, ndim=2] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[uint8_t, ndim=2] outbuf
+ uint8_t fv
+
+ n = len(indexer)
+ k = values.shape[1]
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+ if True and _checknan(fill_value):
+ for i in range(n):
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ raise ValueError('No NA values allowed')
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
+ else:
+ fv = fill_value
+ for i in range(n):
+ idx = indexer[i]
+ if idx == -1:
+ for j in range(k):
+ outbuf[i, j] = fv
+ else:
+ for j in range(k):
+ outbuf[i, j] = values[idx, j]
+
+
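The `take_2d_axis0_<dtype>` template gathers whole rows by position, with `-1` in the indexer meaning "missing". A sketch over lists-of-lists, under the assumption that the fill value fits the output type (the integer/bool specializations above instead raise `ValueError` when the fill value is NaN, since an int buffer cannot hold it):

```python
def take_2d_axis0(values, indexer, fill_value=None):
    # Row gather: indexer entry -1 produces a full row of fill_value.
    k = len(values[0]) if values else 0
    out = []
    for idx in indexer:
        if idx == -1:
            out.append([fill_value] * k)
        else:
            out.append(list(values[idx]))
    return out
```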
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_axis1_float64(ndarray[float64_t, ndim=2] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[float64_t, ndim=2] outbuf
+ float64_t fv
+
+ n = len(values)
+ k = len(indexer)
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+ if False and _checknan(fill_value):
+ for j in range(k):
+ idx = indexer[j]
+
+ if idx == -1:
+ for i in range(n):
+ raise ValueError('No NA values allowed')
+ else:
+ for i in range(n):
+ outbuf[i, j] = values[i, idx]
+ else:
+ fv = fill_value
+ for j in range(k):
+ idx = indexer[j]
+
+ if idx == -1:
+ for i in range(n):
+ outbuf[i, j] = fv
+ else:
+ for i in range(n):
+ outbuf[i, j] = values[i, idx]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_axis1_float32(ndarray[float32_t, ndim=2] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[float32_t, ndim=2] outbuf
+ float32_t fv
+
+ n = len(values)
+ k = len(indexer)
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+ if False and _checknan(fill_value):
+ for j in range(k):
+ idx = indexer[j]
+
+ if idx == -1:
+ for i in range(n):
+ raise ValueError('No NA values allowed')
+ else:
+ for i in range(n):
+ outbuf[i, j] = values[i, idx]
+ else:
+ fv = fill_value
+ for j in range(k):
+ idx = indexer[j]
+
+ if idx == -1:
+ for i in range(n):
+ outbuf[i, j] = fv
+ else:
+ for i in range(n):
+ outbuf[i, j] = values[i, idx]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_axis1_object(ndarray[object, ndim=2] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[object, ndim=2] outbuf
+ object fv
+
+ n = len(values)
+ k = len(indexer)
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+ if False and _checknan(fill_value):
+ for j in range(k):
+ idx = indexer[j]
+
+ if idx == -1:
+ for i in range(n):
+ raise ValueError('No NA values allowed')
+ else:
+ for i in range(n):
+ outbuf[i, j] = values[i, idx]
+ else:
+ fv = fill_value
+ for j in range(k):
+ idx = indexer[j]
+
+ if idx == -1:
+ for i in range(n):
+ outbuf[i, j] = fv
+ else:
+ for i in range(n):
+ outbuf[i, j] = values[i, idx]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_axis1_int8(ndarray[int8_t, ndim=2] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[int8_t, ndim=2] outbuf
+ int8_t fv
+
+ n = len(values)
+ k = len(indexer)
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+ if True and _checknan(fill_value):
+ for j in range(k):
+ idx = indexer[j]
+
+ if idx == -1:
+ for i in range(n):
+ raise ValueError('No NA values allowed')
+ else:
+ for i in range(n):
+ outbuf[i, j] = values[i, idx]
+ else:
+ fv = fill_value
+ for j in range(k):
+ idx = indexer[j]
+
+ if idx == -1:
+ for i in range(n):
+ outbuf[i, j] = fv
+ else:
+ for i in range(n):
+ outbuf[i, j] = values[i, idx]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_axis1_int16(ndarray[int16_t, ndim=2] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[int16_t, ndim=2] outbuf
+ int16_t fv
+
+ n = len(values)
+ k = len(indexer)
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+ if True and _checknan(fill_value):
+ for j in range(k):
+ idx = indexer[j]
+
+ if idx == -1:
+ for i in range(n):
+ raise ValueError('No NA values allowed')
+ else:
+ for i in range(n):
+ outbuf[i, j] = values[i, idx]
+ else:
+ fv = fill_value
+ for j in range(k):
+ idx = indexer[j]
+
+ if idx == -1:
+ for i in range(n):
+ outbuf[i, j] = fv
+ else:
+ for i in range(n):
+ outbuf[i, j] = values[i, idx]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_axis1_int32(ndarray[int32_t, ndim=2] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[int32_t, ndim=2] outbuf
+ int32_t fv
+
+ n = len(values)
+ k = len(indexer)
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+ if True and _checknan(fill_value):
+ for j in range(k):
+ idx = indexer[j]
+
+ if idx == -1:
+ for i in range(n):
+ raise ValueError('No NA values allowed')
+ else:
+ for i in range(n):
+ outbuf[i, j] = values[i, idx]
+ else:
+ fv = fill_value
+ for j in range(k):
+ idx = indexer[j]
+
+ if idx == -1:
+ for i in range(n):
+ outbuf[i, j] = fv
+ else:
+ for i in range(n):
+ outbuf[i, j] = values[i, idx]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_axis1_int64(ndarray[int64_t, ndim=2] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[int64_t, ndim=2] outbuf
+ int64_t fv
+
+ n = len(values)
+ k = len(indexer)
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+ if True and _checknan(fill_value):
+ for j in range(k):
+ idx = indexer[j]
+
+ if idx == -1:
+ for i in range(n):
+ raise ValueError('No NA values allowed')
+ else:
+ for i in range(n):
+ outbuf[i, j] = values[i, idx]
+ else:
+ fv = fill_value
+ for j in range(k):
+ idx = indexer[j]
+
+ if idx == -1:
+ for i in range(n):
+ outbuf[i, j] = fv
+ else:
+ for i in range(n):
+ outbuf[i, j] = values[i, idx]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_axis1_bool(ndarray[uint8_t, ndim=2] values,
+ ndarray[int64_t] indexer,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[uint8_t, ndim=2] outbuf
+ uint8_t fv
+
+ n = len(values)
+ k = len(indexer)
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+ if True and _checknan(fill_value):
+ for j in range(k):
+ idx = indexer[j]
+
+ if idx == -1:
+ for i in range(n):
+ raise ValueError('No NA values allowed')
+ else:
+ for i in range(n):
+ outbuf[i, j] = values[i, idx]
+ else:
+ fv = fill_value
+ for j in range(k):
+ idx = indexer[j]
+
+ if idx == -1:
+ for i in range(n):
+ outbuf[i, j] = fv
+ else:
+ for i in range(n):
+ outbuf[i, j] = values[i, idx]
+
+
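The `take_2d_axis1_<dtype>` functions do the same gather along columns. In the generated code the loop order is `j` outer, `i` inner, filling the output one column at a time, which favors Fortran-contiguous buffers; a dtype-agnostic sketch:

```python
def take_2d_axis1(values, indexer, fill_value=None):
    # Column gather: for each row, pick row[idx] per indexer entry,
    # substituting fill_value where idx == -1.
    return [[fill_value if idx == -1 else row[idx] for idx in indexer]
            for row in values]
```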
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_multi_float64(ndarray[float64_t, ndim=2] values,
+ ndarray[int64_t] idx0,
+ ndarray[int64_t] idx1,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[float64_t, ndim=2] outbuf
+ float64_t fv
+
+ n = len(idx0)
+ k = len(idx1)
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+
+ if False and _checknan(fill_value):
+ for i in range(n):
+ idx = idx0[i]
+ if idx == -1:
+ for j in range(k):
+ raise ValueError('No NA values allowed')
+ else:
+ for j in range(k):
+ if idx1[j] == -1:
+ raise ValueError('No NA values allowed')
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ fv = fill_value
+ for i in range(n):
+ idx = idx0[i]
+ if idx == -1:
+ for j in range(k):
+ outbuf[i, j] = fv
+ else:
+ for j in range(k):
+ if idx1[j] == -1:
+ outbuf[i, j] = fv
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_multi_float32(ndarray[float32_t, ndim=2] values,
+ ndarray[int64_t] idx0,
+ ndarray[int64_t] idx1,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[float32_t, ndim=2] outbuf
+ float32_t fv
+
+ n = len(idx0)
+ k = len(idx1)
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+
+ if False and _checknan(fill_value):
+ for i in range(n):
+ idx = idx0[i]
+ if idx == -1:
+ for j in range(k):
+ raise ValueError('No NA values allowed')
+ else:
+ for j in range(k):
+ if idx1[j] == -1:
+ raise ValueError('No NA values allowed')
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ fv = fill_value
+ for i in range(n):
+ idx = idx0[i]
+ if idx == -1:
+ for j in range(k):
+ outbuf[i, j] = fv
+ else:
+ for j in range(k):
+ if idx1[j] == -1:
+ outbuf[i, j] = fv
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_multi_object(ndarray[object, ndim=2] values,
+ ndarray[int64_t] idx0,
+ ndarray[int64_t] idx1,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[object, ndim=2] outbuf
+ object fv
+
+ n = len(idx0)
+ k = len(idx1)
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+
+ if False and _checknan(fill_value):
+ for i in range(n):
+ idx = idx0[i]
+ if idx == -1:
+ for j in range(k):
+ raise ValueError('No NA values allowed')
+ else:
+ for j in range(k):
+ if idx1[j] == -1:
+ raise ValueError('No NA values allowed')
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ fv = fill_value
+ for i in range(n):
+ idx = idx0[i]
+ if idx == -1:
+ for j in range(k):
+ outbuf[i, j] = fv
+ else:
+ for j in range(k):
+ if idx1[j] == -1:
+ outbuf[i, j] = fv
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_multi_int8(ndarray[int8_t, ndim=2] values,
+ ndarray[int64_t] idx0,
+ ndarray[int64_t] idx1,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[int8_t, ndim=2] outbuf
+ int8_t fv
+
+ n = len(idx0)
+ k = len(idx1)
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+
+ if True and _checknan(fill_value):
+ for i in range(n):
+ idx = idx0[i]
+ if idx == -1:
+ for j in range(k):
+ raise ValueError('No NA values allowed')
+ else:
+ for j in range(k):
+ if idx1[j] == -1:
+ raise ValueError('No NA values allowed')
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ fv = fill_value
+ for i in range(n):
+ idx = idx0[i]
+ if idx == -1:
+ for j in range(k):
+ outbuf[i, j] = fv
+ else:
+ for j in range(k):
+ if idx1[j] == -1:
+ outbuf[i, j] = fv
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_multi_int16(ndarray[int16_t, ndim=2] values,
+ ndarray[int64_t] idx0,
+ ndarray[int64_t] idx1,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[int16_t, ndim=2] outbuf
+ int16_t fv
+
+ n = len(idx0)
+ k = len(idx1)
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+
+ if True and _checknan(fill_value):
+ for i in range(n):
+ idx = idx0[i]
+ if idx == -1:
+ for j in range(k):
+ raise ValueError('No NA values allowed')
+ else:
+ for j in range(k):
+ if idx1[j] == -1:
+ raise ValueError('No NA values allowed')
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ fv = fill_value
+ for i in range(n):
+ idx = idx0[i]
+ if idx == -1:
+ for j in range(k):
+ outbuf[i, j] = fv
+ else:
+ for j in range(k):
+ if idx1[j] == -1:
+ outbuf[i, j] = fv
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_multi_int32(ndarray[int32_t, ndim=2] values,
+ ndarray[int64_t] idx0,
+ ndarray[int64_t] idx1,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[int32_t, ndim=2] outbuf
+ int32_t fv
+
+ n = len(idx0)
+ k = len(idx1)
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+
+ if True and _checknan(fill_value):
+ for i in range(n):
+ idx = idx0[i]
+ if idx == -1:
+ for j in range(k):
+ raise ValueError('No NA values allowed')
+ else:
+ for j in range(k):
+ if idx1[j] == -1:
+ raise ValueError('No NA values allowed')
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ fv = fill_value
+ for i in range(n):
+ idx = idx0[i]
+ if idx == -1:
+ for j in range(k):
+ outbuf[i, j] = fv
+ else:
+ for j in range(k):
+ if idx1[j] == -1:
+ outbuf[i, j] = fv
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_multi_int64(ndarray[int64_t, ndim=2] values,
+ ndarray[int64_t] idx0,
+ ndarray[int64_t] idx1,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[int64_t, ndim=2] outbuf
+ int64_t fv
+
+ n = len(idx0)
+ k = len(idx1)
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+
+ if True and _checknan(fill_value):
+ for i in range(n):
+ idx = idx0[i]
+ if idx == -1:
+ for j in range(k):
+ raise ValueError('No NA values allowed')
+ else:
+ for j in range(k):
+ if idx1[j] == -1:
+ raise ValueError('No NA values allowed')
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ fv = fill_value
+ for i in range(n):
+ idx = idx0[i]
+ if idx == -1:
+ for j in range(k):
+ outbuf[i, j] = fv
+ else:
+ for j in range(k):
+ if idx1[j] == -1:
+ outbuf[i, j] = fv
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def take_2d_multi_bool(ndarray[uint8_t, ndim=2] values,
+ ndarray[int64_t] idx0,
+ ndarray[int64_t] idx1,
+ out=None, fill_value=np.nan):
+ cdef:
+ Py_ssize_t i, j, k, n, idx
+ ndarray[uint8_t, ndim=2] outbuf
+ uint8_t fv
+
+ n = len(idx0)
+ k = len(idx1)
+
+ if out is None:
+ outbuf = np.empty((n, k), dtype=values.dtype)
+ else:
+ outbuf = out
+
+
+ if True and _checknan(fill_value):
+ for i in range(n):
+ idx = idx0[i]
+ if idx == -1:
+ for j in range(k):
+ raise ValueError('No NA values allowed')
+ else:
+ for j in range(k):
+ if idx1[j] == -1:
+ raise ValueError('No NA values allowed')
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ fv = fill_value
+ for i in range(n):
+ idx = idx0[i]
+ if idx == -1:
+ for j in range(k):
+ outbuf[i, j] = fv
+ else:
+ for j in range(k):
+ if idx1[j] == -1:
+ outbuf[i, j] = fv
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
+
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def diff_2d_float64(ndarray[float64_t, ndim=2] arr,
+ ndarray[float64_t, ndim=2] out,
+ Py_ssize_t periods, int axis):
+ cdef:
+ Py_ssize_t i, j, sx, sy
+
+ sx, sy = (<object> arr).shape
+ if arr.flags.f_contiguous:
+ if axis == 0:
+ if periods >= 0:
+ start, stop = periods, sx
+ else:
+ start, stop = 0, sx + periods
+ for j in range(sy):
+ for i in range(start, stop):
+ out[i, j] = arr[i, j] - arr[i - periods, j]
+ else:
+ if periods >= 0:
+ start, stop = periods, sy
+ else:
+ start, stop = 0, sy + periods
+ for j in range(start, stop):
+ for i in range(sx):
+ out[i, j] = arr[i, j] - arr[i, j - periods]
+ else:
+ if axis == 0:
+ if periods >= 0:
+ start, stop = periods, sx
+ else:
+ start, stop = 0, sx + periods
+ for i in range(start, stop):
+ for j in range(sy):
+ out[i, j] = arr[i, j] - arr[i - periods, j]
+ else:
+ if periods >= 0:
+ start, stop = periods, sy
+ else:
+ start, stop = 0, sy + periods
+ for i in range(sx):
+ for j in range(start, stop):
+ out[i, j] = arr[i, j] - arr[i, j - periods]
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def diff_2d_float32(ndarray[float32_t, ndim=2] arr,
+ ndarray[float32_t, ndim=2] out,
+ Py_ssize_t periods, int axis):
+ cdef:
+ Py_ssize_t i, j, sx, sy
+
+ sx, sy = (<object> arr).shape
+ if arr.flags.f_contiguous:
+ if axis == 0:
+ if periods >= 0:
+ start, stop = periods, sx
+ else:
+ start, stop = 0, sx + periods
+ for j in range(sy):
+ for i in range(start, stop):
+ out[i, j] = arr[i, j] - arr[i - periods, j]
+ else:
+ if periods >= 0:
+ start, stop = periods, sy
+ else:
+ start, stop = 0, sy + periods
+ for j in range(start, stop):
+ for i in range(sx):
+ out[i, j] = arr[i, j] - arr[i, j - periods]
+ else:
+ if axis == 0:
+ if periods >= 0:
+ start, stop = periods, sx
+ else:
+ start, stop = 0, sx + periods
+ for i in range(start, stop):
+ for j in range(sy):
+ out[i, j] = arr[i, j] - arr[i - periods, j]
+ else:
+ if periods >= 0:
+ start, stop = periods, sy
+ else:
+ start, stop = 0, sy + periods
+ for i in range(sx):
+ for j in range(start, stop):
+ out[i, j] = arr[i, j] - arr[i, j - periods]
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def diff_2d_int8(ndarray[int8_t, ndim=2] arr,
+ ndarray[float32_t, ndim=2] out,
+ Py_ssize_t periods, int axis):
+ cdef:
+ Py_ssize_t i, j, sx, sy
+
+ sx, sy = (<object> arr).shape
+ if arr.flags.f_contiguous:
+ if axis == 0:
+ if periods >= 0:
+ start, stop = periods, sx
+ else:
+ start, stop = 0, sx + periods
+ for j in range(sy):
+ for i in range(start, stop):
+ out[i, j] = arr[i, j] - arr[i - periods, j]
+ else:
+ if periods >= 0:
+ start, stop = periods, sy
+ else:
+ start, stop = 0, sy + periods
+ for j in range(start, stop):
+ for i in range(sx):
+ out[i, j] = arr[i, j] - arr[i, j - periods]
+ else:
+ if axis == 0:
+ if periods >= 0:
+ start, stop = periods, sx
+ else:
+ start, stop = 0, sx + periods
+ for i in range(start, stop):
+ for j in range(sy):
+ out[i, j] = arr[i, j] - arr[i - periods, j]
+ else:
+ if periods >= 0:
+ start, stop = periods, sy
+ else:
+ start, stop = 0, sy + periods
+ for i in range(sx):
+ for j in range(start, stop):
+ out[i, j] = arr[i, j] - arr[i, j - periods]
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def diff_2d_int16(ndarray[int16_t, ndim=2] arr,
+ ndarray[float32_t, ndim=2] out,
+ Py_ssize_t periods, int axis):
+ cdef:
+ Py_ssize_t i, j, sx, sy
+
+ sx, sy = (<object> arr).shape
+ if arr.flags.f_contiguous:
+ if axis == 0:
+ if periods >= 0:
+ start, stop = periods, sx
+ else:
+ start, stop = 0, sx + periods
+ for j in range(sy):
+ for i in range(start, stop):
+ out[i, j] = arr[i, j] - arr[i - periods, j]
+ else:
+ if periods >= 0:
+ start, stop = periods, sy
+ else:
+ start, stop = 0, sy + periods
+ for j in range(start, stop):
+ for i in range(sx):
+ out[i, j] = arr[i, j] - arr[i, j - periods]
+ else:
+ if axis == 0:
+ if periods >= 0:
+ start, stop = periods, sx
+ else:
+ start, stop = 0, sx + periods
+ for i in range(start, stop):
+ for j in range(sy):
+ out[i, j] = arr[i, j] - arr[i - periods, j]
+ else:
+ if periods >= 0:
+ start, stop = periods, sy
+ else:
+ start, stop = 0, sy + periods
+ for i in range(sx):
+ for j in range(start, stop):
+ out[i, j] = arr[i, j] - arr[i, j - periods]
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def diff_2d_int32(ndarray[int32_t, ndim=2] arr,
+ ndarray[float64_t, ndim=2] out,
+ Py_ssize_t periods, int axis):
+ cdef:
+ Py_ssize_t i, j, sx, sy
+
+ sx, sy = (<object> arr).shape
+ if arr.flags.f_contiguous:
+ if axis == 0:
+ if periods >= 0:
+ start, stop = periods, sx
+ else:
+ start, stop = 0, sx + periods
+ for j in range(sy):
+ for i in range(start, stop):
+ out[i, j] = arr[i, j] - arr[i - periods, j]
+ else:
+ if periods >= 0:
+ start, stop = periods, sy
+ else:
+ start, stop = 0, sy + periods
+ for j in range(start, stop):
+ for i in range(sx):
+ out[i, j] = arr[i, j] - arr[i, j - periods]
+ else:
+ if axis == 0:
+ if periods >= 0:
+ start, stop = periods, sx
+ else:
+ start, stop = 0, sx + periods
+ for i in range(start, stop):
+ for j in range(sy):
+ out[i, j] = arr[i, j] - arr[i - periods, j]
+ else:
+ if periods >= 0:
+ start, stop = periods, sy
+ else:
+ start, stop = 0, sy + periods
+ for i in range(sx):
+ for j in range(start, stop):
+ out[i, j] = arr[i, j] - arr[i, j - periods]
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def diff_2d_int64(ndarray[int64_t, ndim=2] arr,
+ ndarray[float64_t, ndim=2] out,
+ Py_ssize_t periods, int axis):
+ cdef:
+ Py_ssize_t i, j, sx, sy
+
+ sx, sy = (<object> arr).shape
+ if arr.flags.f_contiguous:
+ if axis == 0:
+ if periods >= 0:
+ start, stop = periods, sx
+ else:
+ start, stop = 0, sx + periods
+ for j in range(sy):
+ for i in range(start, stop):
+ out[i, j] = arr[i, j] - arr[i - periods, j]
+ else:
+ if periods >= 0:
+ start, stop = periods, sy
+ else:
+ start, stop = 0, sy + periods
+ for j in range(start, stop):
+ for i in range(sx):
+ out[i, j] = arr[i, j] - arr[i, j - periods]
+ else:
+ if axis == 0:
+ if periods >= 0:
+ start, stop = periods, sx
+ else:
+ start, stop = 0, sx + periods
+ for i in range(start, stop):
+ for j in range(sy):
+ out[i, j] = arr[i, j] - arr[i - periods, j]
+ else:
+ if periods >= 0:
+ start, stop = periods, sy
+ else:
+ start, stop = 0, sy + periods
+ for i in range(sx):
+ for j in range(start, stop):
+ out[i, j] = arr[i, j] - arr[i, j - periods]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_last_float64(ndarray[float64_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float64_t, ndim=2] values,
+ ndarray[int64_t] labels):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ float64_t val, count
+ ndarray[float64_t, ndim=2] resx
+ ndarray[int64_t, ndim=2] nobs
+
+ nobs = np.zeros((<object> out).shape, dtype=np.int64)
+ resx = np.empty_like(out)
+
+ N, K = (<object> values).shape
+
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ resx[lab, j] = val
+
+ for i in range(len(counts)):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = resx[i, j]
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_last_float32(ndarray[float32_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float32_t, ndim=2] values,
+ ndarray[int64_t] labels):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ float32_t val, count
+ ndarray[float32_t, ndim=2] resx
+ ndarray[int64_t, ndim=2] nobs
+
+ nobs = np.zeros((<object> out).shape, dtype=np.int64)
+ resx = np.empty_like(out)
+
+ N, K = (<object> values).shape
+
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ resx[lab, j] = val
+
+ for i in range(len(counts)):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = resx[i, j]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_last_bin_float64(ndarray[float64_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float64_t, ndim=2] values,
+ ndarray[int64_t] bins):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+ float64_t val, count
+ ndarray[float64_t, ndim=2] resx, nobs
+
+ nobs = np.zeros_like(out)
+ resx = np.empty_like(out)
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+
+ N, K = (<object> values).shape
+
+ b = 0
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ resx[b, j] = val
+
+ for i in range(ngroups):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = resx[i, j]
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_last_bin_float32(ndarray[float32_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float32_t, ndim=2] values,
+ ndarray[int64_t] bins):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+ float32_t val, count
+ ndarray[float32_t, ndim=2] resx, nobs
+
+ nobs = np.zeros_like(out)
+ resx = np.empty_like(out)
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+
+ N, K = (<object> values).shape
+
+ b = 0
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ resx[b, j] = val
+
+ for i in range(ngroups):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = resx[i, j]
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def group_nth_float64(ndarray[float64_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float64_t, ndim=2] values,
+ ndarray[int64_t] labels, int64_t rank):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ float64_t val, count
+ ndarray[float64_t, ndim=2] resx
+ ndarray[int64_t, ndim=2] nobs
+
+ nobs = np.zeros((<object> out).shape, dtype=np.int64)
+ resx = np.empty_like(out)
+
+ N, K = (<object> values).shape
+
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ if nobs[lab, j] == rank:
+ resx[lab, j] = val
+
+ for i in range(len(counts)):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = resx[i, j]
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def group_nth_float32(ndarray[float32_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float32_t, ndim=2] values,
+ ndarray[int64_t] labels, int64_t rank):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ float32_t val, count
+ ndarray[float32_t, ndim=2] resx
+ ndarray[int64_t, ndim=2] nobs
+
+ nobs = np.zeros((<object> out).shape, dtype=np.int64)
+ resx = np.empty_like(out)
+
+ N, K = (<object> values).shape
+
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ if nobs[lab, j] == rank:
+ resx[lab, j] = val
+
+ for i in range(len(counts)):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = resx[i, j]
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def group_nth_bin_float64(ndarray[float64_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float64_t, ndim=2] values,
+ ndarray[int64_t] bins, int64_t rank):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+ float64_t val, count
+ ndarray[float64_t, ndim=2] resx, nobs
+
+ nobs = np.zeros_like(out)
+ resx = np.empty_like(out)
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+
+ N, K = (<object> values).shape
+
+ b = 0
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ if nobs[b, j] == rank:
+ resx[b, j] = val
+
+ for i in range(ngroups):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = resx[i, j]
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def group_nth_bin_float32(ndarray[float32_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float32_t, ndim=2] values,
+ ndarray[int64_t] bins, int64_t rank):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+ float32_t val, count
+ ndarray[float32_t, ndim=2] resx, nobs
+
+ nobs = np.zeros_like(out)
+ resx = np.empty_like(out)
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+
+ N, K = (<object> values).shape
+
+ b = 0
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ if nobs[b, j] == rank:
+ resx[b, j] = val
+
+ for i in range(ngroups):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = resx[i, j]
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def group_add_float64(ndarray[float64_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float64_t, ndim=2] values,
+ ndarray[int64_t] labels):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ float64_t val, count
+ ndarray[float64_t, ndim=2] sumx, nobs
+
+ nobs = np.zeros_like(out)
+ sumx = np.zeros_like(out)
+
+ N, K = (<object> values).shape
+
+ if K > 1:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ sumx[lab, j] += val
+ else:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[lab, 0] += 1
+ sumx[lab, 0] += val
+
+ for i in range(len(counts)):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = sumx[i, j]
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def group_add_float32(ndarray[float32_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float32_t, ndim=2] values,
+ ndarray[int64_t] labels):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ float32_t val, count
+ ndarray[float32_t, ndim=2] sumx, nobs
+
+ nobs = np.zeros_like(out)
+ sumx = np.zeros_like(out)
+
+ N, K = (<object> values).shape
+
+ if K > 1:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ sumx[lab, j] += val
+ else:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[lab, 0] += 1
+ sumx[lab, 0] += val
+
+ for i in range(len(counts)):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = sumx[i, j]
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def group_add_bin_float64(ndarray[float64_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float64_t, ndim=2] values,
+ ndarray[int64_t] bins):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b, nbins
+ float64_t val, count
+ ndarray[float64_t, ndim=2] sumx, nobs
+
+ nobs = np.zeros_like(out)
+ sumx = np.zeros_like(out)
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+ N, K = (<object> values).shape
+
+ b = 0
+ if K > 1:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ sumx[b, j] += val
+ else:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[b, 0] += 1
+ sumx[b, 0] += val
+
+ for i in range(ngroups):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = sumx[i, j]
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def group_add_bin_float32(ndarray[float32_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float32_t, ndim=2] values,
+ ndarray[int64_t] bins):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b, nbins
+ float32_t val, count
+ ndarray[float32_t, ndim=2] sumx, nobs
+
+ nobs = np.zeros_like(out)
+ sumx = np.zeros_like(out)
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+ N, K = (<object> values).shape
+
+ b = 0
+ if K > 1:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ sumx[b, j] += val
+ else:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[b, 0] += 1
+ sumx[b, 0] += val
+
+ for i in range(ngroups):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = sumx[i, j]
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def group_prod_float64(ndarray[float64_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float64_t, ndim=2] values,
+ ndarray[int64_t] labels):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ float64_t val, count
+ ndarray[float64_t, ndim=2] prodx, nobs
+
+ nobs = np.zeros_like(out)
+ prodx = np.ones_like(out)
+
+ N, K = (<object> values).shape
+
+ if K > 1:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ prodx[lab, j] *= val
+ else:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[lab, 0] += 1
+ prodx[lab, 0] *= val
+
+ for i in range(len(counts)):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = prodx[i, j]
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def group_prod_float32(ndarray[float32_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float32_t, ndim=2] values,
+ ndarray[int64_t] labels):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ float32_t val, count
+ ndarray[float32_t, ndim=2] prodx, nobs
+
+ nobs = np.zeros_like(out)
+ prodx = np.ones_like(out)
+
+ N, K = (<object> values).shape
+
+ if K > 1:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ prodx[lab, j] *= val
+ else:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[lab, 0] += 1
+ prodx[lab, 0] *= val
+
+ for i in range(len(counts)):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = prodx[i, j]
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def group_prod_bin_float64(ndarray[float64_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float64_t, ndim=2] values,
+ ndarray[int64_t] bins):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+ float64_t val, count
+ ndarray[float64_t, ndim=2] prodx, nobs
+
+ nobs = np.zeros_like(out)
+ prodx = np.ones_like(out)
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+ N, K = (<object> values).shape
+
+ b = 0
+ if K > 1:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ prodx[b, j] *= val
+ else:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[b, 0] += 1
+ prodx[b, 0] *= val
+
+ for i in range(ngroups):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = prodx[i, j]
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def group_prod_bin_float32(ndarray[float32_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float32_t, ndim=2] values,
+ ndarray[int64_t] bins):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+ float32_t val, count
+ ndarray[float32_t, ndim=2] prodx, nobs
+
+ nobs = np.zeros_like(out)
+ prodx = np.ones_like(out)
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+ N, K = (<object> values).shape
+
+ b = 0
+ if K > 1:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ prodx[b, j] *= val
+ else:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[b, 0] += 1
+ prodx[b, 0] *= val
+
+ for i in range(ngroups):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = prodx[i, j]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_var_float64(ndarray[float64_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float64_t, ndim=2] values,
+ ndarray[int64_t] labels):
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ float64_t val, ct
+ ndarray[float64_t, ndim=2] nobs, sumx, sumxx
+
+ nobs = np.zeros_like(out)
+ sumx = np.zeros_like(out)
+ sumxx = np.zeros_like(out)
+
+ N, K = (<object> values).shape
+
+ if K > 1:
+ for i in range(N):
+
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ sumx[lab, j] += val
+ sumxx[lab, j] += val * val
+ else:
+ for i in range(N):
+
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ val = values[i, 0]
+ # not nan
+ if val == val:
+ nobs[lab, 0] += 1
+ sumx[lab, 0] += val
+ sumxx[lab, 0] += val * val
+
+
+ for i in range(len(counts)):
+ for j in range(K):
+ ct = nobs[i, j]
+ if ct < 2:
+ out[i, j] = nan
+ else:
+ out[i, j] = ((ct * sumxx[i, j] - sumx[i, j] * sumx[i, j]) /
+ (ct * ct - ct))
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_var_float32(ndarray[float32_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float32_t, ndim=2] values,
+ ndarray[int64_t] labels):
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ float32_t val, ct
+ ndarray[float32_t, ndim=2] nobs, sumx, sumxx
+
+ nobs = np.zeros_like(out)
+ sumx = np.zeros_like(out)
+ sumxx = np.zeros_like(out)
+
+ N, K = (<object> values).shape
+
+ if K > 1:
+ for i in range(N):
+
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ sumx[lab, j] += val
+ sumxx[lab, j] += val * val
+ else:
+ for i in range(N):
+
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ val = values[i, 0]
+ # not nan
+ if val == val:
+ nobs[lab, 0] += 1
+ sumx[lab, 0] += val
+ sumxx[lab, 0] += val * val
+
+
+ for i in range(len(counts)):
+ for j in range(K):
+ ct = nobs[i, j]
+ if ct < 2:
+ out[i, j] = nan
+ else:
+ out[i, j] = ((ct * sumxx[i, j] - sumx[i, j] * sumx[i, j]) /
+ (ct * ct - ct))
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_var_bin_float64(ndarray[float64_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float64_t, ndim=2] values,
+ ndarray[int64_t] bins):
+
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+ float64_t val, ct
+ ndarray[float64_t, ndim=2] nobs, sumx, sumxx
+
+ nobs = np.zeros_like(out)
+ sumx = np.zeros_like(out)
+ sumxx = np.zeros_like(out)
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+
+ N, K = (<object> values).shape
+
+ b = 0
+ if K > 1:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ sumx[b, j] += val
+ sumxx[b, j] += val * val
+ else:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[b, 0] += 1
+ sumx[b, 0] += val
+ sumxx[b, 0] += val * val
+
+ for i in range(ngroups):
+ for j in range(K):
+ ct = nobs[i, j]
+ if ct < 2:
+ out[i, j] = nan
+ else:
+ out[i, j] = ((ct * sumxx[i, j] - sumx[i, j] * sumx[i, j]) /
+ (ct * ct - ct))
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_var_bin_float32(ndarray[float32_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float32_t, ndim=2] values,
+ ndarray[int64_t] bins):
+
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+ float32_t val, ct
+ ndarray[float32_t, ndim=2] nobs, sumx, sumxx
+
+ nobs = np.zeros_like(out)
+ sumx = np.zeros_like(out)
+ sumxx = np.zeros_like(out)
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+
+ N, K = (<object> values).shape
+
+ b = 0
+ if K > 1:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ sumx[b, j] += val
+ sumxx[b, j] += val * val
+ else:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[b, 0] += 1
+ sumx[b, 0] += val
+ sumxx[b, 0] += val * val
+
+ for i in range(ngroups):
+ for j in range(K):
+ ct = nobs[i, j]
+ if ct < 2:
+ out[i, j] = nan
+ else:
+ out[i, j] = ((ct * sumxx[i, j] - sumx[i, j] * sumx[i, j]) /
+ (ct * ct - ct))
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_mean_float64(ndarray[float64_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float64_t, ndim=2] values,
+ ndarray[int64_t] labels):
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ float64_t val, count
+ ndarray[float64_t, ndim=2] sumx, nobs
+
+ nobs = np.zeros_like(out)
+ sumx = np.zeros_like(out)
+
+ N, K = (<object> values).shape
+
+ if K > 1:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ sumx[lab, j] += val
+ else:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ val = values[i, 0]
+ # not nan
+ if val == val:
+ nobs[lab, 0] += 1
+ sumx[lab, 0] += val
+
+ for i in range(len(counts)):
+ for j in range(K):
+ count = nobs[i, j]
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = sumx[i, j] / count
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_mean_float32(ndarray[float32_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float32_t, ndim=2] values,
+ ndarray[int64_t] labels):
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ float32_t val, count
+ ndarray[float32_t, ndim=2] sumx, nobs
+
+ nobs = np.zeros_like(out)
+ sumx = np.zeros_like(out)
+
+ N, K = (<object> values).shape
+
+ if K > 1:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ sumx[lab, j] += val
+ else:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ val = values[i, 0]
+ # not nan
+ if val == val:
+ nobs[lab, 0] += 1
+ sumx[lab, 0] += val
+
+ for i in range(len(counts)):
+ for j in range(K):
+ count = nobs[i, j]
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = sumx[i, j] / count
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_mean_bin_float64(ndarray[float64_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float64_t, ndim=2] values,
+ ndarray[int64_t] bins):
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+ float64_t val, count
+ ndarray[float64_t, ndim=2] sumx, nobs
+
+ nobs = np.zeros_like(out)
+ sumx = np.zeros_like(out)
+
+ N, K = (<object> values).shape
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+
+ b = 0
+ if K > 1:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ sumx[b, j] += val
+ else:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[b, 0] += 1
+ sumx[b, 0] += val
+
+ for i in range(ngroups):
+ for j in range(K):
+ count = nobs[i, j]
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = sumx[i, j] / count
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_mean_bin_float32(ndarray[float32_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float32_t, ndim=2] values,
+ ndarray[int64_t] bins):
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+ float32_t val, count
+ ndarray[float32_t, ndim=2] sumx, nobs
+
+ nobs = np.zeros_like(out)
+ sumx = np.zeros_like(out)
+
+ N, K = (<object> values).shape
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+
+ b = 0
+ if K > 1:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ sumx[b, j] += val
+ else:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[b, 0] += 1
+ sumx[b, 0] += val
+
+ for i in range(ngroups):
+ for j in range(K):
+ count = nobs[i, j]
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = sumx[i, j] / count
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_min_float64(ndarray[float64_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float64_t, ndim=2] values,
+ ndarray[int64_t] labels):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ float64_t val, count
+ ndarray[float64_t, ndim=2] minx, nobs
+
+ nobs = np.zeros_like(out)
+
+ minx = np.empty_like(out)
+ minx.fill(np.inf)
+
+ N, K = (<object> values).shape
+
+ if K > 1:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ if val < minx[lab, j]:
+ minx[lab, j] = val
+ else:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[lab, 0] += 1
+ if val < minx[lab, 0]:
+ minx[lab, 0] = val
+
+ for i in range(len(counts)):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = minx[i, j]
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_min_float32(ndarray[float32_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float32_t, ndim=2] values,
+ ndarray[int64_t] labels):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+ float32_t val, count
+ ndarray[float32_t, ndim=2] minx, nobs
+
+ nobs = np.zeros_like(out)
+
+ minx = np.empty_like(out)
+ minx.fill(np.inf)
+
+ N, K = (<object> values).shape
+
+ if K > 1:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ if val < minx[lab, j]:
+ minx[lab, j] = val
+ else:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[lab, 0] += 1
+ if val < minx[lab, 0]:
+ minx[lab, 0] = val
+
+ for i in range(len(counts)):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = minx[i, j]
+
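The `group_min_*` kernels above all share one pattern: accumulate a per-group minimum together with a per-group count of non-NaN observations, relying on the IEEE 754 fact that NaN is the only value for which `val == val` is false, then emit NaN for any group that saw no valid value. A minimal pure-Python sketch of that logic (function name is illustrative, not part of this patch):

```python
import math

def group_min_sketch(values, labels, ngroups):
    # Mirrors the Cython kernel: track per-group minima, count non-NaN
    # observations, and emit NaN for groups with no valid observation.
    minx = [math.inf] * ngroups
    nobs = [0] * ngroups
    for val, lab in zip(values, labels):
        if lab < 0:           # negative label means "row not in any group"
            continue
        if val == val:        # NaN is the only value unequal to itself
            nobs[lab] += 1
            if val < minx[lab]:
                minx[lab] = val
    return [minx[i] if nobs[i] else math.nan for i in range(ngroups)]
```

For example, `group_min_sketch([3.0, 1.0, float('nan'), 2.0], [0, 0, 1, 1], 2)` yields `[1.0, 2.0]`: the NaN in group 1 is skipped rather than propagated.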
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_min_bin_float64(ndarray[float64_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float64_t, ndim=2] values,
+ ndarray[int64_t] bins):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+        float64_t val
+ ndarray[float64_t, ndim=2] minx, nobs
+
+ nobs = np.zeros_like(out)
+
+ minx = np.empty_like(out)
+ minx.fill(np.inf)
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+
+ N, K = (<object> values).shape
+
+ b = 0
+ if K > 1:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ if val < minx[b, j]:
+ minx[b, j] = val
+ else:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[b, 0] += 1
+ if val < minx[b, 0]:
+ minx[b, 0] = val
+
+ for i in range(ngroups):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = minx[i, j]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_min_bin_float32(ndarray[float32_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float32_t, ndim=2] values,
+ ndarray[int64_t] bins):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+        float32_t val
+ ndarray[float32_t, ndim=2] minx, nobs
+
+ nobs = np.zeros_like(out)
+
+ minx = np.empty_like(out)
+ minx.fill(np.inf)
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+
+ N, K = (<object> values).shape
+
+ b = 0
+ if K > 1:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ if val < minx[b, j]:
+ minx[b, j] = val
+ else:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[b, 0] += 1
+ if val < minx[b, 0]:
+ minx[b, 0] = val
+
+ for i in range(ngroups):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = minx[i, j]
+
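The `*_bin` variants map rows to groups by monotonic bin edges instead of explicit labels: if the last edge already covers every row, the edges themselves enumerate the groups, otherwise there is one trailing open group. The `ngroups` computation and the `while b < ngroups - 1 and i >= bins[b]` scan can be sketched in pure Python (hypothetical helper names, not part of the patch):

```python
def ngroups_from_bins(bins, nvalues):
    # Mirrors `if bins[len(bins) - 1] == len(values)` in the kernel,
    # with an extra guard for empty bins added for the sketch.
    if len(bins) and bins[-1] == nvalues:
        return len(bins)
    return len(bins) + 1

def assign_bins(bins, nvalues):
    # Row i belongs to group b while i < bins[b]; this reproduces the
    # `while b < ngroups - 1 and i >= bins[b]: b += 1` advance.
    ngroups = ngroups_from_bins(bins, nvalues)
    out, b = [], 0
    for i in range(nvalues):
        while b < ngroups - 1 and i >= bins[b]:
            b += 1
        out.append(b)
    return out
```

With edges `[2, 4]` over 5 rows the last edge does not cover row 4, so three groups result: `assign_bins([2, 4], 5)` gives `[0, 0, 1, 1, 2]`.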
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_max_float64(ndarray[float64_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float64_t, ndim=2] values,
+ ndarray[int64_t] labels):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+        float64_t val
+ ndarray[float64_t, ndim=2] maxx, nobs
+
+ nobs = np.zeros_like(out)
+
+ maxx = np.empty_like(out)
+ maxx.fill(-np.inf)
+
+ N, K = (<object> values).shape
+
+ if K > 1:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ if val > maxx[lab, j]:
+ maxx[lab, j] = val
+ else:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[lab, 0] += 1
+ if val > maxx[lab, 0]:
+ maxx[lab, 0] = val
+
+ for i in range(len(counts)):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = maxx[i, j]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_max_float32(ndarray[float32_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float32_t, ndim=2] values,
+ ndarray[int64_t] labels):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, lab
+        float32_t val
+ ndarray[float32_t, ndim=2] maxx, nobs
+
+ nobs = np.zeros_like(out)
+
+ maxx = np.empty_like(out)
+ maxx.fill(-np.inf)
+
+ N, K = (<object> values).shape
+
+ if K > 1:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ if val > maxx[lab, j]:
+ maxx[lab, j] = val
+ else:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[lab, 0] += 1
+ if val > maxx[lab, 0]:
+ maxx[lab, 0] = val
+
+ for i in range(len(counts)):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = maxx[i, j]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_max_bin_float64(ndarray[float64_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float64_t, ndim=2] values,
+ ndarray[int64_t] bins):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+        float64_t val
+ ndarray[float64_t, ndim=2] maxx, nobs
+
+ nobs = np.zeros_like(out)
+ maxx = np.empty_like(out)
+ maxx.fill(-np.inf)
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+
+ N, K = (<object> values).shape
+
+ b = 0
+ if K > 1:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ if val > maxx[b, j]:
+ maxx[b, j] = val
+ else:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[b, 0] += 1
+ if val > maxx[b, 0]:
+ maxx[b, 0] = val
+
+ for i in range(ngroups):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = maxx[i, j]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_max_bin_float32(ndarray[float32_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float32_t, ndim=2] values,
+ ndarray[int64_t] bins):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+        float32_t val
+ ndarray[float32_t, ndim=2] maxx, nobs
+
+ nobs = np.zeros_like(out)
+ maxx = np.empty_like(out)
+ maxx.fill(-np.inf)
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+
+ N, K = (<object> values).shape
+
+ b = 0
+ if K > 1:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[b, j] += 1
+ if val > maxx[b, j]:
+ maxx[b, j] = val
+ else:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ b += 1
+
+ counts[b] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ nobs[b, 0] += 1
+ if val > maxx[b, 0]:
+ maxx[b, 0] = val
+
+ for i in range(ngroups):
+ for j in range(K):
+ if nobs[i, j] == 0:
+ out[i, j] = nan
+ else:
+ out[i, j] = maxx[i, j]
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_ohlc_float64(ndarray[float64_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float64_t, ndim=2] values,
+ ndarray[int64_t] bins):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+        float64_t val
+ float64_t vopen, vhigh, vlow, vclose, NA
+ bint got_first = 0
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+
+ N, K = (<object> values).shape
+
+ if out.shape[1] != 4:
+ raise ValueError('Output array must have 4 columns')
+
+ NA = np.nan
+
+ b = 0
+ if K > 1:
+ raise NotImplementedError
+ else:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ if not got_first:
+ out[b, 0] = NA
+ out[b, 1] = NA
+ out[b, 2] = NA
+ out[b, 3] = NA
+ else:
+ out[b, 0] = vopen
+ out[b, 1] = vhigh
+ out[b, 2] = vlow
+ out[b, 3] = vclose
+ b += 1
+ got_first = 0
+
+ counts[b] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ if not got_first:
+ got_first = 1
+ vopen = val
+ vlow = val
+ vhigh = val
+ else:
+ if val < vlow:
+ vlow = val
+ if val > vhigh:
+ vhigh = val
+ vclose = val
+
+ if not got_first:
+ out[b, 0] = NA
+ out[b, 1] = NA
+ out[b, 2] = NA
+ out[b, 3] = NA
+ else:
+ out[b, 0] = vopen
+ out[b, 1] = vhigh
+ out[b, 2] = vlow
+ out[b, 3] = vclose
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def group_ohlc_float32(ndarray[float32_t, ndim=2] out,
+ ndarray[int64_t] counts,
+ ndarray[float32_t, ndim=2] values,
+ ndarray[int64_t] bins):
+ '''
+ Only aggregates on axis=0
+ '''
+ cdef:
+ Py_ssize_t i, j, N, K, ngroups, b
+        float32_t val
+ float32_t vopen, vhigh, vlow, vclose, NA
+ bint got_first = 0
+
+ if bins[len(bins) - 1] == len(values):
+ ngroups = len(bins)
+ else:
+ ngroups = len(bins) + 1
+
+ N, K = (<object> values).shape
+
+ if out.shape[1] != 4:
+ raise ValueError('Output array must have 4 columns')
+
+ NA = np.nan
+
+ b = 0
+ if K > 1:
+ raise NotImplementedError
+ else:
+ for i in range(N):
+ while b < ngroups - 1 and i >= bins[b]:
+ if not got_first:
+ out[b, 0] = NA
+ out[b, 1] = NA
+ out[b, 2] = NA
+ out[b, 3] = NA
+ else:
+ out[b, 0] = vopen
+ out[b, 1] = vhigh
+ out[b, 2] = vlow
+ out[b, 3] = vclose
+ b += 1
+ got_first = 0
+
+ counts[b] += 1
+ val = values[i, 0]
+
+ # not nan
+ if val == val:
+ if not got_first:
+ got_first = 1
+ vopen = val
+ vlow = val
+ vhigh = val
+ else:
+ if val < vlow:
+ vlow = val
+ if val > vhigh:
+ vhigh = val
+ vclose = val
+
+ if not got_first:
+ out[b, 0] = NA
+ out[b, 1] = NA
+ out[b, 2] = NA
+ out[b, 3] = NA
+ else:
+ out[b, 0] = vopen
+ out[b, 1] = vhigh
+ out[b, 2] = vlow
+ out[b, 3] = vclose
+
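The OHLC kernel tracks four running values per bin behind a `got_first` flag, so leading NaNs cannot poison the open. For a single bin the bookkeeping reduces to this sketch (illustrative name, not part of the patch):

```python
import math

def ohlc_sketch(values):
    # Open/high/low/close for one bin, skipping NaNs, mirroring the
    # got_first bookkeeping in the kernel above.
    o = h = l = c = math.nan
    got_first = False
    for val in values:
        if val == val:                  # skip NaN
            if not got_first:
                got_first = True
                o = h = l = c = val     # first valid value seeds all four
            else:
                h = max(h, val)
                l = min(l, val)
                c = val
    return o, h, l, c
```

An all-NaN (or empty) bin returns NaN for all four columns, matching the `if not got_first` branch that writes `NA` into the output row.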
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def left_join_indexer_unique_float64(ndarray[float64_t] left,
+ ndarray[float64_t] right):
+ cdef:
+ Py_ssize_t i, j, nleft, nright
+ ndarray[int64_t] indexer
+ float64_t lval, rval
+
+ i = 0
+ j = 0
+ nleft = len(left)
+ nright = len(right)
+
+ indexer = np.empty(nleft, dtype=np.int64)
+ while True:
+ if i == nleft:
+ break
+
+ if j == nright:
+ indexer[i] = -1
+ i += 1
+ continue
+
+ rval = right[j]
+
+ while i < nleft - 1 and left[i] == rval:
+ indexer[i] = j
+ i += 1
+
+ if left[i] == right[j]:
+ indexer[i] = j
+ i += 1
+ while i < nleft - 1 and left[i] == rval:
+ indexer[i] = j
+ i += 1
+ j += 1
+ elif left[i] > rval:
+ indexer[i] = -1
+ j += 1
+ else:
+ indexer[i] = -1
+ i += 1
+ return indexer
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def left_join_indexer_unique_float32(ndarray[float32_t] left,
+ ndarray[float32_t] right):
+ cdef:
+ Py_ssize_t i, j, nleft, nright
+ ndarray[int64_t] indexer
+ float32_t lval, rval
+
+ i = 0
+ j = 0
+ nleft = len(left)
+ nright = len(right)
+
+ indexer = np.empty(nleft, dtype=np.int64)
+ while True:
+ if i == nleft:
+ break
+
+ if j == nright:
+ indexer[i] = -1
+ i += 1
+ continue
+
+ rval = right[j]
+
+ while i < nleft - 1 and left[i] == rval:
+ indexer[i] = j
+ i += 1
+
+ if left[i] == right[j]:
+ indexer[i] = j
+ i += 1
+ while i < nleft - 1 and left[i] == rval:
+ indexer[i] = j
+ i += 1
+ j += 1
+ elif left[i] > rval:
+ indexer[i] = -1
+ j += 1
+ else:
+ indexer[i] = -1
+ i += 1
+ return indexer
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def left_join_indexer_unique_object(ndarray[object] left,
+ ndarray[object] right):
+ cdef:
+ Py_ssize_t i, j, nleft, nright
+ ndarray[int64_t] indexer
+ object lval, rval
+
+ i = 0
+ j = 0
+ nleft = len(left)
+ nright = len(right)
+
+ indexer = np.empty(nleft, dtype=np.int64)
+ while True:
+ if i == nleft:
+ break
+
+ if j == nright:
+ indexer[i] = -1
+ i += 1
+ continue
+
+ rval = right[j]
+
+ while i < nleft - 1 and left[i] == rval:
+ indexer[i] = j
+ i += 1
+
+ if left[i] == right[j]:
+ indexer[i] = j
+ i += 1
+ while i < nleft - 1 and left[i] == rval:
+ indexer[i] = j
+ i += 1
+ j += 1
+ elif left[i] > rval:
+ indexer[i] = -1
+ j += 1
+ else:
+ indexer[i] = -1
+ i += 1
+ return indexer
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def left_join_indexer_unique_int8(ndarray[int8_t] left,
+ ndarray[int8_t] right):
+ cdef:
+ Py_ssize_t i, j, nleft, nright
+ ndarray[int64_t] indexer
+ int8_t lval, rval
+
+ i = 0
+ j = 0
+ nleft = len(left)
+ nright = len(right)
+
+ indexer = np.empty(nleft, dtype=np.int64)
+ while True:
+ if i == nleft:
+ break
+
+ if j == nright:
+ indexer[i] = -1
+ i += 1
+ continue
+
+ rval = right[j]
+
+ while i < nleft - 1 and left[i] == rval:
+ indexer[i] = j
+ i += 1
+
+ if left[i] == right[j]:
+ indexer[i] = j
+ i += 1
+ while i < nleft - 1 and left[i] == rval:
+ indexer[i] = j
+ i += 1
+ j += 1
+ elif left[i] > rval:
+ indexer[i] = -1
+ j += 1
+ else:
+ indexer[i] = -1
+ i += 1
+ return indexer
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def left_join_indexer_unique_int16(ndarray[int16_t] left,
+ ndarray[int16_t] right):
+ cdef:
+ Py_ssize_t i, j, nleft, nright
+ ndarray[int64_t] indexer
+ int16_t lval, rval
+
+ i = 0
+ j = 0
+ nleft = len(left)
+ nright = len(right)
+
+ indexer = np.empty(nleft, dtype=np.int64)
+ while True:
+ if i == nleft:
+ break
+
+ if j == nright:
+ indexer[i] = -1
+ i += 1
+ continue
+
+ rval = right[j]
+
+ while i < nleft - 1 and left[i] == rval:
+ indexer[i] = j
+ i += 1
+
+ if left[i] == right[j]:
+ indexer[i] = j
+ i += 1
+ while i < nleft - 1 and left[i] == rval:
+ indexer[i] = j
+ i += 1
+ j += 1
+ elif left[i] > rval:
+ indexer[i] = -1
+ j += 1
+ else:
+ indexer[i] = -1
+ i += 1
+ return indexer
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def left_join_indexer_unique_int32(ndarray[int32_t] left,
+ ndarray[int32_t] right):
+ cdef:
+ Py_ssize_t i, j, nleft, nright
+ ndarray[int64_t] indexer
+ int32_t lval, rval
+
+ i = 0
+ j = 0
+ nleft = len(left)
+ nright = len(right)
+
+ indexer = np.empty(nleft, dtype=np.int64)
+ while True:
+ if i == nleft:
+ break
+
+ if j == nright:
+ indexer[i] = -1
+ i += 1
+ continue
+
+ rval = right[j]
+
+ while i < nleft - 1 and left[i] == rval:
+ indexer[i] = j
+ i += 1
+
+ if left[i] == right[j]:
+ indexer[i] = j
+ i += 1
+ while i < nleft - 1 and left[i] == rval:
+ indexer[i] = j
+ i += 1
+ j += 1
+ elif left[i] > rval:
+ indexer[i] = -1
+ j += 1
+ else:
+ indexer[i] = -1
+ i += 1
+ return indexer
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def left_join_indexer_unique_int64(ndarray[int64_t] left,
+ ndarray[int64_t] right):
+ cdef:
+ Py_ssize_t i, j, nleft, nright
+ ndarray[int64_t] indexer
+ int64_t lval, rval
+
+ i = 0
+ j = 0
+ nleft = len(left)
+ nright = len(right)
+
+ indexer = np.empty(nleft, dtype=np.int64)
+ while True:
+ if i == nleft:
+ break
+
+ if j == nright:
+ indexer[i] = -1
+ i += 1
+ continue
+
+ rval = right[j]
+
+ while i < nleft - 1 and left[i] == rval:
+ indexer[i] = j
+ i += 1
+
+ if left[i] == right[j]:
+ indexer[i] = j
+ i += 1
+ while i < nleft - 1 and left[i] == rval:
+ indexer[i] = j
+ i += 1
+ j += 1
+ elif left[i] > rval:
+ indexer[i] = -1
+ j += 1
+ else:
+ indexer[i] = -1
+ i += 1
+ return indexer
+
+
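The `left_join_indexer_unique_*` family produces, for each element of a monotonic `left`, the position of its match in a monotonic, duplicate-free `right`, or -1 when there is none. A simplified pure-Python rendering of that contract (not the exact control flow of the generated code):

```python
def left_join_indexer_unique_sketch(left, right):
    # Single forward scan over both monotonic inputs; repeated left
    # values all map to the same right position.
    indexer = []
    j = 0
    for lval in left:
        while j < len(right) and right[j] < lval:
            j += 1
        if j < len(right) and right[j] == lval:
            indexer.append(j)
        else:
            indexer.append(-1)
    return indexer
```

Because `right` never rewinds, the scan is O(len(left) + len(right)), which is the point of requiring both inputs to be monotonic.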
+
+def left_join_indexer_float64(ndarray[float64_t] left,
+ ndarray[float64_t] right):
+ '''
+ Two-pass algorithm for monotonic indexes. Handles many-to-one merges
+ '''
+ cdef:
+ Py_ssize_t i, j, k, nright, nleft, count
+ float64_t lval, rval
+ ndarray[int64_t] lindexer, rindexer
+ ndarray[float64_t] result
+
+ nleft = len(left)
+ nright = len(right)
+
+ i = 0
+ j = 0
+ count = 0
+ if nleft > 0:
+ while i < nleft:
+ if j == nright:
+ count += nleft - i
+ break
- if False and _checknan(fill_value):
- for i in range(n):
- idx = idx0[i]
- if idx == -1:
- for j in range(k):
- raise ValueError('No NA values allowed')
+ lval = left[i]
+ rval = right[j]
+
+ if lval == rval:
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
+ else:
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ count += 1
+ i += 1
else:
- for j in range(k):
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
+ j += 1
+
+ # do it again now that result size is known
+
+ lindexer = np.empty(count, dtype=np.int64)
+ rindexer = np.empty(count, dtype=np.int64)
+ result = np.empty(count, dtype=np.float64)
+
+ i = 0
+ j = 0
+ count = 0
+ if nleft > 0:
+ while i < nleft:
+ if j == nright:
+ while i < nleft:
+ lindexer[count] = i
+ rindexer[count] = -1
+ result[count] = left[i]
+ i += 1
+ count += 1
+ break
+
+ lval = left[i]
+ rval = right[j]
+
+ if lval == rval:
+ lindexer[count] = i
+ rindexer[count] = j
+ result[count] = lval
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
+ else:
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ lindexer[count] = i
+ rindexer[count] = -1
+ result[count] = left[i]
+ count += 1
+ i += 1
+ else:
+ j += 1
+
+ return result, lindexer, rindexer
+
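`left_join_indexer_float64` runs the same monotonic merge scan twice: the first pass only counts output rows so `lindexer`, `rindexer`, and `result` can be preallocated at exact size; the second pass fills them. A single-pass Python sketch of the output it computes, simplified to pair each left element with at most one match (illustrative, list-based, so no counting pass is needed):

```python
def left_join_indexer_sketch(left, right):
    # Emit every left value with the position of its match in the
    # monotonic right array, or -1. The Cython kernel scans twice only
    # because ndarrays must be sized up front.
    result, lindexer, rindexer = [], [], []
    j = 0
    for i, lval in enumerate(left):
        while j < len(right) and right[j] < lval:
            j += 1
        result.append(lval)
        lindexer.append(i)
        rindexer.append(j if j < len(right) and right[j] == lval else -1)
    return result, lindexer, rindexer
```

This is a sketch of the simple many-to-one case; the generated kernel additionally steps through duplicated values on the right so that repeated keys produce one output row per matching pair.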
+
+def left_join_indexer_float32(ndarray[float32_t] left,
+ ndarray[float32_t] right):
+ '''
+ Two-pass algorithm for monotonic indexes. Handles many-to-one merges
+ '''
+ cdef:
+ Py_ssize_t i, j, k, nright, nleft, count
+ float32_t lval, rval
+ ndarray[int64_t] lindexer, rindexer
+ ndarray[float32_t] result
+
+ nleft = len(left)
+ nright = len(right)
+
+ i = 0
+ j = 0
+ count = 0
+ if nleft > 0:
+ while i < nleft:
+ if j == nright:
+ count += nleft - i
+ break
+
+ lval = left[i]
+ rval = right[j]
+
+ if lval == rval:
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
+ else:
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ count += 1
+ i += 1
+ else:
+ j += 1
+
+ # do it again now that result size is known
+
+ lindexer = np.empty(count, dtype=np.int64)
+ rindexer = np.empty(count, dtype=np.int64)
+ result = np.empty(count, dtype=np.float32)
+
+ i = 0
+ j = 0
+ count = 0
+ if nleft > 0:
+ while i < nleft:
+ if j == nright:
+ while i < nleft:
+ lindexer[count] = i
+ rindexer[count] = -1
+ result[count] = left[i]
+ i += 1
+ count += 1
+ break
+
+ lval = left[i]
+ rval = right[j]
+
+ if lval == rval:
+ lindexer[count] = i
+ rindexer[count] = j
+ result[count] = lval
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
+ else:
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ lindexer[count] = i
+ rindexer[count] = -1
+ result[count] = left[i]
+ count += 1
+ i += 1
+ else:
+ j += 1
+
+ return result, lindexer, rindexer
+
+
+def left_join_indexer_object(ndarray[object] left,
+ ndarray[object] right):
+ '''
+ Two-pass algorithm for monotonic indexes. Handles many-to-one merges
+ '''
+ cdef:
+ Py_ssize_t i, j, k, nright, nleft, count
+ object lval, rval
+ ndarray[int64_t] lindexer, rindexer
+ ndarray[object] result
+
+ nleft = len(left)
+ nright = len(right)
+
+ i = 0
+ j = 0
+ count = 0
+ if nleft > 0:
+ while i < nleft:
+ if j == nright:
+ count += nleft - i
+ break
+
+ lval = left[i]
+ rval = right[j]
+
+ if lval == rval:
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
+ else:
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ count += 1
+ i += 1
+ else:
+ j += 1
+
+ # do it again now that result size is known
+
+ lindexer = np.empty(count, dtype=np.int64)
+ rindexer = np.empty(count, dtype=np.int64)
+ result = np.empty(count, dtype=object)
+
+ i = 0
+ j = 0
+ count = 0
+ if nleft > 0:
+ while i < nleft:
+ if j == nright:
+ while i < nleft:
+ lindexer[count] = i
+ rindexer[count] = -1
+ result[count] = left[i]
+ i += 1
+ count += 1
+ break
+
+ lval = left[i]
+ rval = right[j]
+
+ if lval == rval:
+ lindexer[count] = i
+ rindexer[count] = j
+ result[count] = lval
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i in range(n):
- idx = idx0[i]
- if idx == -1:
- for j in range(k):
- outbuf[i, j] = fv
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ lindexer[count] = i
+ rindexer[count] = -1
+ result[count] = left[i]
+ count += 1
+ i += 1
else:
- for j in range(k):
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ j += 1
-@cython.wraparound(False)
-@cython.boundscheck(False)
-def take_2d_multi_object(ndarray[object, ndim=2] values,
- ndarray[int64_t] idx0,
- ndarray[int64_t] idx1,
- out=None, fill_value=np.nan):
+ return result, lindexer, rindexer
+
+
+def left_join_indexer_int8(ndarray[int8_t] left,
+ ndarray[int8_t] right):
+ '''
+ Two-pass algorithm for monotonic indexes. Handles many-to-one merges
+ '''
cdef:
- Py_ssize_t i, j, k, n, idx
- ndarray[object, ndim=2] outbuf
- object fv
+ Py_ssize_t i, j, k, nright, nleft, count
+ int8_t lval, rval
+ ndarray[int64_t] lindexer, rindexer
+ ndarray[int8_t] result
- n = len(idx0)
- k = len(idx1)
+ nleft = len(left)
+ nright = len(right)
- if out is None:
- outbuf = np.empty((n, k), dtype=values.dtype)
- else:
- outbuf = out
+ i = 0
+ j = 0
+ count = 0
+ if nleft > 0:
+ while i < nleft:
+ if j == nright:
+ count += nleft - i
+ break
+ lval = left[i]
+ rval = right[j]
- if False and _checknan(fill_value):
- for i in range(n):
- idx = idx0[i]
- if idx == -1:
- for j in range(k):
- raise ValueError('No NA values allowed')
- else:
- for j in range(k):
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
+ if lval == rval:
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i in range(n):
- idx = idx0[i]
- if idx == -1:
- for j in range(k):
- outbuf[i, j] = fv
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ count += 1
+ i += 1
else:
- for j in range(k):
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ j += 1
-@cython.wraparound(False)
-@cython.boundscheck(False)
-def take_2d_multi_int32(ndarray[int32_t, ndim=2] values,
- ndarray[int64_t] idx0,
- ndarray[int64_t] idx1,
- out=None, fill_value=np.nan):
- cdef:
- Py_ssize_t i, j, k, n, idx
- ndarray[int32_t, ndim=2] outbuf
- int32_t fv
+ # do it again now that result size is known
- n = len(idx0)
- k = len(idx1)
+ lindexer = np.empty(count, dtype=np.int64)
+ rindexer = np.empty(count, dtype=np.int64)
+ result = np.empty(count, dtype=np.int8)
- if out is None:
- outbuf = np.empty((n, k), dtype=values.dtype)
- else:
- outbuf = out
+ i = 0
+ j = 0
+ count = 0
+ if nleft > 0:
+ while i < nleft:
+ if j == nright:
+ while i < nleft:
+ lindexer[count] = i
+ rindexer[count] = -1
+ result[count] = left[i]
+ i += 1
+ count += 1
+ break
+ lval = left[i]
+ rval = right[j]
- if True and _checknan(fill_value):
- for i in range(n):
- idx = idx0[i]
- if idx == -1:
- for j in range(k):
- raise ValueError('No NA values allowed')
- else:
- for j in range(k):
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
+ if lval == rval:
+ lindexer[count] = i
+ rindexer[count] = j
+ result[count] = lval
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i in range(n):
- idx = idx0[i]
- if idx == -1:
- for j in range(k):
- outbuf[i, j] = fv
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ lindexer[count] = i
+ rindexer[count] = -1
+ result[count] = left[i]
+ count += 1
+ i += 1
else:
- for j in range(k):
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ j += 1
-@cython.wraparound(False)
-@cython.boundscheck(False)
-def take_2d_multi_int64(ndarray[int64_t, ndim=2] values,
- ndarray[int64_t] idx0,
- ndarray[int64_t] idx1,
- out=None, fill_value=np.nan):
+ return result, lindexer, rindexer
+
+
+def left_join_indexer_int16(ndarray[int16_t] left,
+ ndarray[int16_t] right):
+ '''
+ Two-pass algorithm for monotonic indexes. Handles many-to-one merges
+ '''
cdef:
- Py_ssize_t i, j, k, n, idx
- ndarray[int64_t, ndim=2] outbuf
- int64_t fv
+ Py_ssize_t i, j, k, nright, nleft, count
+ int16_t lval, rval
+ ndarray[int64_t] lindexer, rindexer
+ ndarray[int16_t] result
+
+ nleft = len(left)
+ nright = len(right)
+
+ i = 0
+ j = 0
+ count = 0
+ if nleft > 0:
+ while i < nleft:
+ if j == nright:
+ count += nleft - i
+ break
+
+ lval = left[i]
+ rval = right[j]
+
+ if lval == rval:
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
+ else:
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ count += 1
+ i += 1
+ else:
+ j += 1
- n = len(idx0)
- k = len(idx1)
+ # do it again now that result size is known
- if out is None:
- outbuf = np.empty((n, k), dtype=values.dtype)
- else:
- outbuf = out
+ lindexer = np.empty(count, dtype=np.int64)
+ rindexer = np.empty(count, dtype=np.int64)
+ result = np.empty(count, dtype=np.int16)
+ i = 0
+ j = 0
+ count = 0
+ if nleft > 0:
+ while i < nleft:
+ if j == nright:
+ while i < nleft:
+ lindexer[count] = i
+ rindexer[count] = -1
+ result[count] = left[i]
+ i += 1
+ count += 1
+ break
- if True and _checknan(fill_value):
- for i in range(n):
- idx = idx0[i]
- if idx == -1:
- for j in range(k):
- raise ValueError('No NA values allowed')
- else:
- for j in range(k):
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
+ lval = left[i]
+ rval = right[j]
+
+ if lval == rval:
+ lindexer[count] = i
+ rindexer[count] = j
+ result[count] = lval
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i in range(n):
- idx = idx0[i]
- if idx == -1:
- for j in range(k):
- outbuf[i, j] = fv
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ lindexer[count] = i
+ rindexer[count] = -1
+ result[count] = left[i]
+ count += 1
+ i += 1
else:
- for j in range(k):
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ j += 1
-@cython.wraparound(False)
-@cython.boundscheck(False)
-def take_2d_multi_bool(ndarray[uint8_t, ndim=2] values,
- ndarray[int64_t] idx0,
- ndarray[int64_t] idx1,
- out=None, fill_value=np.nan):
+ return result, lindexer, rindexer
+
+
+def left_join_indexer_int32(ndarray[int32_t] left,
+ ndarray[int32_t] right):
+ '''
+ Two-pass algorithm for monotonic indexes. Handles many-to-one merges
+ '''
cdef:
- Py_ssize_t i, j, k, n, idx
- ndarray[uint8_t, ndim=2] outbuf
- uint8_t fv
+ Py_ssize_t i, j, k, nright, nleft, count
+ int32_t lval, rval
+ ndarray[int64_t] lindexer, rindexer
+ ndarray[int32_t] result
- n = len(idx0)
- k = len(idx1)
+ nleft = len(left)
+ nright = len(right)
- if out is None:
- outbuf = np.empty((n, k), dtype=values.dtype)
- else:
- outbuf = out
+ i = 0
+ j = 0
+ count = 0
+ if nleft > 0:
+ while i < nleft:
+ if j == nright:
+ count += nleft - i
+ break
+ lval = left[i]
+ rval = right[j]
- if True and _checknan(fill_value):
- for i in range(n):
- idx = idx0[i]
- if idx == -1:
- for j in range(k):
- raise ValueError('No NA values allowed')
- else:
- for j in range(k):
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
+ if lval == rval:
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i in range(n):
- idx = idx0[i]
- if idx == -1:
- for j in range(k):
- outbuf[i, j] = fv
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ count += 1
+ i += 1
else:
- for j in range(k):
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ j += 1
+ # do it again now that result size is known
-@cython.wraparound(False)
-@cython.boundscheck(False)
-def left_join_indexer_unique_float64(ndarray[float64_t] left,
- ndarray[float64_t] right):
- cdef:
- Py_ssize_t i, j, nleft, nright
- ndarray[int64_t] indexer
- float64_t lval, rval
+ lindexer = np.empty(count, dtype=np.int64)
+ rindexer = np.empty(count, dtype=np.int64)
+ result = np.empty(count, dtype=np.int32)
i = 0
j = 0
- nleft = len(left)
- nright = len(right)
-
- indexer = np.empty(nleft, dtype=np.int64)
- while True:
- if i == nleft:
- break
+ count = 0
+ if nleft > 0:
+ while i < nleft:
+ if j == nright:
+ while i < nleft:
+ lindexer[count] = i
+ rindexer[count] = -1
+ result[count] = left[i]
+ i += 1
+ count += 1
+ break
- if j == nright:
- indexer[i] = -1
- i += 1
- continue
+ lval = left[i]
+ rval = right[j]
- rval = right[j]
+ if lval == rval:
+ lindexer[count] = i
+ rindexer[count] = j
+ result[count] = lval
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
+ else:
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ lindexer[count] = i
+ rindexer[count] = -1
+ result[count] = left[i]
+ count += 1
+ i += 1
+ else:
+ j += 1
- while i < nleft - 1 and left[i] == rval:
- indexer[i] = j
- i += 1
+ return result, lindexer, rindexer
- if left[i] == right[j]:
- indexer[i] = j
- i += 1
- while i < nleft - 1 and left[i] == rval:
- indexer[i] = j
- i += 1
- j += 1
- elif left[i] > rval:
- indexer[i] = -1
- j += 1
- else:
- indexer[i] = -1
- i += 1
- return indexer
-@cython.wraparound(False)
-@cython.boundscheck(False)
-def left_join_indexer_unique_object(ndarray[object] left,
- ndarray[object] right):
+def left_join_indexer_int64(ndarray[int64_t] left,
+ ndarray[int64_t] right):
+ '''
+ Two-pass algorithm for monotonic indexes. Handles many-to-one merges
+ '''
cdef:
- Py_ssize_t i, j, nleft, nright
- ndarray[int64_t] indexer
- object lval, rval
-
- i = 0
- j = 0
+ Py_ssize_t i, j, k, nright, nleft, count
+ int64_t lval, rval
+ ndarray[int64_t] lindexer, rindexer
+ ndarray[int64_t] result
+
nleft = len(left)
nright = len(right)
- indexer = np.empty(nleft, dtype=np.int64)
- while True:
- if i == nleft:
- break
-
- if j == nright:
- indexer[i] = -1
- i += 1
- continue
-
- rval = right[j]
+ i = 0
+ j = 0
+ count = 0
+ if nleft > 0:
+ while i < nleft:
+ if j == nright:
+ count += nleft - i
+ break
- while i < nleft - 1 and left[i] == rval:
- indexer[i] = j
- i += 1
+ lval = left[i]
+ rval = right[j]
- if left[i] == right[j]:
- indexer[i] = j
- i += 1
- while i < nleft - 1 and left[i] == rval:
- indexer[i] = j
+ if lval == rval:
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
+ else:
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ count += 1
i += 1
- j += 1
- elif left[i] > rval:
- indexer[i] = -1
- j += 1
- else:
- indexer[i] = -1
- i += 1
- return indexer
+ else:
+ j += 1
-@cython.wraparound(False)
-@cython.boundscheck(False)
-def left_join_indexer_unique_int32(ndarray[int32_t] left,
- ndarray[int32_t] right):
- cdef:
- Py_ssize_t i, j, nleft, nright
- ndarray[int64_t] indexer
- int32_t lval, rval
+ # do it again now that result size is known
+
+ lindexer = np.empty(count, dtype=np.int64)
+ rindexer = np.empty(count, dtype=np.int64)
+ result = np.empty(count, dtype=np.int64)
i = 0
j = 0
- nleft = len(left)
- nright = len(right)
-
- indexer = np.empty(nleft, dtype=np.int64)
- while True:
- if i == nleft:
- break
+ count = 0
+ if nleft > 0:
+ while i < nleft:
+ if j == nright:
+ while i < nleft:
+ lindexer[count] = i
+ rindexer[count] = -1
+ result[count] = left[i]
+ i += 1
+ count += 1
+ break
- if j == nright:
- indexer[i] = -1
- i += 1
- continue
+ lval = left[i]
+ rval = right[j]
- rval = right[j]
+ if lval == rval:
+ lindexer[count] = i
+ rindexer[count] = j
+ result[count] = lval
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
+ else:
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ lindexer[count] = i
+ rindexer[count] = -1
+ result[count] = left[i]
+ count += 1
+ i += 1
+ else:
+ j += 1
- while i < nleft - 1 and left[i] == rval:
- indexer[i] = j
- i += 1
+ return result, lindexer, rindexer
- if left[i] == right[j]:
- indexer[i] = j
- i += 1
- while i < nleft - 1 and left[i] == rval:
- indexer[i] = j
- i += 1
- j += 1
- elif left[i] > rval:
- indexer[i] = -1
- j += 1
- else:
- indexer[i] = -1
- i += 1
- return indexer
@cython.wraparound(False)
@cython.boundscheck(False)
-def left_join_indexer_unique_int64(ndarray[int64_t] left,
- ndarray[int64_t] right):
+def outer_join_indexer_float64(ndarray[float64_t] left,
+ ndarray[float64_t] right):
cdef:
- Py_ssize_t i, j, nleft, nright
- ndarray[int64_t] indexer
- int64_t lval, rval
+ Py_ssize_t i, j, nright, nleft, count
+ float64_t lval, rval
+ ndarray[int64_t] lindexer, rindexer
+ ndarray[float64_t] result
- i = 0
- j = 0
nleft = len(left)
nright = len(right)
- indexer = np.empty(nleft, dtype=np.int64)
- while True:
- if i == nleft:
- break
+ i = 0
+ j = 0
+ count = 0
+ if nleft == 0:
+ count = nright
+ elif nright == 0:
+ count = nleft
+ else:
+ while True:
+ if i == nleft:
+ count += nright - j
+ break
+ if j == nright:
+ count += nleft - i
+ break
- if j == nright:
- indexer[i] = -1
- i += 1
- continue
+ lval = left[i]
+ rval = right[j]
+ if lval == rval:
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
+ else:
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ count += 1
+ i += 1
+ else:
+ count += 1
+ j += 1
- rval = right[j]
+ lindexer = np.empty(count, dtype=np.int64)
+ rindexer = np.empty(count, dtype=np.int64)
+ result = np.empty(count, dtype=np.float64)
- while i < nleft - 1 and left[i] == rval:
- indexer[i] = j
- i += 1
+ # do it again, but populate the indexers / result
- if left[i] == right[j]:
- indexer[i] = j
- i += 1
- while i < nleft - 1 and left[i] == rval:
- indexer[i] = j
- i += 1
- j += 1
- elif left[i] > rval:
- indexer[i] = -1
- j += 1
- else:
- indexer[i] = -1
- i += 1
- return indexer
+ i = 0
+ j = 0
+ count = 0
+ if nleft == 0:
+ for j in range(nright):
+ lindexer[j] = -1
+ rindexer[j] = j
+ result[j] = right[j]
+ elif nright == 0:
+ for i in range(nleft):
+ lindexer[i] = i
+ rindexer[i] = -1
+ result[i] = left[i]
+ else:
+ while True:
+ if i == nleft:
+ while j < nright:
+ lindexer[count] = -1
+ rindexer[count] = j
+ result[count] = right[j]
+ count += 1
+ j += 1
+ break
+ if j == nright:
+ while i < nleft:
+ lindexer[count] = i
+ rindexer[count] = -1
+ result[count] = left[i]
+ count += 1
+ i += 1
+ break
+
+ lval = left[i]
+ rval = right[j]
+ if lval == rval:
+ lindexer[count] = i
+ rindexer[count] = j
+ result[count] = lval
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
+ else:
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ lindexer[count] = i
+ rindexer[count] = -1
+ result[count] = lval
+ count += 1
+ i += 1
+ else:
+ lindexer[count] = -1
+ rindexer[count] = j
+ result[count] = rval
+ count += 1
+ j += 1
+ return result, lindexer, rindexer
-def left_join_indexer_float64(ndarray[float64_t] left,
- ndarray[float64_t] right):
- '''
- Two-pass algorithm for monotonic indexes. Handles many-to-one merges
- '''
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def outer_join_indexer_float32(ndarray[float32_t] left,
+ ndarray[float32_t] right):
cdef:
- Py_ssize_t i, j, k, nright, nleft, count
- float64_t lval, rval
+ Py_ssize_t i, j, nright, nleft, count
+ float32_t lval, rval
ndarray[int64_t] lindexer, rindexer
- ndarray[float64_t] result
+ ndarray[float32_t] result
nleft = len(left)
nright = len(right)
@@ -2725,15 +7425,21 @@ def left_join_indexer_float64(ndarray[float64_t] left,
i = 0
j = 0
count = 0
- if nleft > 0:
- while i < nleft:
+ if nleft == 0:
+ count = nright
+ elif nright == 0:
+ count = nleft
+ else:
+ while True:
+ if i == nleft:
+ count += nright - j
+ break
if j == nright:
count += nleft - i
break
lval = left[i]
rval = right[j]
-
if lval == rval:
count += 1
if i < nleft - 1:
@@ -2754,26 +7460,45 @@ def left_join_indexer_float64(ndarray[float64_t] left,
count += 1
i += 1
else:
+ count += 1
j += 1
- # do it again now that result size is known
-
lindexer = np.empty(count, dtype=np.int64)
rindexer = np.empty(count, dtype=np.int64)
- result = np.empty(count, dtype=np.float64)
+ result = np.empty(count, dtype=np.float32)
+
+ # do it again, but populate the indexers / result
i = 0
j = 0
count = 0
- if nleft > 0:
- while i < nleft:
+ if nleft == 0:
+ for j in range(nright):
+ lindexer[j] = -1
+ rindexer[j] = j
+ result[j] = right[j]
+ elif nright == 0:
+ for i in range(nleft):
+ lindexer[i] = i
+ rindexer[i] = -1
+ result[i] = left[i]
+ else:
+ while True:
+ if i == nleft:
+ while j < nright:
+ lindexer[count] = -1
+ rindexer[count] = j
+ result[count] = right[j]
+ count += 1
+ j += 1
+ break
if j == nright:
while i < nleft:
lindexer[count] = i
rindexer[count] = -1
result[count] = left[i]
- i += 1
count += 1
+ i += 1
break
lval = left[i]
@@ -2801,22 +7526,24 @@ def left_join_indexer_float64(ndarray[float64_t] left,
elif lval < rval:
lindexer[count] = i
rindexer[count] = -1
- result[count] = left[i]
+ result[count] = lval
count += 1
i += 1
else:
+ lindexer[count] = -1
+ rindexer[count] = j
+ result[count] = rval
+ count += 1
j += 1
return result, lindexer, rindexer
-
-def left_join_indexer_object(ndarray[object] left,
- ndarray[object] right):
- '''
- Two-pass algorithm for monotonic indexes. Handles many-to-one merges
- '''
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def outer_join_indexer_object(ndarray[object] left,
+ ndarray[object] right):
cdef:
- Py_ssize_t i, j, k, nright, nleft, count
+ Py_ssize_t i, j, nright, nleft, count
object lval, rval
ndarray[int64_t] lindexer, rindexer
ndarray[object] result
@@ -2827,15 +7554,21 @@ def left_join_indexer_object(ndarray[object] left,
i = 0
j = 0
count = 0
- if nleft > 0:
- while i < nleft:
+ if nleft == 0:
+ count = nright
+ elif nright == 0:
+ count = nleft
+ else:
+ while True:
+ if i == nleft:
+ count += nright - j
+ break
if j == nright:
count += nleft - i
break
lval = left[i]
rval = right[j]
-
if lval == rval:
count += 1
if i < nleft - 1:
@@ -2856,26 +7589,45 @@ def left_join_indexer_object(ndarray[object] left,
count += 1
i += 1
else:
+ count += 1
j += 1
- # do it again now that result size is known
-
lindexer = np.empty(count, dtype=np.int64)
rindexer = np.empty(count, dtype=np.int64)
result = np.empty(count, dtype=object)
+ # do it again, but populate the indexers / result
+
i = 0
j = 0
count = 0
- if nleft > 0:
- while i < nleft:
+ if nleft == 0:
+ for j in range(nright):
+ lindexer[j] = -1
+ rindexer[j] = j
+ result[j] = right[j]
+ elif nright == 0:
+ for i in range(nleft):
+ lindexer[i] = i
+ rindexer[i] = -1
+ result[i] = left[i]
+ else:
+ while True:
+ if i == nleft:
+ while j < nright:
+ lindexer[count] = -1
+ rindexer[count] = j
+ result[count] = right[j]
+ count += 1
+ j += 1
+ break
if j == nright:
while i < nleft:
lindexer[count] = i
rindexer[count] = -1
result[count] = left[i]
- i += 1
count += 1
+ i += 1
break
lval = left[i]
@@ -2903,25 +7655,27 @@ def left_join_indexer_object(ndarray[object] left,
elif lval < rval:
lindexer[count] = i
rindexer[count] = -1
- result[count] = left[i]
+ result[count] = lval
count += 1
i += 1
else:
+ lindexer[count] = -1
+ rindexer[count] = j
+ result[count] = rval
+ count += 1
j += 1
return result, lindexer, rindexer
-
-def left_join_indexer_int32(ndarray[int32_t] left,
- ndarray[int32_t] right):
- '''
- Two-pass algorithm for monotonic indexes. Handles many-to-one merges
- '''
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def outer_join_indexer_int8(ndarray[int8_t] left,
+ ndarray[int8_t] right):
cdef:
- Py_ssize_t i, j, k, nright, nleft, count
- int32_t lval, rval
+ Py_ssize_t i, j, nright, nleft, count
+ int8_t lval, rval
ndarray[int64_t] lindexer, rindexer
- ndarray[int32_t] result
+ ndarray[int8_t] result
nleft = len(left)
nright = len(right)
@@ -2929,15 +7683,21 @@ def left_join_indexer_int32(ndarray[int32_t] left,
i = 0
j = 0
count = 0
- if nleft > 0:
- while i < nleft:
+ if nleft == 0:
+ count = nright
+ elif nright == 0:
+ count = nleft
+ else:
+ while True:
+ if i == nleft:
+ count += nright - j
+ break
if j == nright:
count += nleft - i
break
lval = left[i]
rval = right[j]
-
if lval == rval:
count += 1
if i < nleft - 1:
@@ -2958,26 +7718,45 @@ def left_join_indexer_int32(ndarray[int32_t] left,
count += 1
i += 1
else:
+ count += 1
j += 1
- # do it again now that result size is known
-
lindexer = np.empty(count, dtype=np.int64)
rindexer = np.empty(count, dtype=np.int64)
- result = np.empty(count, dtype=np.int32)
+ result = np.empty(count, dtype=np.int8)
+
+ # do it again, but populate the indexers / result
i = 0
j = 0
count = 0
- if nleft > 0:
- while i < nleft:
+ if nleft == 0:
+ for j in range(nright):
+ lindexer[j] = -1
+ rindexer[j] = j
+ result[j] = right[j]
+ elif nright == 0:
+ for i in range(nleft):
+ lindexer[i] = i
+ rindexer[i] = -1
+ result[i] = left[i]
+ else:
+ while True:
+ if i == nleft:
+ while j < nright:
+ lindexer[count] = -1
+ rindexer[count] = j
+ result[count] = right[j]
+ count += 1
+ j += 1
+ break
if j == nright:
while i < nleft:
lindexer[count] = i
rindexer[count] = -1
result[count] = left[i]
- i += 1
count += 1
+ i += 1
break
lval = left[i]
@@ -3005,25 +7784,27 @@ def left_join_indexer_int32(ndarray[int32_t] left,
elif lval < rval:
lindexer[count] = i
rindexer[count] = -1
- result[count] = left[i]
+ result[count] = lval
count += 1
i += 1
else:
+ lindexer[count] = -1
+ rindexer[count] = j
+ result[count] = rval
+ count += 1
j += 1
return result, lindexer, rindexer
-
-def left_join_indexer_int64(ndarray[int64_t] left,
- ndarray[int64_t] right):
- '''
- Two-pass algorithm for monotonic indexes. Handles many-to-one merges
- '''
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def outer_join_indexer_int16(ndarray[int16_t] left,
+ ndarray[int16_t] right):
cdef:
- Py_ssize_t i, j, k, nright, nleft, count
- int64_t lval, rval
+ Py_ssize_t i, j, nright, nleft, count
+ int16_t lval, rval
ndarray[int64_t] lindexer, rindexer
- ndarray[int64_t] result
+ ndarray[int16_t] result
nleft = len(left)
nright = len(right)
@@ -3031,15 +7812,21 @@ def left_join_indexer_int64(ndarray[int64_t] left,
i = 0
j = 0
count = 0
- if nleft > 0:
- while i < nleft:
+ if nleft == 0:
+ count = nright
+ elif nright == 0:
+ count = nleft
+ else:
+ while True:
+ if i == nleft:
+ count += nright - j
+ break
if j == nright:
count += nleft - i
break
lval = left[i]
rval = right[j]
-
if lval == rval:
count += 1
if i < nleft - 1:
@@ -3060,26 +7847,45 @@ def left_join_indexer_int64(ndarray[int64_t] left,
count += 1
i += 1
else:
+ count += 1
j += 1
- # do it again now that result size is known
-
lindexer = np.empty(count, dtype=np.int64)
rindexer = np.empty(count, dtype=np.int64)
- result = np.empty(count, dtype=np.int64)
+ result = np.empty(count, dtype=np.int16)
+
+ # do it again, but populate the indexers / result
i = 0
j = 0
count = 0
- if nleft > 0:
- while i < nleft:
+ if nleft == 0:
+ for j in range(nright):
+ lindexer[j] = -1
+ rindexer[j] = j
+ result[j] = right[j]
+ elif nright == 0:
+ for i in range(nleft):
+ lindexer[i] = i
+ rindexer[i] = -1
+ result[i] = left[i]
+ else:
+ while True:
+ if i == nleft:
+ while j < nright:
+ lindexer[count] = -1
+ rindexer[count] = j
+ result[count] = right[j]
+ count += 1
+ j += 1
+ break
if j == nright:
while i < nleft:
lindexer[count] = i
rindexer[count] = -1
result[count] = left[i]
- i += 1
count += 1
+ i += 1
break
lval = left[i]
@@ -3107,24 +7913,27 @@ def left_join_indexer_int64(ndarray[int64_t] left,
elif lval < rval:
lindexer[count] = i
rindexer[count] = -1
- result[count] = left[i]
+ result[count] = lval
count += 1
i += 1
else:
+ lindexer[count] = -1
+ rindexer[count] = j
+ result[count] = rval
+ count += 1
j += 1
return result, lindexer, rindexer
-
@cython.wraparound(False)
@cython.boundscheck(False)
-def outer_join_indexer_float64(ndarray[float64_t] left,
- ndarray[float64_t] right):
+def outer_join_indexer_int32(ndarray[int32_t] left,
+ ndarray[int32_t] right):
cdef:
Py_ssize_t i, j, nright, nleft, count
- float64_t lval, rval
+ int32_t lval, rval
ndarray[int64_t] lindexer, rindexer
- ndarray[float64_t] result
+ ndarray[int32_t] result
nleft = len(left)
nright = len(right)
@@ -3172,7 +7981,7 @@ def outer_join_indexer_float64(ndarray[float64_t] left,
lindexer = np.empty(count, dtype=np.int64)
rindexer = np.empty(count, dtype=np.int64)
- result = np.empty(count, dtype=np.float64)
+ result = np.empty(count, dtype=np.int32)
# do it again, but populate the indexers / result
@@ -3247,13 +8056,13 @@ def outer_join_indexer_float64(ndarray[float64_t] left,
@cython.wraparound(False)
@cython.boundscheck(False)
-def outer_join_indexer_object(ndarray[object] left,
- ndarray[object] right):
+def outer_join_indexer_int64(ndarray[int64_t] left,
+ ndarray[int64_t] right):
cdef:
Py_ssize_t i, j, nright, nleft, count
- object lval, rval
+ int64_t lval, rval
ndarray[int64_t] lindexer, rindexer
- ndarray[object] result
+ ndarray[int64_t] result
nleft = len(left)
nright = len(right)
@@ -3301,7 +8110,7 @@ def outer_join_indexer_object(ndarray[object] left,
lindexer = np.empty(count, dtype=np.int64)
rindexer = np.empty(count, dtype=np.int64)
- result = np.empty(count, dtype=object)
+ result = np.empty(count, dtype=np.int64)
# do it again, but populate the indexers / result
@@ -3374,15 +8183,19 @@ def outer_join_indexer_object(ndarray[object] left,
return result, lindexer, rindexer
+
@cython.wraparound(False)
@cython.boundscheck(False)
-def outer_join_indexer_int32(ndarray[int32_t] left,
- ndarray[int32_t] right):
+def inner_join_indexer_float64(ndarray[float64_t] left,
+ ndarray[float64_t] right):
+ '''
+ Two-pass algorithm for monotonic indexes. Handles many-to-one merges
+ '''
cdef:
- Py_ssize_t i, j, nright, nleft, count
- int32_t lval, rval
+ Py_ssize_t i, j, k, nright, nleft, count
+ float64_t lval, rval
ndarray[int64_t] lindexer, rindexer
- ndarray[int32_t] result
+ ndarray[float64_t] result
nleft = len(left)
nright = len(right)
@@ -3390,17 +8203,11 @@ def outer_join_indexer_int32(ndarray[int32_t] left,
i = 0
j = 0
count = 0
- if nleft == 0:
- count = nright
- elif nright == 0:
- count = nleft
- else:
+ if nleft > 0 and nright > 0:
while True:
if i == nleft:
- count += nright - j
break
if j == nright:
- count += nleft - i
break
lval = left[i]
@@ -3422,57 +8229,32 @@ def outer_join_indexer_int32(ndarray[int32_t] left,
# end of the road
break
elif lval < rval:
- count += 1
i += 1
else:
- count += 1
j += 1
+ # do it again now that result size is known
+
lindexer = np.empty(count, dtype=np.int64)
rindexer = np.empty(count, dtype=np.int64)
- result = np.empty(count, dtype=np.int32)
-
- # do it again, but populate the indexers / result
+ result = np.empty(count, dtype=np.float64)
i = 0
j = 0
count = 0
- if nleft == 0:
- for j in range(nright):
- lindexer[j] = -1
- rindexer[j] = j
- result[j] = right[j]
- elif nright == 0:
- for i in range(nright):
- lindexer[i] = i
- rindexer[i] = -1
- result[i] = left[i]
- else:
+ if nleft > 0 and nright > 0:
while True:
if i == nleft:
- while j < nright:
- lindexer[count] = -1
- rindexer[count] = j
- result[count] = right[j]
- count += 1
- j += 1
break
if j == nright:
- while i < nleft:
- lindexer[count] = i
- rindexer[count] = -1
- result[count] = left[i]
- count += 1
- i += 1
break
lval = left[i]
rval = right[j]
-
if lval == rval:
lindexer[count] = i
rindexer[count] = j
- result[count] = lval
+ result[count] = rval
count += 1
if i < nleft - 1:
if j < nright - 1 and right[j + 1] == rval:
@@ -3489,29 +8271,24 @@ def outer_join_indexer_int32(ndarray[int32_t] left,
# end of the road
break
elif lval < rval:
- lindexer[count] = i
- rindexer[count] = -1
- result[count] = lval
- count += 1
i += 1
else:
- lindexer[count] = -1
- rindexer[count] = j
- result[count] = rval
- count += 1
j += 1
return result, lindexer, rindexer
@cython.wraparound(False)
@cython.boundscheck(False)
-def outer_join_indexer_int64(ndarray[int64_t] left,
- ndarray[int64_t] right):
+def inner_join_indexer_float32(ndarray[float32_t] left,
+ ndarray[float32_t] right):
+ '''
+ Two-pass algorithm for monotonic indexes. Handles many-to-one merges
+ '''
cdef:
- Py_ssize_t i, j, nright, nleft, count
- int64_t lval, rval
+ Py_ssize_t i, j, k, nright, nleft, count
+ float32_t lval, rval
ndarray[int64_t] lindexer, rindexer
- ndarray[int64_t] result
+ ndarray[float32_t] result
nleft = len(left)
nright = len(right)
@@ -3519,17 +8296,11 @@ def outer_join_indexer_int64(ndarray[int64_t] left,
i = 0
j = 0
count = 0
- if nleft == 0:
- count = nright
- elif nright == 0:
- count = nleft
- else:
+ if nleft > 0 and nright > 0:
while True:
if i == nleft:
- count += nright - j
break
if j == nright:
- count += nleft - i
break
lval = left[i]
@@ -3551,57 +8322,32 @@ def outer_join_indexer_int64(ndarray[int64_t] left,
# end of the road
break
elif lval < rval:
- count += 1
i += 1
else:
- count += 1
j += 1
+ # do it again now that result size is known
+
lindexer = np.empty(count, dtype=np.int64)
rindexer = np.empty(count, dtype=np.int64)
- result = np.empty(count, dtype=np.int64)
-
- # do it again, but populate the indexers / result
+ result = np.empty(count, dtype=np.float32)
i = 0
j = 0
count = 0
- if nleft == 0:
- for j in range(nright):
- lindexer[j] = -1
- rindexer[j] = j
- result[j] = right[j]
- elif nright == 0:
- for i in range(nright):
- lindexer[i] = i
- rindexer[i] = -1
- result[i] = left[i]
- else:
+ if nleft > 0 and nright > 0:
while True:
if i == nleft:
- while j < nright:
- lindexer[count] = -1
- rindexer[count] = j
- result[count] = right[j]
- count += 1
- j += 1
break
if j == nright:
- while i < nleft:
- lindexer[count] = i
- rindexer[count] = -1
- result[count] = left[i]
- count += 1
- i += 1
break
lval = left[i]
rval = right[j]
-
if lval == rval:
lindexer[count] = i
rindexer[count] = j
- result[count] = lval
+ result[count] = rval
count += 1
if i < nleft - 1:
if j < nright - 1 and right[j + 1] == rval:
@@ -3618,33 +8364,117 @@ def outer_join_indexer_int64(ndarray[int64_t] left,
# end of the road
break
elif lval < rval:
- lindexer[count] = i
- rindexer[count] = -1
- result[count] = lval
+ i += 1
+ else:
+ j += 1
+
+ return result, lindexer, rindexer
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def inner_join_indexer_object(ndarray[object] left,
+ ndarray[object] right):
+ '''
+ Two-pass algorithm for monotonic indexes. Handles many-to-one merges
+ '''
+ cdef:
+ Py_ssize_t i, j, k, nright, nleft, count
+ object lval, rval
+ ndarray[int64_t] lindexer, rindexer
+ ndarray[object] result
+
+ nleft = len(left)
+ nright = len(right)
+
+ i = 0
+ j = 0
+ count = 0
+ if nleft > 0 and nright > 0:
+ while True:
+ if i == nleft:
+ break
+ if j == nright:
+ break
+
+ lval = left[i]
+ rval = right[j]
+ if lval == rval:
count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
+ else:
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
i += 1
else:
- lindexer[count] = -1
+ j += 1
+
+ # do it again now that result size is known
+
+ lindexer = np.empty(count, dtype=np.int64)
+ rindexer = np.empty(count, dtype=np.int64)
+ result = np.empty(count, dtype=object)
+
+ i = 0
+ j = 0
+ count = 0
+ if nleft > 0 and nright > 0:
+ while True:
+ if i == nleft:
+ break
+ if j == nright:
+ break
+
+ lval = left[i]
+ rval = right[j]
+ if lval == rval:
+ lindexer[count] = i
rindexer[count] = j
result[count] = rval
count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
+ else:
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ i += 1
+ else:
j += 1
return result, lindexer, rindexer
-
@cython.wraparound(False)
@cython.boundscheck(False)
-def inner_join_indexer_float64(ndarray[float64_t] left,
- ndarray[float64_t] right):
+def inner_join_indexer_int8(ndarray[int8_t] left,
+ ndarray[int8_t] right):
'''
Two-pass algorithm for monotonic indexes. Handles many-to-one merges
'''
cdef:
Py_ssize_t i, j, k, nright, nleft, count
- float64_t lval, rval
+ int8_t lval, rval
ndarray[int64_t] lindexer, rindexer
- ndarray[float64_t] result
+ ndarray[int8_t] result
nleft = len(left)
nright = len(right)
@@ -3686,7 +8516,7 @@ def inner_join_indexer_float64(ndarray[float64_t] left,
lindexer = np.empty(count, dtype=np.int64)
rindexer = np.empty(count, dtype=np.int64)
- result = np.empty(count, dtype=np.float64)
+ result = np.empty(count, dtype=np.int8)
i = 0
j = 0
@@ -3728,16 +8558,16 @@ def inner_join_indexer_float64(ndarray[float64_t] left,
@cython.wraparound(False)
@cython.boundscheck(False)
-def inner_join_indexer_object(ndarray[object] left,
- ndarray[object] right):
+def inner_join_indexer_int16(ndarray[int16_t] left,
+ ndarray[int16_t] right):
'''
Two-pass algorithm for monotonic indexes. Handles many-to-one merges
'''
cdef:
Py_ssize_t i, j, k, nright, nleft, count
- object lval, rval
+ int16_t lval, rval
ndarray[int64_t] lindexer, rindexer
- ndarray[object] result
+ ndarray[int16_t] result
nleft = len(left)
nright = len(right)
@@ -3779,7 +8609,7 @@ def inner_join_indexer_object(ndarray[object] left,
lindexer = np.empty(count, dtype=np.int64)
rindexer = np.empty(count, dtype=np.int64)
- result = np.empty(count, dtype=object)
+ result = np.empty(count, dtype=np.int16)
i = 0
j = 0
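The generated indexers above all share one two-pass shape: a first pass walks the two sorted inputs and only counts the output size, then the identical logic runs again to fill the preallocated `lindexer`/`rindexer`/`result` arrays. As a rough illustration (not a line-for-line translation of the Cython, and the helper name is invented for this sketch), the inner-join variant reduces to a single merge-style walk over monotonic arrays:

```python
import numpy as np

def inner_join_indexer(left, right):
    """Pure-Python sketch of the two-pass inner join on sorted arrays.

    Both inputs must be monotonically increasing. Returns the matched
    values plus int64 indexers into each input. Like the generated
    code, it handles many-to-one runs of equal keys.
    """
    pairs = []
    i = j = 0
    nleft, nright = len(left), len(right)
    while i < nleft and j < nright:
        if left[i] == right[j]:
            pairs.append((left[i], i, j))
            # advance the right side while its next element still
            # matches, so duplicate keys are fully paired
            if j + 1 < nright and right[j + 1] == right[j]:
                j += 1
            else:
                i += 1
        elif left[i] < right[j]:
            i += 1
        else:
            j += 1
    result = np.array([p[0] for p in pairs])
    lindexer = np.array([p[1] for p in pairs], dtype=np.int64)
    rindexer = np.array([p[2] for p in pairs], dtype=np.int64)
    return result, lindexer, rindexer
```

The real Cython does this walk twice (count, then fill) so it can allocate exact-size typed buffers; the sketch builds a Python list instead to keep the matching logic visible.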
diff --git a/pandas/src/inference.pyx b/pandas/src/inference.pyx
index 41ac1b3f3480f..ea14245e10731 100644
--- a/pandas/src/inference.pyx
+++ b/pandas/src/inference.pyx
@@ -302,7 +302,7 @@ cdef double fINT64_MAX = <double> INT64_MAX
cdef double fINT64_MIN = <double> INT64_MIN
def maybe_convert_numeric(ndarray[object] values, set na_values,
- convert_empty=True):
+ convert_empty=True, coerce_numeric=False):
'''
Type inference function-- convert strings to numeric (potentially) and
convert to proper dtype array
@@ -346,17 +346,25 @@ def maybe_convert_numeric(ndarray[object] values, set na_values,
complexes[i] = val
seen_complex = 1
else:
- status = floatify(val, &fval)
- floats[i] = fval
- if not seen_float:
- if '.' in val or fval == INF or fval == NEGINF:
- seen_float = 1
- elif 'inf' in val: # special case to handle +/-inf
- seen_float = 1
- elif fval < fINT64_MAX and fval > fINT64_MIN:
- ints[i] = <int64_t> fval
- else:
- seen_float = 1
+ try:
+ status = floatify(val, &fval)
+ floats[i] = fval
+ if not seen_float:
+ if '.' in val or fval == INF or fval == NEGINF:
+ seen_float = 1
+ elif 'inf' in val: # special case to handle +/-inf
+ seen_float = 1
+ elif fval < fINT64_MAX and fval > fINT64_MIN:
+ ints[i] = <int64_t> fval
+ else:
+ seen_float = 1
+ except (TypeError, ValueError):
+ if not coerce_numeric:
+ raise
+
+ floats[i] = nan
+ seen_float = 1
+
if seen_complex:
return complexes
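The new `coerce_numeric` flag changes how parse failures are handled: with it off, an unconvertible value re-raises as before; with it on, the value becomes NaN and the result is forced to float. A rough pure-Python equivalent of that branch (hypothetical helper, not the Cython signature, and using `float()` in place of `floatify`):

```python
import numpy as np

def maybe_convert_numeric_sketch(values, coerce_numeric=False):
    """Convert an object array of strings/numbers to a float array.

    Mirrors the new error branch above: parse failures either
    propagate (coerce_numeric=False) or become NaN and mark the
    result as float (coerce_numeric=True).
    """
    floats = np.empty(len(values), dtype=np.float64)
    for i, val in enumerate(values):
        try:
            floats[i] = float(val)
        except (TypeError, ValueError):
            if not coerce_numeric:
                raise
            floats[i] = np.nan
    return floats
```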
diff --git a/pandas/src/numpy.pxd b/pandas/src/numpy.pxd
index 45c2fc184a911..b005a716e7d5f 100644
--- a/pandas/src/numpy.pxd
+++ b/pandas/src/numpy.pxd
@@ -326,6 +326,7 @@ cdef extern from "numpy/arrayobject.h":
ctypedef unsigned long long npy_uint96
ctypedef unsigned long long npy_uint128
+ ctypedef float npy_float16
ctypedef float npy_float32
ctypedef double npy_float64
ctypedef long double npy_float80
@@ -735,6 +736,7 @@ ctypedef npy_uint64 uint64_t
#ctypedef npy_uint96 uint96_t
#ctypedef npy_uint128 uint128_t
+ctypedef npy_float16 float16_t
ctypedef npy_float32 float32_t
ctypedef npy_float64 float64_t
#ctypedef npy_float80 float80_t
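The `float16_t` typedef above plugs half-precision into the same code-generation machinery (note the pxd maps `npy_float16` to a C `float`; only the numpy-level dtype is truly 16-bit). For the purposes of these routines, numpy's `float16` behaves like the other float dtypes:

```python
import numpy as np

# half-precision arrays support NaN and upcast cleanly
arr = np.array([1.5, 2.5, np.nan], dtype=np.float16)
assert arr.dtype == np.float16
assert np.isnan(arr[2])            # NaN is representable in float16
assert arr.astype(np.float64).dtype == np.float64
```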
diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py
index 5b1d6c31403cb..1017f9cd7c503 100644
--- a/pandas/tests/test_common.py
+++ b/pandas/tests/test_common.py
@@ -406,8 +406,8 @@ def test_2d_float32(self):
expected[[2, 4]] = np.nan
tm.assert_almost_equal(result, expected)
- # test with float64 out buffer
- out = np.empty((len(indexer), arr.shape[1]), dtype='f8')
+ # the out buffer may now be float32 (previously a float64 out buffer was required)
+ out = np.empty((len(indexer), arr.shape[1]), dtype='float32')
com.take_2d(arr, indexer, out=out) # it works!
# axis=1
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index a14d6027361cc..0e3134d940c99 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -140,7 +140,8 @@ def test_to_string_repr_unicode(self):
line = line.decode(get_option("display.encoding"))
except:
pass
- self.assert_(len(line) == line_len)
+ if not line.startswith('Dtype:'):
+ self.assert_(len(line) == line_len)
# it works even if sys.stdin in None
_stdin= sys.stdin
@@ -1056,6 +1057,8 @@ def test_float_trim_zeros(self):
2.03954217305e+10, 5.59897817305e+10]
skip = True
for line in repr(DataFrame({'A': vals})).split('\n'):
+ if line.startswith('Dtype:'):
+ continue
if _three_digit_exp():
self.assert_(('+010' in line) or skip)
else:
@@ -1101,7 +1104,7 @@ def test_to_string(self):
format = '%.4f'.__mod__
result = self.ts.to_string(float_format=format)
result = [x.split()[1] for x in result.split('\n')]
- expected = [format(x) for x in self.ts]
+ expected = [format(x) for x in self.ts] + [u'float64']
self.assertEqual(result, expected)
# empty string
@@ -1116,7 +1119,7 @@ def test_to_string(self):
cp.name = 'foo'
result = cp.to_string(length=True, name=True)
last_line = result.split('\n')[-1].strip()
- self.assertEqual(last_line, "Freq: B, Name: foo, Length: %d" % len(cp))
+ self.assertEqual(last_line, "Freq: B, Name: foo, Length: %d, Dtype: float64" % len(cp))
def test_freq_name_separation(self):
s = Series(np.random.randn(10),
@@ -1131,7 +1134,8 @@ def test_to_string_mixed(self):
expected = (u'0 foo\n'
u'1 NaN\n'
u'2 -1.23\n'
- u'3 4.56')
+ u'3 4.56\n'
+ u'Dtype: object')
self.assertEqual(result, expected)
# but don't count NAs as floats
@@ -1140,7 +1144,8 @@ def test_to_string_mixed(self):
expected = (u'0 foo\n'
'1 NaN\n'
'2 bar\n'
- '3 baz')
+ '3 baz\n'
+ u'Dtype: object')
self.assertEqual(result, expected)
s = Series(['foo', 5, 'bar', 'baz'])
@@ -1148,7 +1153,8 @@ def test_to_string_mixed(self):
expected = (u'0 foo\n'
'1 5\n'
'2 bar\n'
- '3 baz')
+ '3 baz\n'
+ u'Dtype: object')
self.assertEqual(result, expected)
def test_to_string_float_na_spacing(self):
@@ -1160,7 +1166,8 @@ def test_to_string_float_na_spacing(self):
'1 1.5678\n'
'2 NaN\n'
'3 -3.0000\n'
- '4 NaN')
+ '4 NaN\n'
+ u'Dtype: float64')
self.assertEqual(result, expected)
def test_unicode_name_in_footer(self):
@@ -1172,6 +1179,8 @@ def test_float_trim_zeros(self):
vals = [2.08430917305e+10, 3.52205017305e+10, 2.30674817305e+10,
2.03954217305e+10, 5.59897817305e+10]
for line in repr(Series(vals)).split('\n'):
+ if line.startswith('Dtype:'):
+ continue
if _three_digit_exp():
self.assert_('+010' in line)
else:
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 09747ba3f09f0..03fdd53ce19af 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -46,7 +46,38 @@ def _skip_if_no_scipy():
# DataFrame test cases
JOIN_TYPES = ['inner', 'outer', 'left', 'right']
-
+MIXED_FLOAT_DTYPES = ['float16','float32','float64']
+MIXED_INT_DTYPES = ['uint8','uint16','uint32','uint64','int8','int16','int32','int64']
+
+def _check_mixed_float(df, dtype = None):
+ dtypes = dict(A = 'float32', B = 'float32', C = 'float16', D = 'float64')
+ if isinstance(dtype, basestring):
+ dtypes = dict([ (k,dtype) for k, v in dtypes.items() ])
+ elif isinstance(dtype, dict):
+ dtypes.update(dtype)
+ if dtypes.get('A'):
+ assert(df.dtypes['A'] == dtypes['A'])
+ if dtypes.get('B'):
+ assert(df.dtypes['B'] == dtypes['B'])
+ if dtypes.get('C'):
+ assert(df.dtypes['C'] == dtypes['C'])
+ if dtypes.get('D'):
+ assert(df.dtypes['D'] == dtypes['D'])
+
+def _check_mixed_int(df, dtype = None):
+ dtypes = dict(A = 'int32', B = 'uint64', C = 'uint8', D = 'int64')
+ if isinstance(dtype, basestring):
+ dtypes = dict([ (k,dtype) for k, v in dtypes.items() ])
+ elif isinstance(dtype, dict):
+ dtypes.update(dtype)
+ if dtypes.get('A'):
+ assert(df.dtypes['A'] == dtypes['A'])
+ if dtypes.get('B'):
+ assert(df.dtypes['B'] == dtypes['B'])
+ if dtypes.get('C'):
+ assert(df.dtypes['C'] == dtypes['C'])
+ if dtypes.get('D'):
+ assert(df.dtypes['D'] == dtypes['D'])
class CheckIndexing(object):
@@ -121,6 +152,7 @@ def test_getitem_list(self):
self.assertEqual(result.columns.name, 'sth')
def test_setitem_list(self):
+
self.frame['E'] = 'foo'
data = self.frame[['A', 'B']]
self.frame[['B', 'A']] = data
@@ -128,11 +160,11 @@ def test_setitem_list(self):
assert_series_equal(self.frame['B'], data['A'])
assert_series_equal(self.frame['A'], data['B'])
- df = DataFrame(0, range(3), ['tt1', 'tt2'])
+ df = DataFrame(0, range(3), ['tt1', 'tt2'], dtype=np.int_)
df.ix[1, ['tt1', 'tt2']] = [1, 2]
result = df.ix[1, ['tt1', 'tt2']]
- expected = Series([1, 2], df.columns)
+ expected = Series([1, 2], df.columns, dtype=np.int_)
assert_series_equal(result, expected)
df['tt1'] = df['tt2'] = '0'
@@ -171,14 +203,43 @@ def test_getitem_boolean(self):
self.assertRaises(ValueError, self.tsframe.__getitem__, self.tsframe)
- # test df[df >0] works
- bif = self.tsframe[self.tsframe > 0]
- bifw = DataFrame(np.where(self.tsframe > 0, self.tsframe, np.nan),
- index=self.tsframe.index, columns=self.tsframe.columns)
- self.assert_(isinstance(bif, DataFrame))
- self.assert_(bif.shape == self.tsframe.shape)
- assert_frame_equal(bif, bifw)
+ # test df[df > 0]
+ for df in [ self.tsframe, self.mixed_frame, self.mixed_float, self.mixed_int ]:
+
+ data = df._get_numeric_data()
+ bif = df[df > 0]
+ bifw = DataFrame(dict([ (c,np.where(data[c] > 0, data[c], np.nan)) for c in data.columns ]),
+ index=data.index, columns=data.columns)
+
+ # add back other columns to compare
+ for c in df.columns:
+ if c not in bifw:
+ bifw[c] = df[c]
+ bifw = bifw.reindex(columns = df.columns)
+
+ assert_frame_equal(bif, bifw, check_dtype=False)
+ for c in df.columns:
+ if bif[c].dtype != bifw[c].dtype:
+ self.assert_(bif[c].dtype == df[c].dtype)
+
+ def test_getitem_boolean_casting(self):
+
+ # NOTE: dtype preservation on boolean indexing is currently disabled;
+ # everything upcasts to float64 (see the commented-out expected below)
+
+ # don't upcast if we don't need to
+ df = self.tsframe.copy()
+ df['E'] = 1
+ df['E'] = df['E'].astype('int32')
+ df['F'] = 1
+ df['F'] = df['F'].astype('int64')
+ casted = df[df>0]
+ result = casted.get_dtype_counts()
+ #expected = Series({'float64': 4, 'int32' : 1, 'int64' : 1})
+ expected = Series({'float64': 6 })
+ assert_series_equal(result, expected)
+
+
def test_getitem_boolean_list(self):
df = DataFrame(np.arange(12).reshape(3, 4))
@@ -194,9 +255,9 @@ def _checkit(lst):
def test_getitem_boolean_iadd(self):
arr = randn(5, 5)
- df = DataFrame(arr.copy())
- df[df < 0] += 1
+ df = DataFrame(arr.copy(), columns = ['A','B','C','D','E'])
+ df[df < 0] += 1
arr[arr < 0] += 1
assert_almost_equal(df.values, arr)
@@ -341,7 +402,7 @@ def test_setitem_cast(self):
# #669, should not cast?
self.frame['B'] = 0
- self.assert_(self.frame['B'].dtype == np.float64)
+ self.assert_(self.frame['B'].dtype == np.float_)
# cast if pass array of course
self.frame['B'] = np.arange(len(self.frame))
@@ -349,18 +410,18 @@ def test_setitem_cast(self):
self.frame['foo'] = 'bar'
self.frame['foo'] = 0
- self.assert_(self.frame['foo'].dtype == np.int64)
+ self.assert_(self.frame['foo'].dtype == np.int_)
self.frame['foo'] = 'bar'
self.frame['foo'] = 2.5
- self.assert_(self.frame['foo'].dtype == np.float64)
+ self.assert_(self.frame['foo'].dtype == np.float_)
self.frame['something'] = 0
- self.assert_(self.frame['something'].dtype == np.int64)
+ self.assert_(self.frame['something'].dtype == np.int_)
self.frame['something'] = 2
- self.assert_(self.frame['something'].dtype == np.int64)
+ self.assert_(self.frame['something'].dtype == np.int_)
self.frame['something'] = 2.5
- self.assert_(self.frame['something'].dtype == np.float64)
+ self.assert_(self.frame['something'].dtype == np.float_)
def test_setitem_boolean_column(self):
expected = self.frame.copy()
@@ -395,7 +456,7 @@ def test_setitem_corner(self):
self.assertEqual(dm.values.dtype, np.object_)
dm['C'] = 1
- self.assertEqual(dm['C'].dtype, np.int64)
+ self.assertEqual(dm['C'].dtype, np.int_)
# set existing column
dm['A'] = 'bar'
@@ -1114,10 +1175,6 @@ def test_setitem_single_column_mixed_datetime(self):
self.assertRaises(
Exception, df.ix.__setitem__, ('d', 'timestamp'), [nan])
- # prior to 0.10.1 this failed
- # self.assertRaises(TypeError, df.ix.__setitem__, ('c','timestamp'),
- # nan)
-
def test_setitem_frame(self):
piece = self.frame.ix[:2, ['A', 'B']]
self.frame.ix[-2:, ['A', 'B']] = piece.values
@@ -1562,10 +1619,30 @@ def setUp(self):
self.frame = _frame.copy()
self.frame2 = _frame2.copy()
- self.intframe = _intframe.copy()
+
+ # force these all to int64 to avoid platform testing issues
+ self.intframe = DataFrame(dict([ (c,s) for c,s in _intframe.iteritems() ]), dtype = np.int64)
self.tsframe = _tsframe.copy()
self.mixed_frame = _mixed_frame.copy()
-
+ self.mixed_float = DataFrame({ 'A': _frame['A'].copy().astype('float32'),
+ 'B': _frame['B'].copy().astype('float32'),
+ 'C': _frame['C'].copy().astype('float16'),
+ 'D': _frame['D'].copy().astype('float64') })
+ self.mixed_float2 = DataFrame({ 'A': _frame2['A'].copy().astype('float32'),
+ 'B': _frame2['B'].copy().astype('float32'),
+ 'C': _frame2['C'].copy().astype('float16'),
+ 'D': _frame2['D'].copy().astype('float64') })
+ self.mixed_int = DataFrame({ 'A': _intframe['A'].copy().astype('int32'),
+ 'B': np.ones(len(_intframe['B']),dtype='uint64'),
+ 'C': _intframe['C'].copy().astype('uint8'),
+ 'D': _intframe['D'].copy().astype('int64') })
+ self.all_mixed = DataFrame({'a': 1., 'b': 2, 'c': 'foo', 'float32' : np.array([1.]*10,dtype='float32'),
+ 'int32' : np.array([1]*10,dtype='int32'),
+ }, index=np.arange(10))
+ #self.all_mixed = DataFrame({'a': 1., 'b': 2, 'c': 'foo', 'float32' : np.array([1.]*10,dtype='float32'),
+ # 'int32' : np.array([1]*10,dtype='int32'), 'timestamp' : Timestamp('20010101'),
+ # }, index=np.arange(10))
+
self.ts1 = tm.makeTimeSeries()
self.ts2 = tm.makeTimeSeries()[5:]
self.ts3 = tm.makeTimeSeries()[-5:]
@@ -1806,6 +1883,44 @@ def test_constructor_dtype_list_data(self):
self.assert_(df.ix[1, 0] is None)
self.assert_(df.ix[0, 1] == '2')
+ def test_constructor_mixed_dtypes(self):
+
+ def _make_mixed_dtypes_df(typ, ad = None):
+
+ if typ == 'int':
+ dtypes = MIXED_INT_DTYPES
+ arrays = [ np.array(np.random.rand(10), dtype = d) for d in dtypes ]
+ elif typ == 'float':
+ dtypes = MIXED_FLOAT_DTYPES
+ arrays = [ np.array(np.random.randint(10, size=10), dtype = d) for d in dtypes ]
+
+ zipper = zip(dtypes,arrays)
+ for d,a in zipper:
+ assert(a.dtype == d)
+ if ad is None:
+ ad = dict()
+ ad.update(dict([ (d,a) for d,a in zipper ]))
+ return DataFrame(ad)
+
+ def _check_mixed_dtypes(df, dtypes = None):
+ if dtypes is None:
+ dtypes = MIXED_FLOAT_DTYPES + MIXED_INT_DTYPES
+ for d in dtypes:
+ if d in df:
+ assert(df.dtypes[d] == d)
+
+ # mixed floating and integer dtypes coexist in the same frame
+ df = _make_mixed_dtypes_df('float')
+ _check_mixed_dtypes(df)
+
+ # add lots of types
+ df = _make_mixed_dtypes_df('float', dict(A = 1, B = 'foo', C = 'bar'))
+ _check_mixed_dtypes(df)
+
+ # GH 622
+ df = _make_mixed_dtypes_df('int')
+ _check_mixed_dtypes(df)
+
def test_constructor_rec(self):
rec = self.frame.to_records(index=False)
@@ -1975,7 +2090,7 @@ def test_constructor_dict_of_tuples(self):
result = DataFrame(data)
expected = DataFrame(dict((k, list(v)) for k, v in data.iteritems()))
- assert_frame_equal(result, expected)
+ assert_frame_equal(result, expected, check_dtype=False)
def test_constructor_ndarray(self):
mat = np.zeros((2, 3), dtype=float)
@@ -1988,7 +2103,7 @@ def test_constructor_ndarray(self):
# cast type
frame = DataFrame(mat, columns=['A', 'B', 'C'],
- index=[1, 2], dtype=int)
+ index=[1, 2], dtype=np.int64)
self.assert_(frame.values.dtype == np.int64)
# 1-D input
@@ -2040,7 +2155,7 @@ def test_constructor_maskedarray(self):
# cast type
frame = DataFrame(mat, columns=['A', 'B', 'C'],
- index=[1, 2], dtype=int)
+ index=[1, 2], dtype=np.int64)
self.assert_(frame.values.dtype == np.int64)
# Check non-masked values
@@ -2098,7 +2213,7 @@ def test_constructor_maskedarray_nonfloat(self):
# cast type
frame = DataFrame(mat, columns=['A', 'B', 'C'],
- index=[1, 2], dtype=float)
+ index=[1, 2], dtype=np.float64)
self.assert_(frame.values.dtype == np.float64)
# Check non-masked values
@@ -2174,9 +2289,9 @@ def test_constructor_scalar_inference(self):
'float': 3., 'complex': 4j, 'object': 'foo'}
df = DataFrame(data, index=np.arange(10))
- self.assert_(df['int'].dtype == np.int64)
+ self.assert_(df['int'].dtype == np.int_)
self.assert_(df['bool'].dtype == np.bool_)
- self.assert_(df['float'].dtype == np.float64)
+ self.assert_(df['float'].dtype == np.float_)
self.assert_(df['complex'].dtype == np.complex128)
self.assert_(df['object'].dtype == np.object_)
@@ -2192,7 +2307,7 @@ def test_constructor_DataFrame(self):
df = DataFrame(self.frame)
assert_frame_equal(df, self.frame)
- df_casted = DataFrame(self.frame, dtype=int)
+ df_casted = DataFrame(self.frame, dtype=np.int64)
self.assert_(df_casted.values.dtype == np.int64)
def test_constructor_more(self):
@@ -2229,7 +2344,7 @@ def test_constructor_more(self):
# int cast
dm = DataFrame({'A': np.ones(10, dtype=int),
- 'B': np.ones(10, dtype=float)},
+ 'B': np.ones(10, dtype=np.float64)},
index=np.arange(10))
self.assertEqual(len(dm.columns), 2)
@@ -2339,7 +2454,7 @@ def test_constructor_scalar(self):
idx = Index(range(3))
df = DataFrame({"a": 0}, index=idx)
expected = DataFrame({"a": [0, 0, 0]}, index=idx)
- assert_frame_equal(df, expected)
+ assert_frame_equal(df, expected, check_dtype=False)
def test_constructor_Series_copy_bug(self):
df = DataFrame(self.frame['A'], index=self.frame.index, columns=['A'])
@@ -2523,6 +2638,12 @@ def test_astype(self):
columns=self.frame.columns)
assert_frame_equal(casted, expected)
+ casted = self.frame.astype(np.int32)
+ expected = DataFrame(self.frame.values.astype(np.int32),
+ index=self.frame.index,
+ columns=self.frame.columns)
+ assert_frame_equal(casted, expected)
+
self.frame['foo'] = '5'
casted = self.frame.astype(int)
expected = DataFrame(self.frame.values.astype(int),
@@ -2530,6 +2651,81 @@ def test_astype(self):
columns=self.frame.columns)
assert_frame_equal(casted, expected)
+ # mixed casting
+ def _check_cast(df, v):
+ self.assert_(list(set([ s.dtype.name for _, s in df.iteritems() ]))[0] == v)
+
+ mn = self.all_mixed._get_numeric_data().copy()
+ mn['little_float'] = np.array(12345.,dtype='float16')
+ mn['big_float'] = np.array(123456789101112.,dtype='float64')
+
+ casted = mn.astype('float64')
+ _check_cast(casted, 'float64')
+
+ casted = mn.astype('int64')
+ _check_cast(casted, 'int64')
+
+ casted = self.mixed_float.reindex(columns = ['A','B']).astype('float32')
+ _check_cast(casted, 'float32')
+
+ casted = mn.reindex(columns = ['little_float']).astype('float16')
+ _check_cast(casted, 'float16')
+
+ casted = self.mixed_float.reindex(columns = ['A','B']).astype('float16')
+ _check_cast(casted, 'float16')
+
+ casted = mn.astype('float32')
+ _check_cast(casted, 'float32')
+
+ casted = mn.astype('int32')
+ _check_cast(casted, 'int32')
+
+ # to object
+ casted = mn.astype('O')
+ _check_cast(casted, 'object')
+
+ def test_astype_with_exclude_string(self):
+ df = self.frame.copy()
+ expected = self.frame.astype(int)
+ df['string'] = 'foo'
+ casted = df.astype(int, raise_on_error = False)
+
+ expected['string'] = 'foo'
+ assert_frame_equal(casted, expected)
+
+ df = self.frame.copy()
+ expected = self.frame.astype(np.int32)
+ df['string'] = 'foo'
+ casted = df.astype(np.int32, raise_on_error = False)
+
+ expected['string'] = 'foo'
+ assert_frame_equal(casted, expected)
+
+ def test_astype_with_view(self):
+
+ tf = self.mixed_float.reindex(columns = ['A','B','C'])
+ self.assertRaises(TypeError, self.frame.astype, np.int32, copy = False)
+
+ self.assertRaises(TypeError, tf.astype, np.int32, copy = False)
+
+ self.assertRaises(TypeError, tf.astype, np.int64, copy = False)
+ casted = tf.astype(np.int64)
+
+ self.assertRaises(TypeError, tf.astype, np.float32, copy = False)
+ casted = tf.astype(np.float32)
+
+ # this is the only real reason to do it this way
+ tf = np.round(self.frame).astype(np.int32)
+ casted = tf.astype(np.float32, copy = False)
+ #self.assert_(casted.values.data == tf.values.data)
+
+ tf = self.frame.astype(np.float64)
+ casted = tf.astype(np.int64, copy = False)
+ #self.assert_(casted.values.data == tf.values.data)
+
+ # can't view to an object array
+ self.assertRaises(Exception, self.frame.astype, 'O', copy = False)
+
def test_astype_cast_nan_int(self):
df = DataFrame(data={"Values": [1.0, 2.0, 3.0, np.nan]})
self.assertRaises(ValueError, df.astype, np.int64)
@@ -2634,7 +2830,7 @@ def _check_all_orients(df, dtype=None):
# dtypes
_check_all_orients(DataFrame(biggie, dtype=np.float64),
dtype=np.float64)
- _check_all_orients(DataFrame(biggie, dtype=np.int), dtype=np.int)
+ _check_all_orients(DataFrame(biggie, dtype=np.int64), dtype=np.int64)
_check_all_orients(DataFrame(biggie, dtype='<U3'), dtype='<U3')
# empty
@@ -2760,17 +2956,20 @@ def test_from_records_nones(self):
self.assert_(np.isnan(df['c'][0]))
def test_from_records_iterator(self):
- arr = np.array([(1.0, 2), (3.0, 4), (5., 6), (7., 8)],
- dtype=[('x', float), ('y', int)])
+ arr = np.array([(1.0, 1.0, 2, 2), (3.0, 3.0, 4, 4), (5., 5., 6, 6), (7., 7., 8, 8)],
+ dtype=[('x', np.float64), ('u', np.float32), ('y', np.int64), ('z', np.int32) ])
df = DataFrame.from_records(iter(arr), nrows=2)
- xp = DataFrame({'x': np.array([1.0, 3.0], dtype=float),
- 'y': np.array([2, 4], dtype=int)})
- assert_frame_equal(df, xp)
+ xp = DataFrame({'x': np.array([1.0, 3.0], dtype=np.float64),
+ 'u': np.array([1.0, 3.0], dtype=np.float32),
+ 'y': np.array([2, 4], dtype=np.int64),
+ 'z': np.array([2, 4], dtype=np.int32)})
+ assert_frame_equal(df.reindex_like(xp), xp)
+ # no dtypes specified here, so just compare with the default
arr = [(1.0, 2), (3.0, 4), (5., 6), (7., 8)]
df = DataFrame.from_records(iter(arr), columns=['x', 'y'],
nrows=2)
- assert_frame_equal(df, xp)
+ assert_frame_equal(df, xp.reindex(columns=['x','y']), check_dtype=False)
def test_from_records_columns_not_modified(self):
tuples = [(1, 2, 3),
@@ -2867,30 +3066,60 @@ def test_join_str_datetime(self):
self.assert_(len(tst.columns) == 3)
def test_from_records_sequencelike(self):
- df = DataFrame({'A': np.random.randn(6),
- 'B': np.arange(6),
- 'C': ['foo'] * 6,
- 'D': np.array([True, False] * 3, dtype=bool)})
-
- tuples = [tuple(x) for x in df.values]
- lists = [list(x) for x in tuples]
- asdict = dict((x, y) for x, y in df.iteritems())
-
- result = DataFrame.from_records(tuples, columns=df.columns)
- result2 = DataFrame.from_records(lists, columns=df.columns)
- result3 = DataFrame.from_records(asdict, columns=df.columns)
-
- assert_frame_equal(result, df)
+ df = DataFrame({'A' : np.array(np.random.randn(6), dtype = np.float64),
+ 'A1': np.array(np.random.randn(6), dtype = np.float64),
+ 'B' : np.array(np.arange(6), dtype = np.int64),
+ 'C' : ['foo'] * 6,
+ 'D' : np.array([True, False] * 3, dtype=bool),
+ 'E' : np.array(np.random.randn(6), dtype = np.float32),
+ 'E1': np.array(np.random.randn(6), dtype = np.float32),
+ 'F' : np.array(np.arange(6), dtype = np.int32) })
+
+ # this is actually tricky: create the record-like arrays while keeping the dtypes intact
+ blocks = df.blocks
+ tuples = []
+ columns = []
+ dtypes = []
+ for dtype, b in blocks.iteritems():
+ columns.extend(b.columns)
+ dtypes.extend([ (c,np.dtype(dtype).descr[0][1]) for c in b.columns ])
+ for i in xrange(len(df.index)):
+ tup = []
+ for _, b in blocks.iteritems():
+ tup.extend(b.irow(i).values)
+ tuples.append(tuple(tup))
+
+ recarray = np.array(tuples, dtype=dtypes).view(np.recarray)
+ recarray2 = df.to_records()
+ lists = [list(x) for x in tuples]
+
+ # tuples (lose the dtype info)
+ result = DataFrame.from_records(tuples, columns=columns).reindex(columns=df.columns)
+
+ # created recarray and with to_records recarray (have dtype info)
+ result2 = DataFrame.from_records(recarray, columns=columns).reindex(columns=df.columns)
+ result3 = DataFrame.from_records(recarray2, columns=columns).reindex(columns=df.columns)
+
+ # list of tuples (no dtype info)
+ result4 = DataFrame.from_records(lists, columns=columns).reindex(columns=df.columns)
+
+ assert_frame_equal(result, df, check_dtype=False)
assert_frame_equal(result2, df)
assert_frame_equal(result3, df)
+ assert_frame_equal(result4, df, check_dtype=False)
+ # tuples is in the order of the columns
result = DataFrame.from_records(tuples)
- self.assert_(np.array_equal(result.columns, range(4)))
+ self.assert_(np.array_equal(result.columns, range(8)))
- # test exclude parameter
- result = DataFrame.from_records(tuples, exclude=[0, 1, 3])
- result.columns = ['C']
- assert_frame_equal(result, df[['C']])
+ # test the exclude parameter; we cast the results here (as we don't have dtype info to recover)
+ columns_to_test = [ columns.index('C'), columns.index('E1') ]
+
+ exclude = list(set(xrange(8))-set(columns_to_test))
+ result = DataFrame.from_records(tuples, exclude=exclude)
+ result.columns = [ columns[i] for i in sorted(columns_to_test) ]
+ assert_series_equal(result['C'], df['C'])
+ assert_series_equal(result['E1'], df['E1'].astype('float64'))
# empty case
result = DataFrame.from_records([], columns=['foo', 'bar', 'baz'])
@@ -2901,6 +3130,35 @@ def test_from_records_sequencelike(self):
self.assertEqual(len(result), 0)
self.assertEqual(len(result.columns), 0)
+ def test_from_records_dictlike(self):
+
+ # test the dict methods
+ df = DataFrame({'A' : np.array(np.random.randn(6), dtype = np.float64),
+ 'A1': np.array(np.random.randn(6), dtype = np.float64),
+ 'B' : np.array(np.arange(6), dtype = np.int64),
+ 'C' : ['foo'] * 6,
+ 'D' : np.array([True, False] * 3, dtype=bool),
+ 'E' : np.array(np.random.randn(6), dtype = np.float32),
+ 'E1': np.array(np.random.randn(6), dtype = np.float32),
+ 'F' : np.array(np.arange(6), dtype = np.int32) })
+
+ # columns are in a different order here than the actual items iterated from the dict
+ columns = []
+ for dtype, b in df.blocks.iteritems():
+ columns.extend(b.columns)
+
+ asdict = dict((x, y) for x, y in df.iteritems())
+ asdict2 = dict((x, y.values) for x, y in df.iteritems())
+
+ # dict of series & dict of ndarrays (have dtype info)
+ results = []
+ results.append(DataFrame.from_records(asdict).reindex(columns=df.columns))
+ results.append(DataFrame.from_records(asdict, columns=columns).reindex(columns=df.columns))
+ results.append(DataFrame.from_records(asdict2, columns=columns).reindex(columns=df.columns))
+
+ for r in results:
+ assert_frame_equal(r, df)
+
def test_from_records_with_index_data(self):
df = DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'])
@@ -3127,6 +3385,22 @@ def test_insert(self):
self.assert_(np.array_equal(df.columns, ['foo', 'c', 'bar', 'b', 'a']))
assert_almost_equal(df['c'], df['bar'])
+ # diff dtype
+
+ # new item
+ df['x'] = df['a'].astype('float32')
+ result = Series(dict(float64 = 5, float32 = 1))
+ self.assert_((df.get_dtype_counts() == result).all() == True)
+
+ # replacing current (in different block)
+ df['a'] = df['a'].astype('float32')
+ result = Series(dict(float64 = 4, float32 = 2))
+ self.assert_((df.get_dtype_counts() == result).all() == True)
+
+ df['y'] = df['a'].astype('int32')
+ result = Series(dict(float64 = 4, float32 = 2, int32 = 1))
+ self.assert_((df.get_dtype_counts() == result).all() == True)
+
self.assertRaises(Exception, df.insert, 1, 'a', df['b'])
self.assertRaises(Exception, df.insert, 1, 'c', df['b'])
@@ -3285,10 +3559,13 @@ def _check_unary_op(op):
_check_unary_op(operator.neg)
def test_logical_typeerror(self):
- self.assertRaises(TypeError, self.frame.__eq__, 'foo')
- self.assertRaises(TypeError, self.frame.__lt__, 'foo')
- self.assertRaises(TypeError, self.frame.__gt__, 'foo')
- self.assertRaises(TypeError, self.frame.__ne__, 'foo')
+ if py3compat.PY3:
+ pass
+ else:
+ self.assertRaises(TypeError, self.frame.__eq__, 'foo')
+ self.assertRaises(TypeError, self.frame.__lt__, 'foo')
+ self.assertRaises(TypeError, self.frame.__gt__, 'foo')
+ self.assertRaises(TypeError, self.frame.__ne__, 'foo')
def test_constructor_lists_to_object_dtype(self):
# from #1074
@@ -3339,6 +3616,24 @@ def test_arith_flex_frame(self):
exp = f(self.frame, 2 * self.frame)
assert_frame_equal(result, exp)
+ # vs mix float
+ result = getattr(self.mixed_float, op)(2 * self.mixed_float)
+ exp = f(self.mixed_float, 2 * self.mixed_float)
+ assert_frame_equal(result, exp)
+ _check_mixed_float(result)
+
+ # vs mix int
+ if op in ['add','sub','mul']:
+ result = getattr(self.mixed_int, op)(2 + self.mixed_int)
+ exp = f(self.mixed_int, 2 + self.mixed_int)
+
+ # overflow in the uint
+ dtype = None
+ if op in ['sub']:
+ dtype = dict(B = 'object')
+ assert_frame_equal(result, exp)
+ _check_mixed_int(result, dtype = dtype)
+
# res_add = self.frame.add(self.frame)
# res_sub = self.frame.sub(self.frame)
# res_mul = self.frame.mul(self.frame)
@@ -3617,6 +3912,22 @@ def test_combineFrame(self):
assert_frame_equal(reverse + self.frame, self.frame * 2)
+ # mix vs float64, upcast
+ added = self.frame + self.mixed_float
+ _check_mixed_float(added, dtype = 'float64')
+ added = self.mixed_float + self.frame
+ _check_mixed_float(added, dtype = 'float64')
+
+ # mix vs mix
+ added = self.mixed_float + self.mixed_float2
+ _check_mixed_float(added)
+ added = self.mixed_float2 + self.mixed_float
+ _check_mixed_float(added)
+
+ # with int
+ added = self.frame + self.mixed_int
+ _check_mixed_float(added, dtype = 'float64')
+
def test_combineSeries(self):
# Series
@@ -3637,6 +3948,20 @@ def test_combineSeries(self):
self.assert_('E' in larger_added)
self.assert_(np.isnan(larger_added['E']).all())
+ # vs mix (upcast) as needed
+ added = self.mixed_float + series
+ _check_mixed_float(added, dtype = 'float64')
+ added = self.mixed_float + series.astype('float32')
+ _check_mixed_float(added, dtype = dict(C = 'float32'))
+ added = self.mixed_float + series.astype('float16')
+ _check_mixed_float(added)
+
+ # vs int
+ added = self.mixed_int + (100*series).astype('int64')
+ _check_mixed_int(added, dtype = dict(A = 'int64', B = 'float64', C = 'int64', D = 'int64'))
+ added = self.mixed_int + (100*series).astype('int32')
+ _check_mixed_int(added, dtype = dict(A = 'int32', B = 'float64', C = 'int32', D = 'int64'))
+
# TimeSeries
import sys
@@ -3678,6 +4003,12 @@ def test_combineFunc(self):
result = self.frame * 2
self.assert_(np.array_equal(result.values, self.frame.values * 2))
+ # vs mix
+ result = self.mixed_float * 2
+ for c, s in result.iteritems():
+ self.assert_(np.array_equal(s.values, self.mixed_float[c].values * 2))
+ _check_mixed_float(result)
+
result = self.empty * 2
self.assert_(result.index is self.empty.index)
self.assertEqual(len(result.columns), 0)
@@ -4075,11 +4406,40 @@ def test_dtypes(self):
assert_series_equal(result, expected)
def test_convert_objects(self):
+
oops = self.mixed_frame.T.T
converted = oops.convert_objects()
assert_frame_equal(converted, self.mixed_frame)
self.assert_(converted['A'].dtype == np.float64)
+ # force numeric conversion
+ self.mixed_frame['H'] = '1.'
+ self.mixed_frame['I'] = '1'
+
+ # add in some items that will be nan
+ l = len(self.mixed_frame)
+ self.mixed_frame['J'] = '1.'
+ self.mixed_frame['K'] = '1'
+ self.mixed_frame.ix[0:5,['J','K']] = 'garbled'
+ converted = self.mixed_frame.convert_objects(convert_numeric=True)
+ self.assert_(converted['H'].dtype == 'float64')
+ self.assert_(converted['I'].dtype == 'int64')
+ self.assert_(converted['J'].dtype == 'float64')
+ self.assert_(converted['K'].dtype == 'float64')
+ self.assert_(len(converted['J'].dropna()) == l-5)
+ self.assert_(len(converted['K'].dropna()) == l-5)
+
+ # via astype
+ converted = self.mixed_frame.copy()
+ converted['H'] = converted['H'].astype('float64')
+ converted['I'] = converted['I'].astype('int64')
+ self.assert_(converted['H'].dtype == 'float64')
+ self.assert_(converted['I'].dtype == 'int64')
+
+ # via astype, but errors
+ converted = self.mixed_frame.copy()
+ self.assertRaises(Exception, converted['H'].astype, 'int32')
+
def test_convert_objects_no_conversion(self):
mixed1 = DataFrame(
{'a': [1, 2, 3], 'b': [4.0, 5, 6], 'c': ['x', 'y', 'z']})
@@ -4198,6 +4558,11 @@ def test_as_matrix_duplicates(self):
self.assertTrue(np.array_equal(result, expected))
+ def test_as_blocks(self):
+ frame = self.mixed_float
+ mat = frame.blocks
+ self.assert_(set([ x.name for x in frame.dtypes.values ]) == set(mat.keys()))
+
def test_values(self):
self.frame.values[:, 0] = 5.
self.assert_((self.frame.values[:, 0] == 5).all())
@@ -4664,12 +5029,27 @@ def test_fillna(self):
# mixed type
self.mixed_frame['foo'][5:20] = nan
self.mixed_frame['A'][-10:] = nan
-
result = self.mixed_frame.fillna(value=0)
+ result = self.mixed_frame.fillna(method='pad')
self.assertRaises(ValueError, self.tsframe.fillna)
self.assertRaises(ValueError, self.tsframe.fillna, 5, method='ffill')
+ # mixed numeric (but no float16)
+ mf = self.mixed_float.reindex(columns=['A','B','D'])
+ mf['A'][-10:] = nan
+ result = mf.fillna(value=0)
+ _check_mixed_float(result, dtype = dict(C = None))
+
+ result = mf.fillna(method='pad')
+ _check_mixed_float(result, dtype = dict(C = None))
+
+ # empty frame (GH #2778)
+ df = DataFrame(columns=['x'])
+ for m in ['pad','backfill']:
+ df.x.fillna(method=m,inplace=1)
+ df.x.fillna(method=m)
+
def test_ffill(self):
self.tsframe['A'][:5] = nan
self.tsframe['A'][-5:] = nan
@@ -4845,6 +5225,27 @@ def test_replace_interpolate(self):
expected = self.tsframe.fillna(method='bfill')
assert_frame_equal(result, expected)
+ def test_replace_for_new_dtypes(self):
+
+ # dtypes
+ tsframe = self.tsframe.copy().astype(np.float32)
+ tsframe['A'][:5] = nan
+ tsframe['A'][-5:] = nan
+
+ zero_filled = tsframe.replace(nan, -1e8)
+ assert_frame_equal(zero_filled, tsframe.fillna(-1e8))
+ assert_frame_equal(zero_filled.replace(-1e8, nan), tsframe)
+
+ tsframe['A'][:5] = nan
+ tsframe['A'][-5:] = nan
+ tsframe['B'][:5] = -1e8
+
+ b = tsframe['B']
+ b[b == -1e8] = nan
+ tsframe['B'] = b
+ result = tsframe.fillna(method='bfill')
+ assert_frame_equal(result, tsframe.fillna(method='bfill'))
+
def test_replace_dtypes(self):
# int
df = DataFrame({'ints': [1, 2, 3]})
@@ -4852,6 +5253,16 @@ def test_replace_dtypes(self):
expected = DataFrame({'ints': [0, 2, 3]})
assert_frame_equal(result, expected)
+ df = DataFrame({'ints': [1, 2, 3]}, dtype=np.int32)
+ result = df.replace(1, 0)
+ expected = DataFrame({'ints': [0, 2, 3]}, dtype=np.int32)
+ assert_frame_equal(result, expected)
+
+ df = DataFrame({'ints': [1, 2, 3]}, dtype=np.int16)
+ result = df.replace(1, 0)
+ expected = DataFrame({'ints': [0, 2, 3]}, dtype=np.int16)
+ assert_frame_equal(result, expected)
+
# bools
df = DataFrame({'bools': [True, False, True]})
result = df.replace(False, True)
@@ -4961,6 +5372,16 @@ def test_replace_limit(self):
assert_frame_equal(padded, self.tsframe.fillna(method='bfill',
axis=1, limit=2))
+ def test_combine_multiple_frames_dtypes(self):
+ from pandas import concat
+
+ # GH 2759
+ A = DataFrame(data=np.ones((10, 2)), columns=['foo', 'bar'], dtype=np.float64)
+ B = DataFrame(data=np.ones((10, 2)), dtype=np.float32)
+ results = concat((A, B), axis=1).get_dtype_counts()
+ expected = Series(dict( float64 = 2, float32 = 2 ))
+ assert_series_equal(results,expected)
+
def test_truncate(self):
offset = datetools.bday
@@ -5324,6 +5745,15 @@ def test_align(self):
method=None, fill_value=0)
self.assert_(bf.index.equals(Index([])))
+ # mixed floats/ints
+ af, bf = self.mixed_float.align(other.ix[:, 0], join='inner', axis=1,
+ method=None, fill_value=0)
+ self.assert_(bf.index.equals(Index([])))
+
+ af, bf = self.mixed_int.align(other.ix[:, 0], join='inner', axis=1,
+ method=None, fill_value=0)
+ self.assert_(bf.index.equals(Index([])))
+
# try to align dataframe to series along bad axis
self.assertRaises(ValueError, self.frame.align, af.ix[0, :3],
join='inner', axis=2)
@@ -5405,8 +5835,9 @@ def _check_align_fill(self, kind, meth, ax, fax):
def test_align_int_fill_bug(self):
# GH #910
- X = np.random.rand(10, 10)
+ X = np.arange(10*10, dtype='float64').reshape(10, 10)
Y = np.ones((10, 1), dtype=int)
+
df1 = DataFrame(X)
df1['0.X'] = Y.squeeze()
@@ -5417,45 +5848,130 @@ def test_align_int_fill_bug(self):
assert_frame_equal(result, expected)
def test_where(self):
- df = DataFrame(np.random.randn(5, 3))
- cond = df > 0
-
- other1 = df + 1
- rs = df.where(cond, other1)
- rs2 = df.where(cond.values, other1)
- for k, v in rs.iteritems():
- assert_series_equal(v, np.where(cond[k], df[k], other1[k]))
- assert_frame_equal(rs, rs2)
-
- # it works!
- rs = df.where(cond[1:], other1)
-
- other2 = (df + 1).values
- rs = df.where(cond, other2)
- for k, v in rs.iteritems():
- assert_series_equal(v, np.where(cond[k], df[k], other2[:, k]))
-
- other5 = np.nan
- rs = df.where(cond, other5)
- for k, v in rs.iteritems():
- assert_series_equal(v, np.where(cond[k], df[k], other5))
+ default_frame = DataFrame(np.random.randn(5, 3),columns=['A','B','C'])
+
+ def _safe_add(df):
+ # only add to the numeric items
+ return DataFrame(dict([ (c,s+1) if issubclass(s.dtype.type, (np.integer,np.floating)) else (c,s) for c, s in df.iteritems() ]))
+
+ def _check_get(df, cond, check_dtypes = True):
+ other1 = _safe_add(df)
+ rs = df.where(cond, other1)
+ rs2 = df.where(cond.values, other1)
+ for k, v in rs.iteritems():
+ assert_series_equal(v, np.where(cond[k], df[k], other1[k]))
+ assert_frame_equal(rs, rs2)
+
+ # dtypes
+ if check_dtypes:
+ self.assert_((rs.dtypes == df.dtypes).all() == True)
+
+
+ # check getting
+ for df in [ default_frame, self.mixed_frame, self.mixed_float, self.mixed_int ]:
+ cond = df > 0
+ _check_get(df, cond)
+
+ # aligning
+ def _check_align(df, cond, other, check_dtypes = True):
+ rs = df.where(cond, other)
+ for i, k in enumerate(rs.columns):
+ v = rs[k]
+ d = df[k].values
+ c = cond[k].reindex(df[k].index).fillna(False).values
+
+ if np.isscalar(other):
+ o = other
+ else:
+ if isinstance(other,np.ndarray):
+ o = Series(other[:,i],index=v.index).values
+ else:
+ o = other[k].values
+
+ assert_series_equal(v, Series(np.where(c, d, o),index=v.index))
+
+ # dtypes
+ # can't check dtype when other is an ndarray
+ if check_dtypes and not isinstance(other,np.ndarray):
+ self.assert_((rs.dtypes == df.dtypes).all() == True)
+
+ for df in [ self.mixed_frame, self.mixed_float, self.mixed_int ]:
+
+ # other is a frame
+ cond = (df > 0)[1:]
+ _check_align(df, cond, _safe_add(df))
+
+ # check other is ndarray
+ cond = df > 0
+ _check_align(df, cond, (_safe_add(df).values))
+
+ # integers are upcast, so don't check the dtypes
+ cond = df > 0
+ check_dtypes = all([ not issubclass(s.type,np.integer) for s in df.dtypes ])
+ _check_align(df, cond, np.nan, check_dtypes = check_dtypes)
+ # invalid conditions
+ df = default_frame
err1 = (df + 1).values[0:2, :]
self.assertRaises(ValueError, df.where, cond, err1)
err2 = cond.ix[:2, :].values
+ other1 = _safe_add(df)
self.assertRaises(ValueError, df.where, err2, other1)
- # invalid conditions
self.assertRaises(ValueError, df.mask, True)
self.assertRaises(ValueError, df.mask, 0)
# where inplace
- df = DataFrame(np.random.randn(5, 3))
+ def _check_set(df, cond, check_dtypes = True):
+ dfi = df.copy()
+ econd = cond.reindex_like(df).fillna(True)
+ expected = dfi.mask(~econd)
+ dfi.where(cond, np.nan, inplace=True)
+ assert_frame_equal(dfi, expected)
- expected = df.mask(df < 0)
- df.where(df >= 0, np.nan, inplace=True)
- assert_frame_equal(df, expected)
+ # dtypes (and confirm upcasts)
+ if check_dtypes:
+ for k, v in df.dtypes.iteritems():
+ if issubclass(v.type,np.integer):
+ v = np.dtype('float64')
+ self.assert_(dfi[k].dtype == v)
+
+ for df in [ default_frame, self.mixed_frame, self.mixed_float, self.mixed_int ]:
+
+ cond = df > 0
+ _check_set(df, cond)
+
+ cond = df >= 0
+ _check_set(df, cond)
+
+ # aligning
+ cond = (df >= 0)[1:]
+ _check_set(df, cond)
+
+ def test_where_bug(self):
+
+ # GH 2793
+
+ df = DataFrame({'a': [1.0, 2.0, 3.0, 4.0], 'b': [4.0, 3.0, 2.0, 1.0]}, dtype = 'float64')
+ expected = DataFrame({'a': [np.nan, np.nan, 3.0, 4.0], 'b': [4.0, 3.0, np.nan, np.nan]}, dtype = 'float64')
+ result = df.where(df > 2, np.nan)
+ assert_frame_equal(result, expected)
+
+ result = df.copy()
+ result.where(result > 2, np.nan, inplace=True)
+ assert_frame_equal(result, expected)
+
+ # mixed
+ for dtype in ['int16','int8','int32','int64']:
+ df = DataFrame({'a': np.array([1, 2, 3, 4],dtype=dtype), 'b': np.array([4.0, 3.0, 2.0, 1.0], dtype = 'float64') })
+ expected = DataFrame({'a': [np.nan, np.nan, 3.0, 4.0], 'b': [4.0, 3.0, np.nan, np.nan]}, dtype = 'float64')
+ result = df.where(df > 2, np.nan)
+ assert_frame_equal(result, expected)
+
+ result = df.copy()
+ result.where(result > 2, np.nan, inplace=True)
+ assert_frame_equal(result, expected)
def test_mask(self):
df = DataFrame(np.random.randn(5, 3))
@@ -5568,6 +6084,13 @@ def test_diff(self):
rs = DataFrame({'s': s}).diff()
self.assertEqual(rs.s[1], 1)
+ # mixed numeric
+ tf = self.tsframe.astype('float32')
+ the_diff = tf.diff(1)
+ assert_series_equal(the_diff['A'],
+ tf['A'] - tf['A'].shift(1))
+
+
def test_diff_mixed_dtype(self):
df = DataFrame(np.random.randn(5, 3))
df['A'] = np.array([1, 2, 3, 4, 5], dtype=object)
@@ -5938,7 +6461,7 @@ def test_apply_convert_objects(self):
'F': np.random.randn(11)})
result = data.apply(lambda x: x, axis=1)
- assert_frame_equal(result, data)
+ assert_frame_equal(result.convert_objects(), data)
def test_apply_attach_name(self):
result = self.frame.apply(lambda x: x.name)
@@ -6484,11 +7007,16 @@ def test_get_X_columns(self):
['a', 'e']))
def test_get_numeric_data(self):
- df = DataFrame({'a': 1., 'b': 2, 'c': 'foo'},
+
+ #df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', 'd' : np.array(1.*10.,dtype='float32'), 'e' : np.array(1*10,dtype='int32')},
+ # index=np.arange(10))
+ df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', 'd' : np.array([1.]*10,dtype='float32'),
+ 'e' : np.array([1]*10,dtype='int32'),
+ 'f' : np.array([1]*10,dtype='int16')},
index=np.arange(10))
result = df._get_numeric_data()
- expected = df.ix[:, ['a', 'b']]
+ expected = df.ix[:, ['a', 'b','d','e','f']]
assert_frame_equal(result, expected)
only_obj = df.ix[:, ['c']]
@@ -6500,7 +7028,8 @@ def test_count(self):
f = lambda s: notnull(s).sum()
self._check_stat_op('count', f,
has_skipna=False,
- has_numeric_only=True)
+ has_numeric_only=True,
+ check_dtypes=False)
# corner case
frame = DataFrame()
@@ -6529,6 +7058,11 @@ def test_count(self):
def test_sum(self):
self._check_stat_op('sum', np.sum, has_numeric_only=True)
+ def test_sum_mixed_numeric(self):
+ raise nose.SkipTest
+ # mixed types
+ self._check_stat_op('sum', np.sum, frame = self.mixed_float, has_numeric_only=True)
+
def test_stat_operators_attempt_obj_array(self):
data = {
'a': [-0.00049987540199591344, -0.0016467257772919831,
@@ -6679,7 +7213,7 @@ def alt(x):
assert_series_equal(df.kurt(), df.kurt(level=0).xs('bar'))
def _check_stat_op(self, name, alternative, frame=None, has_skipna=True,
- has_numeric_only=False):
+ has_numeric_only=False, check_dtypes=True):
if frame is None:
frame = self.frame
# set some NAs
@@ -6713,6 +7247,12 @@ def wrapper(x):
assert_series_equal(result1, frame.apply(skipna_wrapper, axis=1),
check_dtype=False)
+ # check dtypes
+ if check_dtypes:
+ lcd_dtype = frame.values.dtype
+ self.assert_(lcd_dtype == result0.dtype)
+ self.assert_(lcd_dtype == result1.dtype)
+
# result = f(axis=1)
# comp = frame.apply(alternative, axis=1).reindex(result.index)
# assert_series_equal(result, comp)
@@ -6788,7 +7328,7 @@ def wrapper(x):
return np.nan
return np.median(x)
- self._check_stat_op('median', wrapper, frame=self.intframe)
+ self._check_stat_op('median', wrapper, frame=self.intframe, check_dtypes=False)
def test_quantile(self):
from pandas.compat.scipy import scoreatpercentile
@@ -6856,6 +7396,11 @@ def test_cumprod(self):
df.cumprod(0)
df.cumprod(1)
+    # int32
+ df = self.tsframe.fillna(0).astype(np.int32)
+ df.cumprod(0)
+ df.cumprod(1)
+
def test_rank(self):
from pandas.compat.scipy import rankdata
@@ -7367,6 +7912,40 @@ def test_as_matrix_numeric_cols(self):
values = self.frame.as_matrix(['A', 'B', 'C', 'D'])
self.assert_(values.dtype == np.float64)
+ def test_as_matrix_lcd(self):
+
+ # mixed lcd
+ values = self.mixed_float.as_matrix(['A', 'B', 'C', 'D'])
+ self.assert_(values.dtype == np.float64)
+
+ values = self.mixed_float.as_matrix(['A', 'B', 'C' ])
+ self.assert_(values.dtype == np.float32)
+
+ values = self.mixed_float.as_matrix(['C'])
+ self.assert_(values.dtype == np.float16)
+
+ values = self.mixed_int.as_matrix(['A','B','C','D'])
+ self.assert_(values.dtype == np.uint64)
+
+ values = self.mixed_int.as_matrix(['A','D'])
+ self.assert_(values.dtype == np.int64)
+
+ # guess all ints are cast to uints....
+ values = self.mixed_int.as_matrix(['A','B','C'])
+ self.assert_(values.dtype == np.uint64)
+
+ values = self.mixed_int.as_matrix(['A','C'])
+ self.assert_(values.dtype == np.int32)
+
+ values = self.mixed_int.as_matrix(['C','D'])
+ self.assert_(values.dtype == np.int64)
+
+ values = self.mixed_int.as_matrix(['A'])
+ self.assert_(values.dtype == np.int32)
+
+ values = self.mixed_int.as_matrix(['C'])
+ self.assert_(values.dtype == np.uint8)
+
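The lowest-common-dtype expectations above roughly track numpy's own promotion rules; a quick sketch using numpy's `promote_types` (not pandas internals, which may choose differently) shows why some mixes widen and why signed/unsigned 64-bit ints have no common integer type:

```python
import numpy as np

# Pairwise promotion as numpy computes it (pandas' lcd logic can differ):
assert np.promote_types(np.float32, np.float16) == np.dtype('float32')
assert np.promote_types(np.int32, np.int16) == np.dtype('int32')
assert np.promote_types(np.int32, np.uint8) == np.dtype('int32')

# No integer dtype can hold both int64 and uint64, so numpy widens to
# float64 -- whereas the test above expects pandas to pick uint64.
assert np.promote_types(np.int64, np.uint64) == np.dtype('float64')
```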
def test_constructor_frame_copy(self):
cop = DataFrame(self.frame, copy=True)
cop['A'] = 5
@@ -7404,6 +7983,10 @@ def test_cast_internals(self):
expected = DataFrame(self.frame._series, dtype=int)
assert_frame_equal(casted, expected)
+ casted = DataFrame(self.frame._data, dtype=np.int32)
+ expected = DataFrame(self.frame._series, dtype=np.int32)
+ assert_frame_equal(casted, expected)
+
def test_consolidate(self):
self.frame['E'] = 7.
consolidated = self.frame.consolidate()
@@ -7475,7 +8058,7 @@ def test_xs_view(self):
def test_boolean_indexing(self):
idx = range(3)
- cols = range(3)
+ cols = ['A','B','C']
df1 = DataFrame(index=idx, columns=cols,
data=np.array([[0.0, 0.5, 1.0],
[1.5, 2.0, 2.5],
@@ -7512,15 +8095,29 @@ def test_take(self):
# mixed-dtype
#----------------------------------------
order = [4, 1, 2, 0, 3]
+ for df in [self.mixed_frame]:
- result = self.mixed_frame.take(order, axis=0)
- expected = self.mixed_frame.reindex(self.mixed_frame.index.take(order))
- assert_frame_equal(result, expected)
+ result = df.take(order, axis=0)
+ expected = df.reindex(df.index.take(order))
+ assert_frame_equal(result, expected)
+
+ # axis = 1
+ result = df.take(order, axis=1)
+ expected = df.ix[:, ['foo', 'B', 'C', 'A', 'D']]
+ assert_frame_equal(result, expected)
- # axis = 1
- result = self.mixed_frame.take(order, axis=1)
- expected = self.mixed_frame.ix[:, ['foo', 'B', 'C', 'A', 'D']]
- assert_frame_equal(result, expected)
+ # by dtype
+ order = [1, 2, 0, 3]
+ for df in [self.mixed_float,self.mixed_int]:
+
+ result = df.take(order, axis=0)
+ expected = df.reindex(df.index.take(order))
+ assert_frame_equal(result, expected)
+
+ # axis = 1
+ result = df.take(order, axis=1)
+ expected = df.ix[:, ['B', 'C', 'A', 'D']]
+ assert_frame_equal(result, expected)
def test_iterkv_names(self):
for k, v in self.mixed_frame.iterkv():
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index 114697bc5c8cd..54d29263b2308 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -62,6 +62,13 @@ def setUp(self):
'C': np.random.randn(8),
'D': np.random.randn(8)})
+ self.df_mixed_floats = DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
+ 'foo', 'bar', 'foo', 'foo'],
+ 'B': ['one', 'one', 'two', 'three',
+ 'two', 'two', 'one', 'three'],
+ 'C': np.random.randn(8),
+ 'D': np.array(np.random.randn(8),dtype='float32')})
+
index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
['one', 'two', 'three']],
labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
@@ -155,6 +162,25 @@ def test_first_last_nth(self):
self.assert_(com.isnull(grouped['B'].last()['foo']))
self.assert_(com.isnull(grouped['B'].nth(0)['foo']))
+ def test_first_last_nth_dtypes(self):
+ # tests for first / last / nth
+
+ grouped = self.df_mixed_floats.groupby('A')
+ first = grouped.first()
+ expected = self.df_mixed_floats.ix[[1, 0], ['B', 'C', 'D']]
+ expected.index = ['bar', 'foo']
+ assert_frame_equal(first, expected)
+
+ last = grouped.last()
+ expected = self.df_mixed_floats.ix[[5, 7], ['B', 'C', 'D']]
+ expected.index = ['bar', 'foo']
+ assert_frame_equal(last, expected)
+
+ nth = grouped.nth(1)
+ expected = self.df_mixed_floats.ix[[3, 2], ['B', 'C', 'D']]
+ expected.index = ['bar', 'foo']
+ assert_frame_equal(nth, expected)
+
def test_grouper_iter(self):
self.assertEqual(sorted(self.df.groupby('A').grouper), ['bar', 'foo'])
@@ -478,16 +504,30 @@ def test_transform_function_aliases(self):
def test_with_na(self):
index = Index(np.arange(10))
- values = Series(np.ones(10), index)
- labels = Series([nan, 'foo', 'bar', 'bar', nan, nan, 'bar',
- 'bar', nan, 'foo'], index=index)
- grouped = values.groupby(labels)
- agged = grouped.agg(len)
- expected = Series([4, 2], index=['bar', 'foo'])
+ for dtype in ['float64','float32','int64','int32','int16','int8']:
+ values = Series(np.ones(10), index, dtype=dtype)
+ labels = Series([nan, 'foo', 'bar', 'bar', nan, nan, 'bar',
+ 'bar', nan, 'foo'], index=index)
+
+
+ # this SHOULD be an int
+ grouped = values.groupby(labels)
+ agged = grouped.agg(len)
+ expected = Series([4, 2], index=['bar', 'foo'])
+
+ assert_series_equal(agged, expected, check_dtype=False)
+ #self.assert_(issubclass(agged.dtype.type, np.integer))
- assert_series_equal(agged, expected, check_dtype=False)
- self.assert_(issubclass(agged.dtype.type, np.integer))
+            # explicitly return a float from my function
+ def f(x):
+ return float(len(x))
+
+ agged = grouped.agg(f)
+ expected = Series([4, 2], index=['bar', 'foo'])
+
+ assert_series_equal(agged, expected, check_dtype=False)
+ self.assert_(issubclass(agged.dtype.type, np.dtype(dtype).type))
def test_attr_wrapper(self):
grouped = self.ts.groupby(lambda x: x.weekday())
@@ -1596,6 +1636,7 @@ def test_series_grouper_noncontig_index(self):
grouped.agg(f)
def test_convert_objects_leave_decimal_alone(self):
+
from decimal import Decimal
s = Series(range(5))
diff --git a/pandas/tests/test_internals.py b/pandas/tests/test_internals.py
index 9deddb802d1bf..f39a6d3b3feec 100644
--- a/pandas/tests/test_internals.py
+++ b/pandas/tests/test_internals.py
@@ -20,20 +20,20 @@ def assert_block_equal(left, right):
assert(left.ref_items.equals(right.ref_items))
-def get_float_mat(n, k):
- return np.repeat(np.atleast_2d(np.arange(k, dtype=float)), n, axis=0)
+def get_float_mat(n, k, dtype):
+ return np.repeat(np.atleast_2d(np.arange(k, dtype=dtype)), n, axis=0)
TEST_COLS = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
N = 10
-def get_float_ex(cols=['a', 'c', 'e']):
- floats = get_float_mat(N, len(cols)).T
+def get_float_ex(cols=['a', 'c', 'e'], dtype = np.float_):
+ floats = get_float_mat(N, len(cols), dtype = dtype).T
return make_block(floats, cols, TEST_COLS)
def get_complex_ex(cols=['h']):
- complexes = (get_float_mat(N, 1).T * 1j).astype(np.complex128)
+ complexes = (get_float_mat(N, 1, dtype = np.float_).T * 1j).astype(np.complex128)
return make_block(complexes, cols, TEST_COLS)
@@ -49,13 +49,8 @@ def get_bool_ex(cols=['f']):
return make_block(mat.T, cols, TEST_COLS)
-def get_int_ex(cols=['g']):
- mat = randn(N, 1).astype(int)
- return make_block(mat.T, cols, TEST_COLS)
-
-
-def get_int32_ex(cols):
- mat = randn(N, 1).astype(np.int32)
+def get_int_ex(cols=['g'], dtype = np.int_):
+ mat = randn(N, 1).astype(dtype)
return make_block(mat.T, cols, TEST_COLS)
@@ -63,6 +58,16 @@ def get_dt_ex(cols=['h']):
mat = randn(N, 1).astype(int).astype('M8[ns]')
return make_block(mat.T, cols, TEST_COLS)
+def create_blockmanager(blocks):
+ l = []
+ for b in blocks:
+ l.extend(b.items)
+ items = Index(l)
+ for b in blocks:
+ b.ref_items = items
+
+ index_sz = blocks[0].values.shape[1]
+ return BlockManager(blocks, [items, np.arange(index_sz)])
class TestBlock(unittest.TestCase):
@@ -76,8 +81,8 @@ def setUp(self):
self.int_block = get_int_ex()
def test_constructor(self):
- int32block = get_int32_ex(['a'])
- self.assert_(int32block.dtype == np.int64)
+ int32block = get_int_ex(['a'],dtype = np.int32)
+ self.assert_(int32block.dtype == np.int32)
def test_pickle(self):
import pickle
@@ -235,12 +240,7 @@ def test_attrs(self):
def test_is_mixed_dtype(self):
self.assert_(self.mgr.is_mixed_dtype())
- items = Index(['a', 'b'])
- blocks = [get_bool_ex(['a']), get_bool_ex(['b'])]
- for b in blocks:
- b.ref_items = items
-
- mgr = BlockManager(blocks, [items, np.arange(N)])
+ mgr = create_blockmanager([get_bool_ex(['a']), get_bool_ex(['b'])])
self.assert_(not mgr.is_mixed_dtype())
def test_is_indexed_like(self):
@@ -254,9 +254,12 @@ def test_block_id_vector_item_dtypes(self):
assert_almost_equal(expected, result)
result = self.mgr.item_dtypes
+
+        # as the platform may not match exactly, compare dtype kinds only
expected = ['float64', 'object', 'float64', 'object', 'float64',
'bool', 'int64', 'complex128']
- self.assert_(np.array_equal(result, expected))
+ for e, r in zip(expected, result):
+            self.assert_(np.dtype(e).kind == np.dtype(r).kind)
def test_duplicate_item_failure(self):
items = Index(['a', 'a'])
@@ -315,7 +318,7 @@ def test_set_change_dtype(self):
self.assert_(mgr2.get('baz').dtype == np.object_)
mgr2.set('quux', randn(N).astype(int))
- self.assert_(mgr2.get('quux').dtype == np.int64)
+ self.assert_(mgr2.get('quux').dtype == np.int_)
mgr2.set('quux', randn(N))
self.assert_(mgr2.get('quux').dtype == np.float_)
@@ -326,36 +329,110 @@ def test_copy(self):
for cp_blk, blk in zip(shallow.blocks, self.mgr.blocks):
self.assert_(cp_blk.values is blk.values)
- def test_as_matrix(self):
- pass
+ def test_as_matrix_float(self):
+
+ mgr = create_blockmanager([get_float_ex(['c'],np.float32), get_float_ex(['d'],np.float16), get_float_ex(['e'],np.float64)])
+ self.assert_(mgr.as_matrix().dtype == np.float64)
+
+ mgr = create_blockmanager([get_float_ex(['c'],np.float32), get_float_ex(['d'],np.float16)])
+ self.assert_(mgr.as_matrix().dtype == np.float32)
def test_as_matrix_int_bool(self):
- items = Index(['a', 'b'])
- blocks = [get_bool_ex(['a']), get_bool_ex(['b'])]
- for b in blocks:
- b.ref_items = items
- index_sz = blocks[0].values.shape[1]
- mgr = BlockManager(blocks, [items, np.arange(index_sz)])
+ mgr = create_blockmanager([get_bool_ex(['a']), get_bool_ex(['b'])])
self.assert_(mgr.as_matrix().dtype == np.bool_)
- blocks = [get_int_ex(['a']), get_int_ex(['b'])]
- for b in blocks:
- b.ref_items = items
-
- mgr = BlockManager(blocks, [items, np.arange(index_sz)])
+ mgr = create_blockmanager([get_int_ex(['a'],np.int64), get_int_ex(['b'],np.int64), get_int_ex(['c'],np.int32), get_int_ex(['d'],np.int16), get_int_ex(['e'],np.uint8) ])
self.assert_(mgr.as_matrix().dtype == np.int64)
- def test_as_matrix_datetime(self):
- items = Index(['h', 'g'])
- blocks = [get_dt_ex(['h']), get_dt_ex(['g'])]
- for b in blocks:
- b.ref_items = items
+ mgr = create_blockmanager([get_int_ex(['c'],np.int32), get_int_ex(['d'],np.int16), get_int_ex(['e'],np.uint8) ])
+ self.assert_(mgr.as_matrix().dtype == np.int32)
- index_sz = blocks[0].values.shape[1]
- mgr = BlockManager(blocks, [items, np.arange(index_sz)])
+ def test_as_matrix_datetime(self):
+ mgr = create_blockmanager([get_dt_ex(['h']), get_dt_ex(['g'])])
self.assert_(mgr.as_matrix().dtype == 'M8[ns]')
+ def test_astype(self):
+
+ # coerce all
+ mgr = create_blockmanager([get_float_ex(['c'],np.float32), get_float_ex(['d'],np.float16), get_float_ex(['e'],np.float64)])
+
+ for t in ['float16','float32','float64','int32','int64']:
+ tmgr = mgr.astype(t)
+ self.assert_(tmgr.as_matrix().dtype == np.dtype(t))
+
+ # mixed
+ mgr = create_blockmanager([get_obj_ex(['a','b']),get_bool_ex(['c']),get_dt_ex(['d']),get_float_ex(['e'],np.float32), get_float_ex(['f'],np.float16), get_float_ex(['g'],np.float64)])
+ for t in ['float16','float32','float64','int32','int64']:
+ tmgr = mgr.astype(t, raise_on_error = False).get_numeric_data()
+ self.assert_(tmgr.as_matrix().dtype == np.dtype(t))
+
+ def test_convert(self):
+
+ def _compare(old_mgr, new_mgr):
+ """ compare the blocks, numeric compare ==, object don't """
+ old_blocks = set(old_mgr.blocks)
+ new_blocks = set(new_mgr.blocks)
+ self.assert_(len(old_blocks) == len(new_blocks))
+
+ # compare non-numeric
+ for b in old_blocks:
+ found = False
+ for nb in new_blocks:
+ if (b.values == nb.values).all():
+ found = True
+ break
+ self.assert_(found == True)
+
+ for b in new_blocks:
+ found = False
+ for ob in old_blocks:
+ if (b.values == ob.values).all():
+ found = True
+ break
+ self.assert_(found == True)
+
+ # noops
+ mgr = create_blockmanager([get_int_ex(['f']), get_float_ex(['g'])])
+ new_mgr = mgr.convert()
+ _compare(mgr,new_mgr)
+
+ mgr = create_blockmanager([get_obj_ex(['a','b']), get_int_ex(['f']), get_float_ex(['g'])])
+ new_mgr = mgr.convert()
+ _compare(mgr,new_mgr)
+
+        # there could actually be multiple dtypes resulting
+ def _check(new_mgr,block_type, citems):
+ items = set()
+ for b in new_mgr.blocks:
+ if isinstance(b,block_type):
+ for i in list(b.items):
+ items.add(i)
+ self.assert_(items == set(citems))
+
+ # convert
+ mat = np.empty((N, 3), dtype=object)
+ mat[:, 0] = '1'
+ mat[:, 1] = '2.'
+ mat[:, 2] = 'foo'
+ b = make_block(mat.T, ['a','b','foo'], TEST_COLS)
+
+ mgr = create_blockmanager([b, get_int_ex(['f']), get_float_ex(['g'])])
+ new_mgr = mgr.convert(convert_numeric = True)
+
+ _check(new_mgr,FloatBlock,['b','g'])
+ _check(new_mgr,IntBlock,['a','f'])
+
+ mgr = create_blockmanager([b, get_int_ex(['f'],np.int32), get_bool_ex(['bool']), get_dt_ex(['dt']),
+ get_int_ex(['i'],np.int64), get_float_ex(['g'],np.float64), get_float_ex(['h'],np.float16)])
+ new_mgr = mgr.convert(convert_numeric = True)
+
+ _check(new_mgr,FloatBlock,['b','g','h'])
+ _check(new_mgr,IntBlock,['a','f','i'])
+ _check(new_mgr,ObjectBlock,['foo'])
+ _check(new_mgr,BoolBlock,['bool'])
+ _check(new_mgr,DatetimeBlock,['dt'])
+
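The convert test above turns columns of `'1'` / `'2.'` / `'foo'` strings into int, float, and object blocks respectively; a pure-python sketch of that inference (a hypothetical helper mirroring the tested `convert_numeric=True` semantics, not pandas code):

```python
def infer_numeric(values):
    # Coerce a list of strings to int or float where possible; leave
    # non-numeric columns untouched (object).
    try:
        as_float = [float(v) for v in values]
    except (TypeError, ValueError):
        return values
    # '1' -> int, but '2.' -> float: a decimal point forces float.
    if all(f.is_integer() and '.' not in str(v)
           for f, v in zip(as_float, values)):
        return [int(f) for f in as_float]
    return as_float

assert infer_numeric(['1', '1']) == [1, 1]
assert infer_numeric(['2.', '2.']) == [2.0, 2.0]
assert infer_numeric(['foo', 'bar']) == ['foo', 'bar']
```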
def test_xs(self):
pass
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index a4df141fefef9..87b820faa3dc8 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -616,12 +616,22 @@ def test_sortlevel(self):
assert_frame_equal(rs, self.frame.sortlevel(0))
def test_sortlevel_large_cardinality(self):
- # #2684
+
+ # #2684 (int64)
+ index = MultiIndex.from_arrays([np.arange(4000)]*3)
+ df = DataFrame(np.random.randn(4000), index=index, dtype = np.int64)
+
+ # it works!
+ result = df.sortlevel(0)
+ self.assertTrue(result.index.lexsort_depth == 3)
+
+ # #2684 (int32)
index = MultiIndex.from_arrays([np.arange(4000)]*3)
- df = DataFrame(np.random.randn(4000), index=index)
+ df = DataFrame(np.random.randn(4000), index=index, dtype = np.int32)
# it works!
result = df.sortlevel(0)
+ self.assert_((result.dtypes.values == df.dtypes.values).all() == True)
self.assertTrue(result.index.lexsort_depth == 3)
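The `lexsort_depth == 3` assertion rests on the index being fully lexicographically sorted after `sortlevel`; the underlying primitive is `np.lexsort`, e.g.:

```python
import numpy as np

# np.lexsort sorts by the LAST key first; here rows end up ordered by (a, b).
a = np.array([2, 0, 1, 0])
b = np.array([1, 3, 2, 0])
order = np.lexsort((b, a))   # primary key a, secondary key b
assert list(a[order]) == [0, 0, 1, 2]
assert list(b[order]) == [0, 3, 2, 1]
```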
def test_delevel_infer_dtype(self):
@@ -723,7 +733,7 @@ def test_count_level_corner(self):
df = self.frame[:0]
result = df.count(level=0)
expected = DataFrame({}, index=s.index.levels[0],
- columns=df.columns).fillna(0).astype(int)
+ columns=df.columns).fillna(0).astype(np.int64)
assert_frame_equal(result, expected)
def test_unstack(self):
@@ -734,6 +744,9 @@ def test_unstack(self):
# test that ints work
unstacked = self.ymd.astype(int).unstack()
+ # test that int32 work
+ unstacked = self.ymd.astype(np.int32).unstack()
+
def test_unstack_multiple_no_empty_columns(self):
index = MultiIndex.from_tuples([(0, 'foo', 0), (0, 'bar', 0),
(1, 'baz', 1), (1, 'qux', 1)])
diff --git a/pandas/tests/test_ndframe.py b/pandas/tests/test_ndframe.py
index e017bf07039d7..0c004884c5559 100644
--- a/pandas/tests/test_ndframe.py
+++ b/pandas/tests/test_ndframe.py
@@ -24,7 +24,10 @@ def test_ndim(self):
def test_astype(self):
casted = self.ndf.astype(int)
- self.assert_(casted.values.dtype == np.int64)
+ self.assert_(casted.values.dtype == np.int_)
+
+ casted = self.ndf.astype(np.int32)
+ self.assert_(casted.values.dtype == np.int32)
if __name__ == '__main__':
import nose
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index 235b3e153574c..07a02f18d8337 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -418,7 +418,7 @@ def test_setitem(self):
# scalar
self.panel['ItemG'] = 1
self.panel['ItemE'] = True
- self.assert_(self.panel['ItemG'].values.dtype == np.int64)
+ self.assert_(self.panel['ItemG'].values.dtype == np.int_)
self.assert_(self.panel['ItemE'].values.dtype == np.bool_)
# object dtype
@@ -782,6 +782,13 @@ def test_constructor_cast(self):
assert_almost_equal(casted.values, exp_values)
assert_almost_equal(casted2.values, exp_values)
+ casted = Panel(zero_filled._data, dtype=np.int32)
+ casted2 = Panel(zero_filled.values, dtype=np.int32)
+
+ exp_values = zero_filled.values.astype(np.int32)
+ assert_almost_equal(casted.values, exp_values)
+ assert_almost_equal(casted2.values, exp_values)
+
# can't cast
data = [[['foo', 'bar', 'baz']]]
self.assertRaises(ValueError, Panel, data, dtype=float)
@@ -798,6 +805,30 @@ def test_constructor_observe_dtype(self):
minor_axis=range(3), dtype='O')
self.assert_(panel.values.dtype == np.object_)
+ def test_constructor_dtypes(self):
+ # GH #797
+
+ def _check_dtype(panel, dtype):
+ for i in panel.items:
+ self.assert_(panel[i].values.dtype.name == dtype)
+
+ # only nan holding types allowed here
+ for dtype in ['float64','float32','object']:
+ panel = Panel(items=range(2),major_axis=range(10),minor_axis=range(5),dtype=dtype)
+ _check_dtype(panel,dtype)
+
+ for dtype in ['float64','float32','int64','int32','object']:
+ panel = Panel(np.array(np.random.randn(2,10,5),dtype=dtype),items=range(2),major_axis=range(10),minor_axis=range(5),dtype=dtype)
+ _check_dtype(panel,dtype)
+
+ for dtype in ['float64','float32','int64','int32','object']:
+ panel = Panel(np.array(np.random.randn(2,10,5),dtype='O'),items=range(2),major_axis=range(10),minor_axis=range(5),dtype=dtype)
+ _check_dtype(panel,dtype)
+
+ for dtype in ['float64','float32','int64','int32','object']:
+ panel = Panel(np.random.randn(2,10,5),items=range(2),major_axis=range(10),minor_axis=range(5),dtype=dtype)
+ _check_dtype(panel,dtype)
+
def test_consolidate(self):
self.assert_(self.panel._data.is_consolidated())
@@ -845,6 +876,11 @@ def test_ctor_dict(self):
for k, v in dcasted.iteritems()))
assert_panel_equal(result, expected)
+ result = Panel(dcasted, dtype=np.int32)
+ expected = Panel(dict((k, v.astype(np.int32))
+ for k, v in dcasted.iteritems()))
+ assert_panel_equal(result, expected)
+
def test_constructor_dict_mixed(self):
data = dict((k, v.values) for k, v in self.panel.iterkv())
result = Panel(data)
diff --git a/pandas/tests/test_panel4d.py b/pandas/tests/test_panel4d.py
index e0180f475ca45..87bfba7c55cce 100644
--- a/pandas/tests/test_panel4d.py
+++ b/pandas/tests/test_panel4d.py
@@ -358,7 +358,7 @@ def test_setitem(self):
# scalar
self.panel4d['lG'] = 1
self.panel4d['lE'] = True
- self.assert_(self.panel4d['lG'].values.dtype == np.int64)
+ self.assert_(self.panel4d['lG'].values.dtype == np.int_)
self.assert_(self.panel4d['lE'].values.dtype == np.bool_)
# object dtype
@@ -592,6 +592,13 @@ def test_constructor_cast(self):
assert_almost_equal(casted.values, exp_values)
assert_almost_equal(casted2.values, exp_values)
+ casted = Panel4D(zero_filled._data, dtype=np.int32)
+ casted2 = Panel4D(zero_filled.values, dtype=np.int32)
+
+ exp_values = zero_filled.values.astype(np.int32)
+ assert_almost_equal(casted.values, exp_values)
+ assert_almost_equal(casted2.values, exp_values)
+
# can't cast
data = [[['foo', 'bar', 'baz']]]
self.assertRaises(ValueError, Panel, data, dtype=float)
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 896c7dc34901f..3bae492d5cb81 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -137,7 +137,7 @@ def test_multilevel_name_print(self):
"qux one 7",
" two 8",
" three 9",
- "Name: sth"]
+ "Name: sth, Dtype: int64"]
expected = "\n".join(expected)
self.assertEquals(repr(s), expected)
@@ -2705,6 +2705,64 @@ def test_apply_dont_convert_dtype(self):
result = s.apply(f, convert_dtype=False)
self.assert_(result.dtype == object)
+ def test_convert_objects(self):
+
+ s = Series([1., 2, 3],index=['a','b','c'])
+ result = s.convert_objects(convert_dates=False,convert_numeric=True)
+ assert_series_equal(s,result)
+
+ # force numeric conversion
+ r = s.copy().astype('O')
+ r['a'] = '1'
+ result = r.convert_objects(convert_dates=False,convert_numeric=True)
+ assert_series_equal(s,result)
+
+ r = s.copy().astype('O')
+ r['a'] = '1.'
+ result = r.convert_objects(convert_dates=False,convert_numeric=True)
+ assert_series_equal(s,result)
+
+ r = s.copy().astype('O')
+ r['a'] = 'garbled'
+ expected = s.copy()
+ expected['a'] = np.nan
+ result = r.convert_objects(convert_dates=False,convert_numeric=True)
+ assert_series_equal(expected,result)
+
+ # dates
+ s = Series([datetime(2001,1,1,0,0), datetime(2001,1,2,0,0), datetime(2001,1,3,0,0) ])
+ s2 = Series([datetime(2001,1,1,0,0), datetime(2001,1,2,0,0), datetime(2001,1,3,0,0), 'foo', 1.0, 1, Timestamp('20010104'), '20010105'],dtype='O')
+
+ result = s.convert_objects(convert_dates=True,convert_numeric=False)
+ expected = Series([Timestamp('20010101'),Timestamp('20010102'),Timestamp('20010103')],dtype='M8[ns]')
+ assert_series_equal(expected,result)
+
+ result = s.convert_objects(convert_dates='coerce',convert_numeric=False)
+ assert_series_equal(expected,result)
+ result = s.convert_objects(convert_dates='coerce',convert_numeric=True)
+ assert_series_equal(expected,result)
+
+ expected = Series([Timestamp('20010101'),Timestamp('20010102'),Timestamp('20010103'),lib.NaT,lib.NaT,lib.NaT,Timestamp('20010104'),Timestamp('20010105')],dtype='M8[ns]')
+ result = s2.convert_objects(convert_dates='coerce',convert_numeric=False)
+ assert_series_equal(expected,result)
+ result = s2.convert_objects(convert_dates='coerce',convert_numeric=True)
+ assert_series_equal(expected,result)
+
+        # preserve all-NaNs (if convert_dates='coerce')
+ s = Series(['foo','bar',1,1.0],dtype='O')
+ result = s.convert_objects(convert_dates='coerce',convert_numeric=False)
+ assert_series_equal(result,s)
+
+        # preserve if non-object
+ s = Series([1],dtype='float32')
+ result = s.convert_objects(convert_dates='coerce',convert_numeric=False)
+ assert_series_equal(result,s)
+
+ #r = s.copy()
+ #r[0] = np.nan
+ #result = r.convert_objects(convert_dates=True,convert_numeric=False)
+ #self.assert_(result.dtype == 'M8[ns]')
+
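The coercion cases above ('1' and '1.' convert, 'garbled' becomes NaN) can be mimicked with a small helper -- a sketch of the tested `convert_numeric=True` semantics, not the pandas implementation:

```python
import numpy as np

def coerce_numeric(values):
    # Parseable values -> float, anything garbled -> NaN
    # (hypothetical helper, not part of pandas).
    out = []
    for v in values:
        try:
            out.append(float(v))
        except (TypeError, ValueError):
            out.append(np.nan)
    return out

result = coerce_numeric(['1', '1.', 'garbled', 2])
assert result[0] == 1.0 and result[1] == 1.0 and result[3] == 2.0
assert np.isnan(result[2])
```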
def test_apply_args(self):
s = Series(['foo,bar'])
diff --git a/pandas/tests/test_tseries.py b/pandas/tests/test_tseries.py
index 7e5341fd5b311..eaeb3325685ec 100644
--- a/pandas/tests/test_tseries.py
+++ b/pandas/tests/test_tseries.py
@@ -458,7 +458,9 @@ def test_generate_bins(self):
values, [-3, -1], 'right')
def test_group_bin_functions(self):
- funcs = ['add', 'mean', 'prod', 'min', 'max', 'var']
+
+ dtypes = ['float32','float64']
+ funcs = ['add', 'mean', 'prod', 'min', 'max', 'var']
np_funcs = {
'add': np.sum,
@@ -470,71 +472,82 @@ def test_group_bin_functions(self):
}
for fname in funcs:
- args = [getattr(algos, 'group_%s' % fname),
- getattr(algos, 'group_%s_bin' % fname),
- np_funcs[fname]]
- self._check_versions(*args)
-
- def _check_versions(self, irr_func, bin_func, np_func):
- obj = self.obj
+ for d in dtypes:
+ check_less_precise = False
+ if d == 'float32':
+ check_less_precise = True
+ args = [getattr(algos, 'group_%s_%s' % (fname,d)),
+ getattr(algos, 'group_%s_bin_%s' % (fname,d)),
+ np_funcs[fname],
+ d,
+ check_less_precise]
+ self._check_versions(*args)
+
+ def _check_versions(self, irr_func, bin_func, np_func, dtype, check_less_precise):
+ obj = self.obj.astype(dtype)
cts = np.zeros(3, dtype=np.int64)
- exp = np.zeros((3, 1), np.float64)
+ exp = np.zeros((3, 1), dtype)
irr_func(exp, cts, obj, self.labels)
# bin-based version
bins = np.array([3, 6], dtype=np.int64)
- out = np.zeros((3, 1), np.float64)
+ out = np.zeros((3, 1), dtype)
counts = np.zeros(len(out), dtype=np.int64)
bin_func(out, counts, obj, bins)
- assert_almost_equal(out, exp)
+ assert_almost_equal(out, exp, check_less_precise=check_less_precise)
bins = np.array([3, 9, 10], dtype=np.int64)
- out = np.zeros((3, 1), np.float64)
+ out = np.zeros((3, 1), dtype)
counts = np.zeros(len(out), dtype=np.int64)
bin_func(out, counts, obj, bins)
exp = np.array([np_func(obj[:3]), np_func(obj[3:9]),
np_func(obj[9:])],
- dtype=np.float64)
- assert_almost_equal(out.squeeze(), exp)
+ dtype=dtype)
+ assert_almost_equal(out.squeeze(), exp, check_less_precise=check_less_precise)
# duplicate bins
bins = np.array([3, 6, 10, 10], dtype=np.int64)
- out = np.zeros((4, 1), np.float64)
+ out = np.zeros((4, 1), dtype)
counts = np.zeros(len(out), dtype=np.int64)
bin_func(out, counts, obj, bins)
exp = np.array([np_func(obj[:3]), np_func(obj[3:6]),
np_func(obj[6:10]), np.nan],
- dtype=np.float64)
- assert_almost_equal(out.squeeze(), exp)
+ dtype=dtype)
+ assert_almost_equal(out.squeeze(), exp, check_less_precise=check_less_precise)
def test_group_ohlc():
- obj = np.random.randn(20)
- bins = np.array([6, 12], dtype=np.int64)
- out = np.zeros((3, 4), np.float64)
- counts = np.zeros(len(out), dtype=np.int64)
+ def _check(dtype):
+ obj = np.array(np.random.randn(20),dtype=dtype)
- algos.group_ohlc(out, counts, obj[:, None], bins)
+ bins = np.array([6, 12], dtype=np.int64)
+ out = np.zeros((3, 4), dtype)
+ counts = np.zeros(len(out), dtype=np.int64)
+
+ func = getattr(algos,'group_ohlc_%s' % dtype)
+ func(out, counts, obj[:, None], bins)
- def _ohlc(group):
- if isnull(group).all():
- return np.repeat(nan, 4)
- return [group[0], group.max(), group.min(), group[-1]]
+ def _ohlc(group):
+ if isnull(group).all():
+ return np.repeat(nan, 4)
+ return [group[0], group.max(), group.min(), group[-1]]
- expected = np.array([_ohlc(obj[:6]), _ohlc(obj[6:12]),
- _ohlc(obj[12:])])
+ expected = np.array([_ohlc(obj[:6]), _ohlc(obj[6:12]),
+ _ohlc(obj[12:])])
- assert_almost_equal(out, expected)
- assert_almost_equal(counts, [6, 6, 8])
+ assert_almost_equal(out, expected)
+ assert_almost_equal(counts, [6, 6, 8])
- obj[:6] = nan
- algos.group_ohlc(out, counts, obj[:, None], bins)
- expected[0] = nan
- assert_almost_equal(out, expected)
+ obj[:6] = nan
+ func(out, counts, obj[:, None], bins)
+ expected[0] = nan
+ assert_almost_equal(out, expected)
+ _check('float32')
+ _check('float64')
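The `_ohlc` helper above mirrors the per-bin reduction the cython `group_ohlc_*` functions perform; a plain-numpy sketch of the same aggregation:

```python
import numpy as np

def ohlc(group):
    # open / high / low / close for one bin; all-NaN bins yield four NaNs.
    if np.isnan(group).all():
        return [np.nan] * 4
    return [group[0], group.max(), group.min(), group[-1]]

vals = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
assert ohlc(vals) == [3.0, 5.0, 1.0, 5.0]
```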
def test_try_parse_dates():
from dateutil.parser import parse
diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py
index c058580ab0f45..3adfb38e6144b 100644
--- a/pandas/tools/merge.py
+++ b/pandas/tools/merge.py
@@ -2,6 +2,7 @@
SQL-style merge routines
"""
+import itertools
import numpy as np
from pandas.core.categorical import Factor
@@ -658,7 +659,7 @@ def _prepare_blocks(self):
join_blocks = unit.get_upcasted_blocks()
type_map = {}
for blk in join_blocks:
- type_map.setdefault(type(blk), []).append(blk)
+ type_map.setdefault(blk.dtype, []).append(blk)
blockmaps.append((unit, type_map))
return blockmaps
@@ -985,7 +986,8 @@ def _prepare_blocks(self):
blockmaps = []
for data in reindexed_data:
data = data.consolidate()
- type_map = dict((type(blk), blk) for blk in data.blocks)
+
+ type_map = dict((blk.dtype, blk) for blk in data.blocks)
blockmaps.append(type_map)
return blockmaps, reindexed_data
diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py
index 47ab02d892c3f..8820d43975885 100644
--- a/pandas/tools/tests/test_merge.py
+++ b/pandas/tools/tests/test_merge.py
@@ -287,7 +287,7 @@ def test_join_index_mixed(self):
df1 = DataFrame({'A': 1., 'B': 2, 'C': 'foo', 'D': True},
index=np.arange(10),
columns=['A', 'B', 'C', 'D'])
- self.assert_(df1['B'].dtype == np.int64)
+ self.assert_(df1['B'].dtype == np.int)
self.assert_(df1['D'].dtype == np.bool_)
df2 = DataFrame({'A': 1., 'B': 2, 'C': 'foo', 'D': True},
@@ -422,23 +422,27 @@ def test_join_hierarchical_mixed(self):
self.assertTrue('b' in result)
def test_join_float64_float32(self):
- a = DataFrame(randn(10, 2), columns=['a', 'b'])
- b = DataFrame(randn(10, 1), columns=['c']).astype(np.float32)
- joined = a.join(b)
- expected = a.join(b.astype('f8'))
- assert_frame_equal(joined, expected)
- joined = b.join(a)
- assert_frame_equal(expected, joined.reindex(columns=['a', 'b', 'c']))
+ a = DataFrame(randn(10, 2), columns=['a', 'b'], dtype = np.float64)
+ b = DataFrame(randn(10, 1), columns=['c'], dtype = np.float32)
+ joined = a.join(b)
+ self.assert_(joined.dtypes['a'] == 'float64')
+ self.assert_(joined.dtypes['b'] == 'float64')
+ self.assert_(joined.dtypes['c'] == 'float32')
- a = np.random.randint(0, 5, 100)
- b = np.random.random(100).astype('Float64')
- c = np.random.random(100).astype('Float32')
+ a = np.random.randint(0, 5, 100).astype('int64')
+ b = np.random.random(100).astype('float64')
+ c = np.random.random(100).astype('float32')
df = DataFrame({'a': a, 'b': b, 'c': c})
- xpdf = DataFrame({'a': a, 'b': b, 'c': c.astype('Float64')})
- s = DataFrame(np.random.random(5).astype('f'), columns=['md'])
+ xpdf = DataFrame({'a': a, 'b': b, 'c': c })
+ s = DataFrame(np.random.random(5).astype('float32'), columns=['md'])
rs = df.merge(s, left_on='a', right_index=True)
- xp = xpdf.merge(s.astype('f8'), left_on='a', right_index=True)
+ self.assert_(rs.dtypes['a'] == 'int64')
+ self.assert_(rs.dtypes['b'] == 'float64')
+ self.assert_(rs.dtypes['c'] == 'float32')
+ self.assert_(rs.dtypes['md'] == 'float32')
+
+ xp = xpdf.merge(s, left_on='a', right_index=True)
assert_frame_equal(rs, xp)
def test_join_many_non_unique_index(self):
@@ -591,7 +595,7 @@ def test_intelligently_handle_join_key(self):
np.nan, np.nan]),
'rvalue': np.array([0, 1, 0, 1, 2, 2, 3, 4, 5])},
columns=['value', 'key', 'rvalue'])
- assert_frame_equal(joined, expected)
+ assert_frame_equal(joined, expected, check_dtype=False)
self.assert_(joined._data.is_consolidated())
@@ -801,7 +805,25 @@ def test_left_join_index_preserve_order(self):
left = DataFrame({'k1': [0, 1, 2] * 8,
'k2': ['foo', 'bar'] * 12,
- 'v': np.arange(24)})
+ 'v': np.array(np.arange(24),dtype=np.int64) })
+
+ index = MultiIndex.from_tuples([(2, 'bar'), (1, 'foo')])
+ right = DataFrame({'v2': [5, 7]}, index=index)
+
+ result = left.join(right, on=['k1', 'k2'])
+
+ expected = left.copy()
+ expected['v2'] = np.nan
+ expected['v2'][(expected.k1 == 2) & (expected.k2 == 'bar')] = 5
+ expected['v2'][(expected.k1 == 1) & (expected.k2 == 'foo')] = 7
+
+ tm.assert_frame_equal(result, expected)
+
+ # test join with multi dtypes blocks
+ left = DataFrame({'k1': [0, 1, 2] * 8,
+ 'k2': ['foo', 'bar'] * 12,
+ 'k3' : np.array([0, 1, 2]*8, dtype=np.float32),
+ 'v': np.array(np.arange(24),dtype=np.int32) })
index = MultiIndex.from_tuples([(2, 'bar'), (1, 'foo')])
right = DataFrame({'v2': [5, 7]}, index=index)
@@ -820,6 +842,33 @@ def test_left_join_index_preserve_order(self):
right_on=['k1', 'k2'], how='right')
tm.assert_frame_equal(joined.ix[:, expected.columns], expected)
+ def test_join_multi_dtypes(self):
+
+ # test with multi dtypes in the join index
+ def _test(dtype1,dtype2):
+ left = DataFrame({'k1': np.array([0, 1, 2] * 8, dtype=dtype1),
+ 'k2': ['foo', 'bar'] * 12,
+ 'v': np.array(np.arange(24),dtype=np.int64) })
+
+ index = MultiIndex.from_tuples([(2, 'bar'), (1, 'foo')])
+ right = DataFrame({'v2': np.array([5, 7], dtype=dtype2)}, index=index)
+
+ result = left.join(right, on=['k1', 'k2'])
+
+ expected = left.copy()
+
+ if dtype2.kind == 'i':
+ dtype2 = np.dtype('float64')
+ expected['v2'] = np.array(np.nan,dtype=dtype2)
+ expected['v2'][(expected.k1 == 2) & (expected.k2 == 'bar')] = 5
+ expected['v2'][(expected.k1 == 1) & (expected.k2 == 'foo')] = 7
+
+ tm.assert_frame_equal(result, expected)
+
+ for d1 in [np.int64,np.int32,np.int16,np.int8,np.uint8]:
+ for d2 in [np.int64,np.float64,np.float32,np.float16]:
+ _test(np.dtype(d1),np.dtype(d2))
+
def test_left_merge_na_buglet(self):
left = DataFrame({'id': list('abcde'), 'v1': randn(5),
'v2': randn(5), 'dummy': list('abcde'),
diff --git a/pandas/tseries/tests/test_resample.py b/pandas/tseries/tests/test_resample.py
index 4d81119bd4a34..29b844d330af2 100644
--- a/pandas/tseries/tests/test_resample.py
+++ b/pandas/tseries/tests/test_resample.py
@@ -550,17 +550,20 @@ def test_resample_not_monotonic(self):
assert_series_equal(result, exp)
def test_resample_median_bug_1688(self):
- df = DataFrame([1, 2], index=[datetime(2012, 1, 1, 0, 0, 0),
- datetime(2012, 1, 1, 0, 5, 0)])
-
- result = df.resample("T", how=lambda x: x.mean())
- exp = df.asfreq('T')
- tm.assert_frame_equal(result, exp)
-
- result = df.resample("T", how="median")
- exp = df.asfreq('T')
- tm.assert_frame_equal(result, exp)
+ for dtype in ['int64','int32','float64','float32']:
+ df = DataFrame([1, 2], index=[datetime(2012, 1, 1, 0, 0, 0),
+ datetime(2012, 1, 1, 0, 5, 0)],
+ dtype = dtype)
+
+ result = df.resample("T", how=lambda x: x.mean())
+ exp = df.asfreq('T')
+ tm.assert_frame_equal(result, exp)
+
+ result = df.resample("T", how="median")
+ exp = df.asfreq('T')
+ tm.assert_frame_equal(result, exp)
+
def test_how_lambda_functions(self):
ts = _simple_ts('1/1/2000', '4/1/2000')
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index aa12d6142d6d8..861a8aa9d3a95 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -567,7 +567,8 @@ def test_series_repr_nat(self):
expected = ('0 1970-01-01 00:00:00\n'
'1 1970-01-01 00:00:00.000001\n'
'2 1970-01-01 00:00:00.000002\n'
- '3 NaT')
+ '3 NaT\n'
+ 'Dtype: datetime64[ns]')
self.assertEquals(result, expected)
def test_fillna_nat(self):
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index bbbe090225b83..200ab632e094e 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -774,7 +774,7 @@ def datetime_to_datetime64(ndarray[object] values):
def array_to_datetime(ndarray[object] values, raise_=False, dayfirst=False,
- format=None, utc=None):
+ format=None, utc=None, coerce=False):
cdef:
Py_ssize_t i, n = len(values)
object val
@@ -813,14 +813,16 @@ def array_to_datetime(ndarray[object] values, raise_=False, dayfirst=False,
_check_dts_bounds(iresult[i], &dts)
elif util.is_datetime64_object(val):
iresult[i] = _get_datetime64_nanos(val)
- elif util.is_integer_object(val):
+
+ # if we are coercing, don't allow integers
+ elif util.is_integer_object(val) and not coerce:
iresult[i] = val
else:
- if len(val) == 0:
- iresult[i] = iNaT
- continue
-
try:
+ if len(val) == 0:
+ iresult[i] = iNaT
+ continue
+
_string_to_dts(val, &dts)
iresult[i] = pandas_datetimestruct_to_datetime(PANDAS_FR_ns,
&dts)
@@ -829,10 +831,19 @@ def array_to_datetime(ndarray[object] values, raise_=False, dayfirst=False,
try:
result[i] = parse(val, dayfirst=dayfirst)
except Exception:
+ if coerce:
+ iresult[i] = iNaT
+ continue
raise TypeError
pandas_datetime_to_datetimestruct(iresult[i], PANDAS_FR_ns,
&dts)
_check_dts_bounds(iresult[i], &dts)
+ except:
+ if coerce:
+ iresult[i] = iNaT
+ continue
+ raise
+
return result
except TypeError:
oresult = np.empty(n, dtype=object)
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index a9a6bab893ac1..702ae7d5c72ef 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -88,7 +88,7 @@ def isiterable(obj):
return hasattr(obj, '__iter__')
-def assert_almost_equal(a, b):
+def assert_almost_equal(a, b, check_less_precise = False):
if isinstance(a, dict) or isinstance(b, dict):
return assert_dict_equal(a, b)
@@ -103,7 +103,7 @@ def assert_almost_equal(a, b):
return True
else:
for i in xrange(len(a)):
- assert_almost_equal(a[i], b[i])
+ assert_almost_equal(a[i], b[i], check_less_precise)
return True
err_msg = lambda a, b: 'expected %.5f but got %.5f' % (a, b)
@@ -112,16 +112,29 @@ def assert_almost_equal(a, b):
np.testing.assert_(isnull(b))
return
- if isinstance(a, (bool, float, int)):
+ if isinstance(a, (bool, float, int, np.float32)):
+ decimal = 5
+
+ # deal with differing dtypes
+ if check_less_precise:
+ dtype_a = np.dtype(a)
+ dtype_b = np.dtype(b)
+ if dtype_a.kind == 'i' and dtype_b.kind == 'i':
+ pass
+ if dtype_a.kind == 'f' and dtype_b.kind == 'f':
+ if dtype_a.itemsize <= 4 and dtype_b.itemsize <= 4:
+ decimal = 3
+
if np.isinf(a):
assert np.isinf(b), err_msg(a, b)
+
# case for zero
elif abs(a) < 1e-5:
np.testing.assert_almost_equal(
- a, b, decimal=5, err_msg=err_msg(a, b), verbose=False)
+ a, b, decimal=decimal, err_msg=err_msg(a, b), verbose=False)
else:
np.testing.assert_almost_equal(
- 1, a / b, decimal=5, err_msg=err_msg(a, b), verbose=False)
+ 1, a / b, decimal=decimal, err_msg=err_msg(a, b), verbose=False)
else:
assert(a == b)
@@ -144,10 +157,11 @@ def assert_dict_equal(a, b, compare_keys=True):
def assert_series_equal(left, right, check_dtype=True,
check_index_type=False,
check_index_freq=False,
- check_series_type=False):
+ check_series_type=False,
+ check_less_precise=False):
if check_series_type:
assert(type(left) == type(right))
- assert_almost_equal(left.values, right.values)
+ assert_almost_equal(left.values, right.values, check_less_precise)
if check_dtype:
assert(left.dtype == right.dtype)
assert(left.index.equals(right.index))
@@ -160,9 +174,11 @@ def assert_series_equal(left, right, check_dtype=True,
getattr(right, 'freqstr', None))
-def assert_frame_equal(left, right, check_index_type=False,
+def assert_frame_equal(left, right, check_dtype=True,
+ check_index_type=False,
check_column_type=False,
- check_frame_type=False):
+ check_frame_type=False,
+ check_less_precise=False):
if check_frame_type:
assert(type(left) == type(right))
assert(isinstance(left, DataFrame))
@@ -175,7 +191,10 @@ def assert_frame_equal(left, right, check_index_type=False,
assert(col in right)
lcol = left.icol(i)
rcol = right.icol(i)
- assert_series_equal(lcol, rcol)
+ assert_series_equal(lcol, rcol,
+ check_dtype=check_dtype,
+ check_index_type=check_index_type,
+ check_less_precise=check_less_precise)
if check_index_type:
assert(type(left.index) == type(right.index))
@@ -187,7 +206,9 @@ def assert_frame_equal(left, right, check_index_type=False,
assert(left.columns.inferred_type == right.columns.inferred_type)
-def assert_panel_equal(left, right, check_panel_type=False):
+def assert_panel_equal(left, right,
+ check_panel_type=False,
+ check_less_precise=False):
if check_panel_type:
assert(type(left) == type(right))
@@ -197,13 +218,14 @@ def assert_panel_equal(left, right, check_panel_type=False):
for col, series in left.iterkv():
assert(col in right)
- assert_frame_equal(series, right[col])
+ assert_frame_equal(series, right[col], check_less_precise=check_less_precise)
for col in right:
assert(col in left)
-def assert_panel4d_equal(left, right):
+def assert_panel4d_equal(left, right,
+ check_less_precise=False):
assert(left.labels.equals(right.labels))
assert(left.items.equals(right.items))
assert(left.major_axis.equals(right.major_axis))
@@ -211,7 +233,7 @@ def assert_panel4d_equal(left, right):
for col, series in left.iterkv():
assert(col in right)
- assert_panel_equal(series, right[col])
+ assert_panel_equal(series, right[col], check_less_precise=check_less_precise)
for col in right:
assert(col in left)
diff --git a/vb_suite/groupby.py b/vb_suite/groupby.py
index 502752d9ec6a6..caa09c219a866 100644
--- a/vb_suite/groupby.py
+++ b/vb_suite/groupby.py
@@ -177,15 +177,24 @@ def f():
data = Series(randn(len(labels)))
data[::3] = np.nan
data[1::3] = np.nan
+data2 = Series(randn(len(labels)),dtype='float32')
+data2[::3] = np.nan
+data2[1::3] = np.nan
labels = labels.take(np.random.permutation(len(labels)))
"""
groupby_first = Benchmark('data.groupby(labels).first()', setup,
start_date=datetime(2012, 5, 1))
+groupby_first_float32 = Benchmark('data2.groupby(labels).first()', setup,
+ start_date=datetime(2013, 1, 1))
+
groupby_last = Benchmark('data.groupby(labels).last()', setup,
start_date=datetime(2012, 5, 1))
+groupby_last_float32 = Benchmark('data2.groupby(labels).last()', setup,
+ start_date=datetime(2013, 1, 1))
+
#----------------------------------------------------------------------
# groupby_indices replacement, chop up Series
diff --git a/vb_suite/reindex.py b/vb_suite/reindex.py
index 2f675636ee928..acf8f6f043bad 100644
--- a/vb_suite/reindex.py
+++ b/vb_suite/reindex.py
@@ -56,6 +56,7 @@
ts = Series(np.random.randn(len(rng)), index=rng)
ts2 = ts[::2]
ts3 = ts2.reindex(ts.index)
+ts4 = ts3.astype('float32')
def pad():
try:
@@ -81,9 +82,16 @@ def backfill():
name="reindex_fillna_pad",
start_date=datetime(2011, 3, 1))
+reindex_fillna_pad_float32 = Benchmark("ts4.fillna(method='pad')", setup,
+ name="reindex_fillna_pad_float32",
+ start_date=datetime(2013, 1, 1))
+
reindex_fillna_backfill = Benchmark("ts3.fillna(method='backfill')", setup,
name="reindex_fillna_backfill",
start_date=datetime(2011, 3, 1))
+reindex_fillna_backfill_float32 = Benchmark("ts4.fillna(method='backfill')", setup,
+ name="reindex_fillna_backfill_float32",
+ start_date=datetime(2013, 1, 1))
#----------------------------------------------------------------------
# align on level
| Support for numeric dtype propagation and coexistence in DataFrames. Prior to 0.10.2, numeric dtypes passed to DataFrames were always cast to `int64` or `float64`. Now, if a dtype is passed (either directly via the `dtype` keyword, a passed `ndarray`, or a passed `Series`), it will be preserved in DataFrame operations. Furthermore, different numeric dtypes will **NOT** be combined. The following example will give you a taste. This closes GH #622
Other changes introduced in this PR (I moved all datetime-like issues to PR #2752, which should be merged first)
ENH:
- validated get_numeric_data returns correct dtypes
- added `blocks` attribute (and as_blocks()) method that returns a
dict of dtype -> homogeneous dtyped DataFrame, analogous to `values` attribute
- added keyword 'raise_on_error' to astype, which can be set to `False` to exclude non-numeric columns
- changed `get_dtype_counts()` to use `blocks` attribute
- changed `convert_objects()` to use the internals method convert (which is block operated)
- added option `convert_numeric` to `convert_objects` to force numeric conversion (invalid values are set to `np.nan`); turned off by default
- added option `convert_dates='coerce'` to `convert_objects` to force datetimelike conversions (invalid values are set to `NaT`); turned off by default, returns datetime64[ns] dtype
- groupby operations to respect dtype inputs wherever possible, even if intermediate casting is required (obviously if the inputs are ints and NaNs result, this is cast),
all cython functions are implemented
- auto generation of most groupby functions by type is now in `generated_code.py`
e.g. (group_add,group_mean)
- added full `float32/int16/int8` support for all numeric operations, including `diff`, `backfill`, `pad`, and `take`
- added `dtype` display on Series output by default
BUG:
- fixed up tests of from_records to use record arrays directly
NOTE: using tuples will remove dtype info from the input stream (using a record array is ok though!)
- fixed merging to correctly merge on multiple dtypes with blocks (e.g. float64 and float32 in the other merger)
- (#2623 can be fixed, but is dependent on #2752)
- fixed #2778, bug in pad/backfill with 0-len frame
- fixed very obscure bug in DataFrame.from_records with dictionary and columns passed and hash randomization is on!
- integer upcasts will now happen on `where` when using inplace ops (#2793)
TST:
- tests added for merging changes, astype, convert
- fixes for test_excel on 32-bit
- fixed test_resample_median_bug_1688
- separated out test_from_records_dictlike
- added tests for (GH #797)
- added lots of tests for `where`
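Several of these tests compare lower-precision (float32) results, which is what the new `check_less_precise` flag in `pandas/util/testing.py` enables. A minimal, illustrative sketch of that logic in plain Python (the `almost_equal` name is made up here; this is a sketch, not the actual implementation):

```python
import numpy as np

def almost_equal(a, b, check_less_precise=False):
    """Scalar comparison: 5 decimals by default, relaxed to 3 decimals
    when both values are low-precision floats (itemsize <= 4 bytes,
    i.e. float32/float16)."""
    decimal = 5
    if check_less_precise:
        dtype_a, dtype_b = np.dtype(type(a)), np.dtype(type(b))
        if (dtype_a.kind == 'f' and dtype_b.kind == 'f'
                and dtype_a.itemsize <= 4 and dtype_b.itemsize <= 4):
            decimal = 3
    if abs(a) < 1e-5:
        # near zero: absolute comparison
        return round(float(abs(a - b)), decimal) == 0
    # otherwise: relative comparison against the ratio
    return round(float(abs(1 - a / b)), decimal) == 0
```

With the flag on, two float32 values that agree to roughly 3 decimals compare equal; the same pair fails at the default 5-decimal precision.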
DOC:
- added DataTypes section in Data Structures intro
- whatsnew examples
_It would be really helpful if some users could give this a test run before merging. I have put in test cases for numeric operations, combining with DataFrame and Series, but I am sure there are some corner cases that were missed_
```
In [17]: df1 = DataFrame(randn(8, 1), columns = ['A'], dtype = 'float32')
In [18]: df1
Out[18]:
A
0 -0.007220
1 -0.236432
2 2.427172
3 -0.998639
4 -1.039410
5 0.336029
6 0.832988
7 -0.413241
In [19]: df1.dtypes
Out[19]: A float32
In [20]: df2 = DataFrame(dict( A = Series(randn(8),dtype='float16'),
B = Series(randn(8)),
C = Series(randn(8),dtype='uint8') ))
In [22]: df2
Out[22]:
A B C
0 1.150391 -1.033296 0
1 0.123047 1.915564 0
2 0.151367 -0.489826 0
3 -0.565430 -0.734238 0
4 -0.352295 -0.451430 0
5 -0.618164 0.673102 255
6 1.554688 0.322035 0
7 0.160767 0.420718 0
In [23]: df2.dtypes
Out[23]:
A float16
B float64
C uint8
In [24]: # here you get some upcasting
In [25]: df3 = df1.reindex_like(df2).fillna(value=0.0) + df2
In [26]: df3
Out[26]:
A B C
0 1.143170 -1.033296 0
1 -0.113385 1.915564 0
2 2.578539 -0.489826 0
3 -1.564069 -0.734238 0
4 -1.391705 -0.451430 0
5 -0.282135 0.673102 255
6 2.387676 0.322035 0
7 -0.252475 0.420718 0
In [27]: df3.dtypes
Out[27]:
A float32
B float64
C float64
```
the example from #622
```
In [23]: a = np.array(np.random.randint(10, size=1e6),dtype='int32')
In [24]: b = np.array(np.random.randint(10, size=1e6),dtype='int64')
In [25]: df = pandas.DataFrame(dict(a = a, b = b))
In [26]: df.dtypes
Out[26]:
a int32
b int64
```
Conversion examples
```
# conversion of dtypes
In [81]: df3.astype('float32').dtypes
Out[81]:
A float32
B float32
C float32
# mixed type conversions
In [82]: df3['D'] = '1.'
In [83]: df3['E'] = '1'
In [84]: df3.convert_objects(convert_numeric=True).dtypes
Out[84]:
A float32
B float64
C float64
D float64
E int64
# same, but specific dtype conversion
In [85]: df3['D'] = df3['D'].astype('float16')
In [86]: df3['E'] = df3['E'].astype('int32')
In [87]: df3.dtypes
Out[87]:
A float32
B float64
C float64
D float16
E int32
# forcing date coercion
In [18]: s = Series([datetime(2001,1,1,0,0), 'foo', 1.0, 1,
....: Timestamp('20010104'), '20010105'],dtype='O')
....:
In [19]: s.convert_objects(convert_dates='coerce')
Out[19]:
0 2001-01-01 00:00:00
1 NaT
2 NaT
3 NaT
4 2001-01-04 00:00:00
5 2001-01-05 00:00:00
Dtype: datetime64[ns]
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/2708 | 2013-01-19T21:27:44Z | 2013-02-10T16:19:30Z | 2013-02-10T16:19:30Z | 2014-06-17T21:41:04Z |
Added metadataframe.py which has MetaDataframe class. This class stores... | diff --git a/pandas/util/deletejunkme b/pandas/util/deletejunkme
new file mode 100644
index 0000000000000..5773bb9e06b9b
--- /dev/null
+++ b/pandas/util/deletejunkme
@@ -0,0 +1,4 @@
+,c11,c22,c33
+A,0.18529367226154594,0.6693404911820483,0.030617744747423785
+B,0.34920481481834037,1.296923884492839,0.43464074746062209
+C,0.42095744808252256,0.76952459373832505,0.097848710765341504
diff --git a/pandas/util/metadframe.py b/pandas/util/metadframe.py
new file mode 100644
index 0000000000000..a5f3bdef3cb39
--- /dev/null
+++ b/pandas/util/metadframe.py
@@ -0,0 +1,321 @@
+''' Provides composition class, MetaDataFrame, which is an ordinary python object that stores a DataFrame and
+attempts to promote attributes and methods to the instance level (eg self.x instead of self.df.x). This object
+can be subclassed and ensures persistence of custom attributes. The goal of MetaDataFrame is to provide a
+subclassing API beyond monkey patching (which currently fails to persist attributes upon most method returns
+and upon deserialization.'''
+
+from types import MethodType
+import copy
+import functools
+import cPickle
+import collections
+
+from pandas.core.indexing import _NDFrameIndexer
+
+from pandas import DataFrame, Series
+
+## for testing
+from numpy.random import randn
+
+#----------------------------------------------------------------------
+# Store attributes/methods of dataframe for later inspection with __setattr__
+# Note: This is preferred to a storing individual instances of self._df with custom
+# attr as if user tried self.a and self._df.a existed, it would call this...
+_dfattrs=[x for x in dir(DataFrame) if '__' not in x]
+
+#----------------------------------------------------------------------
+# Loading (perhaps change name?) ... Doesn't work correctly as instance methods
+
+def mload(inname):
+ ''' Load MetaDataFrame from file'''
+ if isinstance(inname, basestring):
+ inname=open(inname, 'r')
+ return cPickle.load(inname)
+
+def mloads(string):
+ ''' Load a MetaDataFrame from string stored in memory.'''
+ return cPickle.loads(string)
+
+
+class MetaDataFrame(object):
+ ''' Base composition for subclassing dataframe.'''
+
+ def __init__(self, *dfargs, **dfkwargs):
+ ''' Stores a dataframe under reserved attribute name, self._df'''
+ self._df=DataFrame(*dfargs, **dfkwargs)
+
+ ### Save methods
+ def save(self, outname):
+ ''' Takes in str or opened file and saves. cPickle.dump wrapper.'''
+ if isinstance(outname, basestring):
+ outname=open(outname, 'w')
+ cPickle.dump(self, outname)
+
+ def dumps(self):
+ ''' Output MetaDataFrame into a pickled string in memory.'''
+ return cPickle.dumps(self)
+
+ def deepcopy(self):
+ ''' Make a deepcopy of self, including the dataframe.'''
+ return copy.deepcopy(self)
+
+ def as_dataframe(self):
+ ''' Convenience method to return a raw dataframe, self._df'''
+ return self._df
+
+ #----------------------------------------------------------------------
+ # Overwrite Dataframe methods and operators
+
+ def __getitem__(self, key):
+ ''' Item lookup. If output is an iterable, _transfer is called.
+ Sometimes __getitem__ returns a float (indexing a series) at which
+ point we just want to return that.'''
+
+ dfout=self._df.__getitem__(key)
+
+ try:
+ iter(dfout) #Test if iterable without forcing user to have collections package.
+ except TypeError:
+ return dfout
+ else:
+ return self._transfer(dfout)
+
+ def __setitem__(self, key, value):
+ self._df.__setitem__(key, value)
+
+ ### These tell python to ignore __getattr__ when pickling; hence, treat this like a normal class
+ def __getstate__(self): return self.__dict__
+ def __setstate__(self, d): self.__dict__.update(d)
+
+ def __getattr__(self, attr, *fcnargs, **fcnkwargs):
+ ''' Tells python how to handle all attributes that are not found. Basic attributes
+ are directly referenced to self._df; however, instance methods (like df.corr() ) are
+ handled specially using a special private parsing method, _dfgetattr().'''
+
+ ### Return basic attribute
+
+ try:
+ refout=getattr(self._df, attr)
+ except AttributeError:
+ raise AttributeError('Could not find attribute "%s" in %s or its underlying DataFrame'%(attr, self.__class__.__name__))
+
+ if not isinstance(refout, MethodType):
+ return refout
+
+ ### Handle instance methods using _dfgetattr().
+ ### see http://stackoverflow.com/questions/3434938/python-allowing-methods-not-specifically-defined-to-be-called-ala-getattr
+ else:
+ return functools.partial(self._dfgetattr, attr, *fcnargs, **fcnkwargs)
+ ### This is a reference to the function (aka a wrapper) not the function itself
+
+ def __setattr__(self, name, value):
+ ''' When user sets an attribute, this tries to intercept any name conflicts. For example, if user attempts to set
+ self.columns=50, this will actually try self._df.columns=50, which throws an error. The behavior is acheived by
+ using dir() on the data frame created upon initialization, filtering __x__ type methods. Not guaranteed to work 100%
+ of the time due to implicit possible issues with dir() and inspection in Python. Best practice is for users to avoid name
+ conflicts when possible.'''
+
+ super(MetaDataFrame, self).__setattr__(name, value)
+ if name in _dfattrs:
+ setattr(self._df, name, value)
+ else:
+ self.__dict__[name]=value
+
+
+ def _transfer(self, dfnew):
+ ''' Copies all attributes into a new object, except it has to store the current dataframe
+ in memory as this can't be copied correctly using copy.deepcopy. Probably a quicker way...
+
+ dfnew is used if one wants to pass a new dataframe in. This is used primarily in calls from __getattr__.'''
+ ### Store old value of df and remove current df to copy operation will take
+ olddf=self._df.copy() #Removed deep=True because series return could not implement it
+ self._df=None
+
+ ### Create new object and apply new df
+ newobj=copy.deepcopy(self) #This looks like None, but it is of type MetaDataFrame; __repr__ just prints None
+ newobj._df=dfnew
+
+ ### Restore old value of df and return new object
+ self._df=olddf
+ return newobj
+
+
+ def _dfgetattr(self, attr, *fcnargs, **fcnkwargs):
+ ''' Called by __getattr__ as a wrapper, this private method is used to ensure that any
+ DataFrame method that returns a new DataFrame will actually return a MetaDataFrame object
+ instead. It does so by typechecking the return of attr().
+
+ **kwargs: use_base - If true, program attempts to call attribute on the baseline. Baseline ought
+ to be maintained as a series, and Series/Dataframe API's must be same.
+
+ *fcnargs and **fcnkwargs are passed to the dataframe method.
+
+ Note: tried to add an as_new keyword to do this operation in place, but doing self=dfout instead of return dfout
+ didn't work. Could try to add this at the __getattr__ level; however, may not be worth it.'''
+
+ out=getattr(self._df, attr)(*fcnargs, **fcnkwargs)
+
+ ### If operation returns a dataframe, return a new MetaDataFrame
+ if isinstance(out, DataFrame):
+ dfout=self._transfer(out)
+ return dfout
+
+ ### Otherwise return whatever the method return would be
+ else:
+ return out
+
+ def __repr__(self):
+ return self._df.__repr__()
+
+ ### Operator overloading ####
+ ### In place operations need to overwrite self._df
+ def __add__(self, x):
+ return self._transfer(self._df.__add__(x))
+
+ def __sub__(self, x):
+ return self._transfer(self._df.__sub__(x))
+
+ def __mul__(self, x):
+ return self._transfer(self._df.__mul__(x))
+
+ def __div__(self, x):
+ return self._transfer(self._df.__div__(x))
+
+ def __truediv__(self, x):
+ return self._transfer(self._df.__truediv__(x))
+
+ ### From what I can tell, __pos__(), __abs__() builtin to df, just __neg__()
+ def __neg__(self):
+ return self._transfer(self._df.__neg__() )
+
+ ### Object comparison operators
+ def __lt__(self, x):
+ return self._transfer(self._df.__lt__(x))
+
+ def __le__(self, x):
+ return self._transfer(self._df.__le__(x))
+
+ def __eq__(self, x):
+ return self._transfer(self._df.__eq__(x))
+
+ def __ne__(self, x):
+ return self._transfer(self._df.__ne__(x))
+
+ def __ge__(self, x):
+ return self._transfer(self._df.__ge__(x))
+
+ def __gt__(self, x):
+ return self._transfer(self._df.__gt__(x))
+
+ def __len__(self):
+ return self._df.__len__()
+
+ def __nonzero__(self):
+ return self._df.__nonzero__()
+
+ def __contains__(self, x):
+ return self._df.__contains__(x)
+
+ def __iter__(self):
+ return self._df.__iter__()
+
+
+ ## Fancy indexing
+ _ix=None
+
+ @property
+ def ix(self, *args, **kwargs):
+ ''' Pandas Indexing. Note, this has been modified to ensure that series returns (eg ix[3])
+ still maintain attributes. To remove this behavior, replace the following:
+
+ self._ix = _MetaIndexer(self) --> self._ix=_NDFrameIndexer(self)
+
+ The replacement works because slicing preserves attributes, since _NDFrameIndexer is a plain python object
+ subclass.'''
+ if self._ix is None:
+ self._ix=_MetaIndexer(self)
+ return self._ix
+
+class _MetaIndexer(_NDFrameIndexer):
+ ''' Intercepts the slicing of ix so Series returns can be handled properly. In addition,
+ it makes sure that the new index is assigned properly.
+
+ Notes:
+ -----
+ Under the hood pandas calls _NDFrameIndexer methods, so this merely overrides the
+ __getitem__() method and leaves all the rest intact'''
+
+ def __getitem__(self, key):
+ out=super(_MetaIndexer, self).__getitem__(key)
+
+ ### Series returns transformed to MetaDataFrame
+ if isinstance(out, Series):
+ df=DataFrame(out)
+ return self.obj._transfer(df)
+
+ ### Make sure the new object's index property is synced to its ._df index.
+ else:
+ return out
+
+
+
+class SubFoo(MetaDataFrame):
+ ''' Shows an example of how to subclass MetaDataFrame with custom attributes, a and b.'''
+
+ def __init__(self, a, b, *dfargs, **dfkwargs):
+ self.a = a
+ self.b = b
+
+ super(SubFoo, self).__init__(*dfargs, **dfkwargs)
+
+ def __repr__(self):
+ return "Hi I'm SubFoo. I'm not really a DataFrame, but I quack like one."
+
+ @property
+ def data(self):
+ ''' Return underlying dataframe attribute self._df'''
+ return self._df
+
+
+#### TESTING ###
+if __name__ == '__main__':
+
+ ### Create a MetaDataFrame
+ meta_df=MetaDataFrame(abs(randn(3,3)), index=['A','B','C'], columns=['c11','c22', 'c33'])
+
+ meta_df.to_csv('deletejunkme')
+
+ ### Add some new attributes
+ meta_df.a=50
+ meta_df.b='Pamela'
+ print 'See the original metadataframe\n'
+ print meta_df
+ print '\nI can operate on it (+ - / *) and call dataframe methods like rank()'
+
+ meta_df.ix[0]
+
+ ### Perform some intrinsic DF operations
+ new=meta_df*50.0
+ new=new.rank()
+ print '\nSee modified dataframe:\n'
+ print new
+
+ ### Verify attribute persistence
+ print '\nAttributes a = %s and b = %s will persist when new metadataframes are returned.'%(new.a, new.b)
+
+ ### Demonstrate subclassing by invoking SubFoo class
+ print '\nI can subclass a dataframe and overwrite its __repr__() or more carefully __bytes__()/__unicode__() method(s)\n'
+ subclass=SubFoo(50, 200, abs(randn(3,3)), index=['A','B','C'], columns=['c11','c22', 'c33'])
+ print subclass
+ ### Access underlying dataframe
+ print '\nMy underlying dataframe is stored in the "data" attribute.\n'
+ print subclass.data
+
+ ### Pickle
+ print '\nSave me by using x.save() / x.dumps() and load using mload(x) / mloads(x).'
+# df.save('outpath')
+# f=open('outpath', 'r')
+# df2=load(f)
+
+
+
... a dataframe and attempts to overload all operators and promote DataFrame methods to object methods, in hopes that users can subclass this effectively as if it were a DataFrame itself.
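The composition trick the class relies on can be shown without pandas at all: forward attribute lookups with `__getattr__`, and re-wrap any return value of the wrapped type so custom attributes survive. A minimal, illustrative sketch (the `Wrapper` name and metadata transfer are inventions for this example, not the PR's API):

```python
class Wrapper(object):
    """Composition-based proxy: stores a wrapped object, forwards
    attribute lookups to it, and re-wraps any return value of the
    wrapped type so that custom metadata attributes survive."""

    def __init__(self, wrapped):
        self._wrapped = wrapped

    def __getattr__(self, name):
        # only called when normal lookup fails, so custom attributes
        # set on the Wrapper instance itself are found first
        attr = getattr(self._wrapped, name)
        if not callable(attr):
            return attr

        def method(*args, **kwargs):
            out = attr(*args, **kwargs)
            if isinstance(out, type(self._wrapped)):
                new = Wrapper(out)
                # transfer metadata (everything except the payload)
                new.__dict__.update(
                    {k: v for k, v in self.__dict__.items() if k != '_wrapped'})
                return new
            return out
        return method
```

For example, `w = Wrapper([3, 1, 2]); w.label = 'demo'` followed by `w.copy()` yields another `Wrapper` that still carries `label`, which is the behavior `MetaDataFrame._transfer` implements for DataFrame-returning methods.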
| https://api.github.com/repos/pandas-dev/pandas/pulls/2695 | 2013-01-14T22:13:51Z | 2013-03-14T02:29:17Z | null | 2014-06-12T07:28:04Z |
DOC: correct default merge suffixes: _x,_y | diff --git a/doc/source/merging.rst b/doc/source/merging.rst
index 464679c0e5b01..b80947695bebd 100644
--- a/doc/source/merging.rst
+++ b/doc/source/merging.rst
@@ -311,7 +311,7 @@ standard database join operations between DataFrame objects:
merge(left, right, how='left', on=None, left_on=None, right_on=None,
left_index=False, right_index=False, sort=True,
- suffixes=('.x', '.y'), copy=True)
+ suffixes=('_x', '_y'), copy=True)
Here's a description of what each argument is for:
@@ -336,7 +336,7 @@ Here's a description of what each argument is for:
order. Defaults to ``True``, setting to ``False`` will improve performance
substantially in many cases
- ``suffixes``: A tuple of string suffixes to apply to overlapping
- columns. Defaults to ``('.x', '.y')``.
+ columns. Defaults to ``('_x', '_y')``.
- ``copy``: Always copy data (default ``True``) from the passed DataFrame
objects, even when reindexing is not necessary. Cannot be avoided in many
cases but may improve performance / memory usage. The cases where copying
| Fix small typo in merge documentation: change `suffixes=('.x', '.y')` to `suffixes=('_x', '_y')` to match current implementation.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2688 | 2013-01-13T23:40:38Z | 2013-01-19T23:25:05Z | null | 2014-07-07T14:09:13Z |
BUG: series assignment with a boolean indexer | diff --git a/RELEASE.rst b/RELEASE.rst
index 59a86221d14a9..3db8f5702e7bf 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -55,6 +55,7 @@ pandas 0.10.1
- Add ``logx`` option to DataFrame/Series.plot (GH2327_, #2565)
- Support reading gzipped data from file-like object
- ``pivot_table`` aggfunc can be anything used in GroupBy.aggregate (GH2643_)
+ - Add methods ``neg`` and ``inv`` to Series
**Bug fixes**
@@ -78,6 +79,7 @@ pandas 0.10.1
- Exclude non-numeric data from DataFrame.quantile by default (GH2625_)
- Fix a Cython C int64 boxing issue causing read_csv to return incorrect
results (GH2599_)
+ - Fix setitem on a Series with a boolean key and a non-scalar as value (GH2686_)
**API Changes**
@@ -98,6 +100,7 @@ pandas 0.10.1
.. _GH2625: https://github.com/pydata/pandas/issues/2625
.. _GH2643: https://github.com/pydata/pandas/issues/2643
.. _GH2637: https://github.com/pydata/pandas/issues/2637
+.. _GH2686: https://github.com/pydata/pandas/issues/2686
pandas 0.10.0
=============
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 56b6e06844b86..b2703b96e611b 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -604,6 +604,9 @@ def where(self, cond, other=nan, inplace=False):
if len(cond) != len(self):
raise ValueError('condition must have same length as series')
+ if cond.dtype != np.bool_:
+ cond = cond.astype(np.bool_)
+
ser = self if inplace else self.copy()
if not isinstance(other, (list, tuple, np.ndarray)):
ser._set_with(~cond, other)
@@ -661,9 +664,9 @@ def __setitem__(self, key, value):
if _is_bool_indexer(key):
key = self._check_bool_indexer(key)
- key = np.asarray(key, dtype=bool)
-
- self._set_with(key, value)
+ self.where(~key,value,inplace=True)
+ else:
+ self._set_with(key, value)
def _set_with(self, key, value):
# other: fancy integer or otherwise
@@ -695,7 +698,7 @@ def _set_with(self, key, value):
else:
return self._set_values(key, value)
elif key_type == 'boolean':
- self._set_values(key, value)
+ self._set_values(key, value)
else:
self._set_labels(key, value)
@@ -740,6 +743,12 @@ def _check_bool_indexer(self, key):
raise ValueError('cannot index with vector containing '
'NA / NaN values')
+ # coerce to bool type
+ if not hasattr(result, 'shape'):
+ result = np.array(result)
+ if result.dtype != np.bool_:
+ result = result.astype(np.bool_)
+
return result
def __setslice__(self, i, j, value):
@@ -1097,6 +1106,15 @@ def iteritems(self):
__le__ = _comp_method(operator.le, '__le__')
__eq__ = _comp_method(operator.eq, '__eq__')
__ne__ = _comp_method(operator.ne, '__ne__')
+
+ # inversion
+ def __neg__(self):
+ arr = operator.neg(self.values)
+ return Series(arr, self.index, name=self.name)
+
+ def __invert__(self):
+ arr = operator.inv(self.values)
+ return Series(arr, self.index, name=self.name)
# binary logic
__or__ = _bool_method(operator.or_, '__or__')
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 335d747e960fb..c21c7588b643e 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -1044,6 +1044,22 @@ def test_ix_setitem(self):
self.assertEquals(self.series[d1], 4)
self.assertEquals(self.series[d2], 6)
+ def test_setitem_boolean(self):
+ mask = self.series > self.series.median()
+
+ # similar indexed series
+ result = self.series.copy()
+ result[mask] = self.series*2
+ expected = self.series*2
+ assert_series_equal(result[mask], expected[mask])
+
+ # needs alignment
+ result = self.series.copy()
+ result[mask] = (self.series*2)[0:5]
+ expected = (self.series*2)[0:5].reindex_like(self.series)
+ expected[-mask] = self.series[mask]
+ assert_series_equal(result[mask], expected[mask])
+
def test_ix_setitem_boolean(self):
mask = self.series > self.series.median()
@@ -1517,6 +1533,12 @@ def check(series, other):
check(self.ts, self.ts[::2])
check(self.ts, 5)
+ def test_neg(self):
+ assert_series_equal(-self.series, -1 * self.series)
+
+ def test_invert(self):
+ assert_series_equal(-(self.series < 0), ~(self.series < 0))
+
def test_operators(self):
def _check_op(series, other, op, pos_only=False):
@@ -3211,9 +3233,9 @@ def test_interpolate_index_values(self):
expected = s.copy()
bad = isnull(expected.values)
good = -bad
- expected[bad] = np.interp(vals[bad], vals[good], s.values[good])
+ expected = Series(np.interp(vals[bad], vals[good], s.values[good]), index=s.index[bad])
- assert_series_equal(result, expected)
+ assert_series_equal(result[bad], expected)
def test_weekday(self):
# Just run the function
| - fixed assignment with a boolean indexer and a non-scalar value (see below)
- 2nd commit fixed the alignment issue; that is, if the value is not a full series, align it first (using the `where` machinery)
- added `__neg__` and `__invert__` methods
- `_check_bool_indexer` now coerces to bool if needed (otherwise there were odd effects when negating/inverting boolean series)
```
In [1]: import numpy as np
In [2]: import pandas as pd
In [3]: x = pd.Series(index=np.arange(10))
In [4]: y = pd.Series(np.random.rand(10),index=np.arange(10))
In [5]: y
Out[5]:
0 0.197830
1 0.134425
2 0.766618
3 0.057765
4 0.790927
5 0.153636
6 0.585305
7 0.906787
8 0.647155
9 0.494154
In [6]: x[y>0.5] = y
In [7]: x
Out[7]:
0 NaN
1 NaN
2 0.197830
3 NaN
4 0.134425
5 NaN
6 0.766618
7 0.057765
8 0.790927
9 NaN
```
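A minimal sketch of the new unary methods (illustrative only, not the PR's own test code; assumes a pandas with these methods present):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, -2.0, 3.0, -4.0])

# __neg__: elementwise negation, preserving the index
assert (-s == -1 * s).all()

# __invert__ on a boolean Series keeps the bool dtype; the coercion
# added to _check_bool_indexer guards against non-bool masks
mask = s > 0
assert (~mask).dtype == np.bool_
assert list(~mask) == [False, True, False, True]
```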
| https://api.github.com/repos/pandas-dev/pandas/pulls/2686 | 2013-01-11T16:44:55Z | 2013-01-20T03:28:34Z | null | 2014-06-19T13:19:40Z |
TST: fix test to rely on display.encoding rather then utf8, GH2666 | diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 4f77da10ac257..cb92f85cad0aa 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -137,7 +137,7 @@ def test_to_string_repr_unicode(self):
line_len = len(rs[0])
for line in rs[1:]:
try:
- line = line.decode('utf-8')
+ line = line.decode(get_option("display.encoding"))
except:
pass
self.assert_(len(line) == line_len)
| closes #2666
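A quick sketch of what the fixed test now does: decode with the configured option instead of a hard-coded 'utf-8' (assumes the `display.encoding` option is available, as in this pandas version):

```python
import pandas as pd

# the option defaults to the detected console/terminal encoding
enc = pd.get_option("display.encoding")
assert isinstance(enc, str)

# round-trip a line through that encoding, as the test's decode does
line = "pandas".encode(enc).decode(enc)
assert line == "pandas"
```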
| https://api.github.com/repos/pandas-dev/pandas/pulls/2677 | 2013-01-10T14:48:53Z | 2013-01-10T23:24:19Z | 2013-01-10T23:24:19Z | 2014-06-30T00:12:03Z |
BUG: HDFStore fixes | diff --git a/RELEASE.rst b/RELEASE.rst
index 59a86221d14a9..245e72d6bca6e 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -52,6 +52,7 @@ pandas 0.10.1
- added method ``unique`` to select the unique values in an indexable or data column
- added method ``copy`` to copy an existing store (and possibly upgrade)
- show the shape of the data on disk for non-table stores when printing the store
+ - added ability to read PyTables flavor tables (allows compatiblity to other HDF5 systems)
- Add ``logx`` option to DataFrame/Series.plot (GH2327_, #2565)
- Support reading gzipped data from file-like object
- ``pivot_table`` aggfunc can be anything used in GroupBy.aggregate (GH2643_)
@@ -66,6 +67,8 @@ pandas 0.10.1
- handle correctly ``Term`` passed types (e.g. ``index<1000``, when index
is ``Int64``), (closes GH512_)
- handle Timestamp correctly in data_columns (closes GH2637_)
+ - contains correctly matches on non-natural names
+ - correctly store ``float32`` dtypes in tables (if not other float types in the same table)
- Fix DataFrame.info bug with UTF8-encoded columns. (GH2576_)
- Fix DatetimeIndex handling of FixedOffset tz (GH2604_)
- More robust detection of being in IPython session for wide DataFrame
@@ -86,6 +89,7 @@ pandas 0.10.1
- refactored HFDStore to deal with non-table stores as objects, will allow future enhancements
- removed keyword ``compression`` from ``put`` (replaced by keyword
``complib`` to be consistent across library)
+ - warn `PerformanceWarning` if you are attempting to store types that will be pickled by PyTables
.. _GH512: https://github.com/pydata/pandas/issues/512
.. _GH1277: https://github.com/pydata/pandas/issues/1277
@@ -98,6 +102,7 @@ pandas 0.10.1
.. _GH2625: https://github.com/pydata/pandas/issues/2625
.. _GH2643: https://github.com/pydata/pandas/issues/2643
.. _GH2637: https://github.com/pydata/pandas/issues/2637
+.. _GH2694: https://github.com/pydata/pandas/issues/2694
pandas 0.10.0
=============
diff --git a/doc/source/io.rst b/doc/source/io.rst
index 1b61de7bf8281..6b7ec3dfdd841 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1211,7 +1211,7 @@ You can create/modify an index for a table with ``create_table_index`` after dat
Query via Data Columns
~~~~~~~~~~~~~~~~~~~~~~
-You can designate (and index) certain columns that you want to be able to perform queries (other than the `indexable` columns, which you can always query). For instance say you want to perform this common operation, on-disk, and return just the frame that matches this query.
+You can designate (and index) certain columns that you want to be able to perform queries (other than the `indexable` columns, which you can always query). For instance say you want to perform this common operation, on-disk, and return just the frame that matches this query. You can specify ``data_columns = True`` to force all columns to be data_columns
.. ipython:: python
@@ -1260,7 +1260,7 @@ To retrieve the *unique* values of an indexable or data column, use the method `
concat([ store.select('df_dc',c) for c in [ crit1, crit2 ] ])
-**Table Object**
+**Storer Object**
If you want to inspect the stored object, retrieve via ``get_storer``. You could use this progamatically to say get the number of rows in an object.
@@ -1363,17 +1363,40 @@ Notes & Caveats
# we have provided a minimum minor_axis indexable size
store.root.wp_big_strings.table
-Compatibility
-~~~~~~~~~~~~~
+External Compatibility
+~~~~~~~~~~~~~~~~~~~~~~
+
+``HDFStore`` write storer objects in specific formats suitable for producing loss-less roundtrips to pandas objects. For external compatibility, ``HDFStore`` can read native ``PyTables`` format tables. It is possible to write an ``HDFStore`` object that can easily be imported into ``R`` using the ``rhdf5`` library. Create a table format store like this:
+
+ .. ipython:: python
+
+ store_export = HDFStore('export.h5')
+ store_export.append('df_dc',df_dc,data_columns=df_dc.columns)
+ store_export
+
+ .. ipython:: python
+ :suppress:
+
+ store_export.close()
+ import os
+ os.remove('export.h5')
+
+Backwards Compatibility
+~~~~~~~~~~~~~~~~~~~~~~~
0.10.1 of ``HDFStore`` is backwards compatible for reading tables created in a prior version of pandas however, query terms using the prior (undocumented) methodology are unsupported. ``HDFStore`` will issue a warning if you try to use a prior-version format file. You must read in the entire file and write it out using the new format, using the method ``copy`` to take advantage of the updates. The group attribute ``pandas_version`` contains the version information. ``copy`` takes a number of options, please see the docstring.
+ .. ipython:: python
+ :suppress:
+
+ import os
+ legacy_file_path = os.path.abspath('source/_static/legacy_0.10.h5')
+
.. ipython:: python
# a legacy store
- import os
- legacy_store = HDFStore('legacy_0.10.h5', 'r')
+ legacy_store = HDFStore(legacy_file_path,'r')
legacy_store
# copy (and return the new handle)
@@ -1397,6 +1420,7 @@ Performance
- You can pass ``chunksize=an integer`` to ``append``, to change the writing chunksize (default is 50000). This will signficantly lower your memory usage on writing.
- You can pass ``expectedrows=an integer`` to the first ``append``, to set the TOTAL number of expectedrows that ``PyTables`` will expected. This will optimize read/write performance.
- Duplicate rows can be written to tables, but are filtered out in selection (with the last items being selected; thus a table is unique on major, minor pairs)
+ - A ``PerformanceWarning`` will be raised if you are attempting to store types that will be pickled by PyTables (rather than stored as endemic types). See <http://stackoverflow.com/questions/14355151/how-to-make-pandas-hdfstore-put-operation-faster/14370190#14370190> for more information and some solutions.
Experimental
~~~~~~~~~~~~
diff --git a/doc/source/v0.10.1.txt b/doc/source/v0.10.1.txt
index 2eb40b2823214..8aa2dad2b35a0 100644
--- a/doc/source/v0.10.1.txt
+++ b/doc/source/v0.10.1.txt
@@ -119,12 +119,15 @@ Multi-table creation via ``append_to_multiple`` and selection via ``select_as_mu
**Enhancements**
+- ``HDFStore`` now can read native PyTables table format tables
- You can pass ``nan_rep = 'my_nan_rep'`` to append, to change the default nan representation on disk (which converts to/from `np.nan`), this defaults to `nan`.
- You can pass ``index`` to ``append``. This defaults to ``True``. This will automagically create indicies on the *indexables* and *data columns* of the table
- You can pass ``chunksize=an integer`` to ``append``, to change the writing chunksize (default is 50000). This will signficantly lower your memory usage on writing.
- You can pass ``expectedrows=an integer`` to the first ``append``, to set the TOTAL number of expectedrows that ``PyTables`` will expected. This will optimize read/write performance.
- ``Select`` now supports passing ``start`` and ``stop`` to provide selection space limiting in selection.
+**Bug Fixes**
+- ``HDFStore`` tables can now store ``float32`` types correctly (cannot be mixed with ``float64`` however)
See the `full release notes
<https://github.com/pydata/pandas/blob/master/RELEASE.rst>`__ or issue tracker
diff --git a/pandas/core/reshape.py b/pandas/core/reshape.py
index 571bcf5008178..40c4dc6e5efe7 100644
--- a/pandas/core/reshape.py
+++ b/pandas/core/reshape.py
@@ -835,4 +835,4 @@ def block2d_to_blocknd(values, items, shape, labels, ref_items=None):
def factor_indexer(shape, labels):
""" given a tuple of shape and a list of Factor lables, return the expanded label indexer """
mult = np.array(shape)[::-1].cumprod()[::-1]
- return np.sum(np.array(labels).T * np.append(mult, [1]), axis=1).T
+ return com._ensure_platform_int(np.sum(np.array(labels).T * np.append(mult, [1]), axis=1).T)
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index b7cdf1706b5e9..78bd204f26993 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -24,6 +24,7 @@
from pandas.core.common import _asarray_tuplesafe, _try_sort
from pandas.core.internals import BlockManager, make_block, form_blocks
from pandas.core.reshape import block2d_to_block3d, block2d_to_blocknd, factor_indexer
+from pandas.core.index import Int64Index
import pandas.core.common as com
from pandas.tools.merge import concat
@@ -41,6 +42,11 @@ class IncompatibilityWarning(Warning): pass
where criteria is being ignored as this version [%s] is too old (or not-defined),
read the file in and write it out to a new file to upgrade (with the copy_to method)
"""
+class PerformanceWarning(Warning): pass
+performance_doc = """
+your performance may suffer as PyTables swill pickle object types that it cannot map
+directly to c-types [inferred_type->%s,key->%s]
+"""
# map object types
_TYPE_MAP = {
@@ -71,6 +77,7 @@ class IncompatibilityWarning(Warning): pass
# table class map
_TABLE_MAP = {
+ 'generic_table' : 'GenericTable',
'appendable_frame' : 'AppendableFrameTable',
'appendable_multiframe' : 'AppendableMultiFrameTable',
'appendable_panel' : 'AppendablePanelTable',
@@ -220,7 +227,7 @@ def __contains__(self, key):
node = self.get_node(key)
if node is not None:
name = node._v_pathname
- return re.search(key, name) is not None
+ if name == key or name[1:] == key: return True
return False
def __len__(self):
@@ -508,7 +515,7 @@ def append(self, key, value, columns=None, **kwargs):
Optional Parameters
-------------------
- data_columns : list of columns to create as data columns
+ data_columns : list of columns to create as data columns, or True to use all columns
min_itemsize : dict of columns that specify minimum string sizes
nan_rep : string to use as string nan represenation
chunksize : size to chunk the writing
@@ -609,7 +616,8 @@ def create_table_index(self, key, **kwargs):
def groups(self):
""" return a list of all the top-level nodes (that are not themselves a pandas storage object) """
- return [ g for g in self.handle.walkGroups() if getattr(g._v_attrs,'pandas_type',None) ]
+ _tables()
+ return [ g for g in self.handle.walkNodes() if getattr(g._v_attrs,'pandas_type',None) or getattr(g,'table',None) or (isinstance(g,_table_mod.table.Table) and g._v_name != 'table') ]
def get_node(self, key):
""" return the node with the key or None if it does not exist """
@@ -684,16 +692,23 @@ def error(t):
# infer the pt from the passed value
if pt is None:
if value is None:
- raise Exception("cannot create a storer if the object is not existing nor a value are passed")
- try:
- pt = _TYPE_MAP[type(value)]
- except:
- error('_TYPE_MAP')
+ _tables()
+ if getattr(group,'table',None) or isinstance(group,_table_mod.table.Table):
+ pt = 'frame_table'
+ tt = 'generic_table'
+ else:
+ raise Exception("cannot create a storer if the object is not existing nor a value are passed")
+ else:
+
+ try:
+ pt = _TYPE_MAP[type(value)]
+ except:
+ error('_TYPE_MAP')
- # we are actually a table
- if table or append:
- pt += '_table'
+ # we are actually a table
+ if table or append:
+ pt += '_table'
# a storer node
if 'table' not in pt:
@@ -959,6 +974,24 @@ def set_attr(self):
""" set the kind for this colummn """
setattr(self.attrs, self.kind_attr, self.kind)
+class GenericIndexCol(IndexCol):
+ """ an index which is not represented in the data of the table """
+
+ @property
+ def is_indexed(self):
+ return False
+
+ def convert(self, values, nan_rep):
+ """ set the values from this selection: take = take ownership """
+
+ self.values = Int64Index(np.arange(self.table.nrows))
+ return self
+
+ def get_attr(self):
+ pass
+
+ def set_attr(self):
+ pass
class DataCol(IndexCol):
""" a data holding column, by definition this is not indexable
@@ -1096,7 +1129,7 @@ def get_atom_data(self, block):
def set_atom_data(self, block):
self.kind = block.dtype.name
self.typ = self.get_atom_data(block)
- self.set_data(block.values.astype(self.typ._deftype))
+ self.set_data(block.values.astype(self.typ.type))
def get_atom_datetime64(self, block):
return _tables().Int64Col(shape=block.shape[0])
@@ -1194,6 +1227,12 @@ def get_atom_data(self, block):
def get_atom_datetime64(self, block):
return _tables().Int64Col()
+class GenericDataIndexableCol(DataIndexableCol):
+ """ represent a generic pytables data column """
+
+ def get_attr(self):
+ pass
+
class Storer(object):
""" represent an object in my store
facilitate read/write of various types of objects
@@ -1238,6 +1277,8 @@ def __repr__(self):
self.infer_axes()
s = self.shape
if s is not None:
+ if isinstance(s, (list,tuple)):
+ s = "[%s]" % ','.join([ str(x) for x in s ])
return "%-12.12s (shape->%s)" % (self.pandas_type,s)
return self.pandas_type
@@ -1570,6 +1611,17 @@ def write_array(self, key, value):
return
if value.dtype.type == np.object_:
+
+ # infer the type, warn if we have a non-string type here (for performance)
+ inferred_type = lib.infer_dtype(value.flatten())
+ if empty_array:
+ pass
+ elif inferred_type == 'string':
+ pass
+ else:
+ ws = performance_doc % (inferred_type,key)
+ warnings.warn(ws, PerformanceWarning)
+
vlarr = self.handle.createVLArray(self.group, key,
_tables().ObjectAtom())
vlarr.append(value)
@@ -1618,7 +1670,7 @@ class SeriesStorer(GenericStorer):
@property
def shape(self):
try:
- return "[%s]" % len(getattr(self.group,'values',None))
+ return len(getattr(self.group,'values')),
except:
return None
@@ -1748,7 +1800,7 @@ def shape(self):
if self.is_shape_reversed:
shape = shape[::-1]
- return "[%s]" % ','.join([ str(x) for x in shape ])
+ return shape
except:
return None
@@ -1810,7 +1862,7 @@ class Table(Storer):
index_axes : a list of tuples of the (original indexing axis and index column)
non_index_axes: a list of tuples of the (original index axis and columns on a non-indexing axis)
values_axes : a list of the columns which comprise the data of this table
- data_columns : a list of the columns that we are allowing indexing (these become single columns in values_axes)
+ data_columns : a list of the columns that we are allowing indexing (these become single columns in values_axes), or True to force all columns
nan_rep : the string to use for nan representations for string objects
levels : the names of levels
@@ -1908,7 +1960,7 @@ def is_transposed(self):
@property
def data_orientation(self):
""" return a tuple of my permutated axes, non_indexable at the front """
- return tuple(itertools.chain([a[0] for a in self.non_index_axes], [a.axis for a in self.index_axes]))
+ return tuple(itertools.chain([int(a[0]) for a in self.non_index_axes], [int(a.axis) for a in self.index_axes]))
def queryables(self):
""" return a dict of the kinds allowable columns for this object """
@@ -2075,7 +2127,7 @@ def create_axes(self, axes, obj, validate=True, nan_rep=None, data_columns=None,
validate: validate the obj against an existiing object already written
min_itemsize: a dict of the min size for a column in bytes
nan_rep : a values to use for string column nan_rep
- data_columns : a list of columns that we want to create separate to allow indexing
+ data_columns : a list of columns that we want to create separate to allow indexing (or True will force all colummns)
"""
@@ -2109,12 +2161,6 @@ def create_axes(self, axes, obj, validate=True, nan_rep=None, data_columns=None,
nan_rep = 'nan'
self.nan_rep = nan_rep
- # convert the objects if we can to better divine dtypes
- try:
- obj = obj.convert_objects()
- except:
- pass
-
# create axes to index and non_index
index_axes_map = dict()
for i, a in enumerate(obj.axes):
@@ -2160,6 +2206,9 @@ def create_axes(self, axes, obj, validate=True, nan_rep=None, data_columns=None,
if data_columns is not None and len(self.non_index_axes):
axis = self.non_index_axes[0][0]
axis_labels = self.non_index_axes[0][1]
+ if data_columns is True:
+ data_columns = axis_labels
+
data_columns = [c for c in data_columns if c in axis_labels]
if len(data_columns):
blocks = block_obj.reindex_axis(Index(axis_labels) - Index(
@@ -2202,7 +2251,7 @@ def create_axes(self, axes, obj, validate=True, nan_rep=None, data_columns=None,
except (NotImplementedError):
raise
except (Exception), detail:
- raise Exception("cannot find the correct atom type -> [dtype->%s] %s" % (b.dtype.name, str(detail)))
+ raise Exception("cannot find the correct atom type -> [dtype->%s,items->%s] %s" % (b.dtype.name, b.items, str(detail)))
j += 1
# validate the axes if we have an existing table
@@ -2517,8 +2566,6 @@ def write_data_chunk(self, indexes, mask, search, values):
self.table.append(rows)
self.table.flush()
except (Exception), detail:
- import pdb
- pdb.set_trace()
raise Exception(
"tables cannot write this data -> %s" % str(detail))
@@ -2630,6 +2677,51 @@ def read(self, where=None, columns=None, **kwargs):
return df
+class GenericTable(AppendableFrameTable):
+ """ a table that read/writes the generic pytables table format """
+ pandas_kind = 'frame_table'
+ table_type = 'generic_table'
+ ndim = 2
+ obj_type = DataFrame
+
+ @property
+ def pandas_type(self):
+ return self.pandas_kind
+
+ @property
+ def storable(self):
+ return getattr(self.group,'table',None) or self.group
+
+ def get_attrs(self):
+ """ retrieve our attributes """
+ self.non_index_axes = []
+ self.nan_rep = None
+ self.levels = []
+ t = self.table
+ self.index_axes = [ a.infer(t) for a in self.indexables if a.is_an_indexable ]
+ self.values_axes = [ a.infer(t) for a in self.indexables if not a.is_an_indexable ]
+ self.data_columns = [ a.name for a in self.values_axes ]
+
+ @property
+ def indexables(self):
+ """ create the indexables from the table description """
+ if self._indexables is None:
+
+ d = self.description
+
+ # the index columns is just a simple index
+ self._indexables = [ GenericIndexCol(name='index',axis=0) ]
+
+ for i, n in enumerate(d._v_names):
+
+ dc = GenericDataIndexableCol(name = n, pos=i, values = [ n ], version = self.version)
+ self._indexables.append(dc)
+
+ return self._indexables
+
+ def write(self, **kwargs):
+ raise NotImplementedError("cannot write on an generic table")
+
class AppendableMultiFrameTable(AppendableFrameTable):
""" a frame with a multi-index """
table_type = 'appendable_multiframe'
@@ -2643,6 +2735,8 @@ def table_type_short(self):
def write(self, obj, data_columns=None, **kwargs):
if data_columns is None:
data_columns = []
+ elif data_columns is True:
+ data_columns = obj.columns[:]
for n in obj.index.names:
if n not in data_columns:
data_columns.insert(0, n)
diff --git a/pandas/io/tests/pytables_native.h5 b/pandas/io/tests/pytables_native.h5
new file mode 100644
index 0000000000000..a01b0f1dca3c0
Binary files /dev/null and b/pandas/io/tests/pytables_native.h5 differ
diff --git a/pandas/io/tests/pytables_native2.h5 b/pandas/io/tests/pytables_native2.h5
new file mode 100644
index 0000000000000..4786eea077533
Binary files /dev/null and b/pandas/io/tests/pytables_native2.h5 differ
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index cb2d9dd2af58f..5e0fe8d292e16 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -9,7 +9,7 @@
from pandas import (Series, DataFrame, Panel, MultiIndex, bdate_range,
date_range, Index)
-from pandas.io.pytables import HDFStore, get_store, Term, IncompatibilityWarning
+from pandas.io.pytables import HDFStore, get_store, Term, IncompatibilityWarning, PerformanceWarning
import pandas.util.testing as tm
from pandas.tests.test_series import assert_series_equal
from pandas.tests.test_frame import assert_frame_equal
@@ -111,6 +111,12 @@ def test_contains(self):
self.assert_('/foo/b' not in self.store)
self.assert_('bar' not in self.store)
+ # GH 2694
+ warnings.filterwarnings('ignore', category=tables.NaturalNameWarning)
+ self.store['node())'] = tm.makeDataFrame()
+ self.assert_('node())' in self.store)
+ warnings.filterwarnings('always', category=tables.NaturalNameWarning)
+
def test_versioning(self):
self.store['a'] = tm.makeTimeSeries()
self.store['b'] = tm.makeDataFrame()
@@ -254,6 +260,28 @@ def test_put_integer(self):
df = DataFrame(np.random.randn(50, 100))
self._check_roundtrip(df, tm.assert_frame_equal)
+ def test_put_mixed_type(self):
+ df = tm.makeTimeDataFrame()
+ df['obj1'] = 'foo'
+ df['obj2'] = 'bar'
+ df['bool1'] = df['A'] > 0
+ df['bool2'] = df['B'] > 0
+ df['bool3'] = True
+ df['int1'] = 1
+ df['int2'] = 2
+ df['timestamp1'] = Timestamp('20010102')
+ df['timestamp2'] = Timestamp('20010103')
+ df['datetime1'] = datetime.datetime(2001, 1, 2, 0, 0)
+ df['datetime2'] = datetime.datetime(2001, 1, 3, 0, 0)
+ df.ix[3:6, ['obj1']] = np.nan
+ df = df.consolidate().convert_objects()
+ self.store.remove('df')
+ warnings.filterwarnings('ignore', category=PerformanceWarning)
+ self.store.put('df',df)
+ expected = self.store.get('df')
+ tm.assert_frame_equal(expected,df)
+ warnings.filterwarnings('always', category=PerformanceWarning)
+
def test_append(self):
df = tm.makeTimeDataFrame()
@@ -697,7 +725,7 @@ def test_big_table_frame(self):
print "\nbig_table frame [%s] -> %5.2f" % (rows, time.time() - x)
def test_big_table2_frame(self):
- # this is a really big table: 2.5m rows x 300 float columns, 20 string
+ # this is a really big table: 1m rows x 60 float columns, 20 string, 20 datetime
# columns
raise nose.SkipTest('no big table2 frame')
@@ -705,10 +733,12 @@ def test_big_table2_frame(self):
print "\nbig_table2 start"
import time
start_time = time.time()
- df = DataFrame(np.random.randn(2.5 * 1000 * 1000, 300), index=range(int(
- 2.5 * 1000 * 1000)), columns=['E%03d' % i for i in xrange(300)])
- for x in range(20):
+ df = DataFrame(np.random.randn(1000 * 1000, 60), index=xrange(int(
+ 1000 * 1000)), columns=['E%03d' % i for i in xrange(60)])
+ for x in xrange(20):
df['String%03d' % x] = 'string%03d' % x
+ for x in xrange(20):
+ df['datetime%03d' % x] = datetime.datetime(2001, 1, 2, 0, 0)
print "\nbig_table2 frame (creation of df) [rows->%s] -> %5.2f" % (len(df.index), time.time() - start_time)
fn = 'big_table2.h5'
@@ -722,7 +752,7 @@ def f(chunksize):
store.close()
return r
- for c in [10000, 50000, 100000, 250000]:
+ for c in [10000, 50000, 250000]:
start_time = time.time()
print "big_table2 frame [chunk->%s]" % c
rows = f(c)
@@ -731,6 +761,35 @@ def f(chunksize):
finally:
os.remove(fn)
+ def test_big_put_frame(self):
+ raise nose.SkipTest('no big put frame')
+
+ print "\nbig_put start"
+ import time
+ start_time = time.time()
+ df = DataFrame(np.random.randn(1000 * 1000, 60), index=xrange(int(
+ 1000 * 1000)), columns=['E%03d' % i for i in xrange(60)])
+ for x in xrange(20):
+ df['String%03d' % x] = 'string%03d' % x
+ for x in xrange(20):
+ df['datetime%03d' % x] = datetime.datetime(2001, 1, 2, 0, 0)
+
+ print "\nbig_put frame (creation of df) [rows->%s] -> %5.2f" % (len(df.index), time.time() - start_time)
+ fn = 'big_put.h5'
+
+ try:
+
+ start_time = time.time()
+ store = HDFStore(fn, mode='w')
+ store.put('df', df)
+ store.close()
+
+ print df.get_dtype_counts()
+ print "big_put frame [shape->%s] -> %5.2f" % (df.shape, time.time() - start_time)
+
+ finally:
+ os.remove(fn)
+
def test_big_table_panel(self):
raise nose.SkipTest('no big table panel')
@@ -748,7 +807,7 @@ def test_big_table_panel(self):
x = time.time()
try:
store = HDFStore(self.scratchpath)
- store.prof_append('wp', wp)
+ store.append('wp', wp)
rows = store.root.wp.table.nrows
recons = store.select('wp')
finally:
@@ -817,15 +876,35 @@ def test_table_index_incompatible_dtypes(self):
def test_table_values_dtypes_roundtrip(self):
df1 = DataFrame({'a': [1, 2, 3]}, dtype='f8')
- self.store.append('df1', df1)
- assert df1.dtypes == self.store['df1'].dtypes
+ self.store.append('df_f8', df1)
+ assert df1.dtypes == self.store['df_f8'].dtypes
df2 = DataFrame({'a': [1, 2, 3]}, dtype='i8')
- self.store.append('df2', df2)
- assert df2.dtypes == self.store['df2'].dtypes
+ self.store.append('df_i8', df2)
+ assert df2.dtypes == self.store['df_i8'].dtypes
# incompatible dtype
- self.assertRaises(Exception, self.store.append, 'df2', df1)
+ self.assertRaises(Exception, self.store.append, 'df_i8', df1)
+
+ # check creation/storage/retrieval of float32 (a bit hacky to actually create them thought)
+ df1 = DataFrame(np.array([[1],[2],[3]],dtype='f4'),columns = ['A'])
+ self.store.append('df_f4', df1)
+ assert df1.dtypes == self.store['df_f4'].dtypes
+ assert df1.dtypes[0] == 'float32'
+
+ # check with mixed dtypes (but not multi float types)
+ df1 = DataFrame(np.array([[1],[2],[3]],dtype='f4'),columns = ['float32'])
+ df1['string'] = 'foo'
+ self.store.append('df_mixed_dtypes1', df1)
+ assert (df1.dtypes == self.store['df_mixed_dtypes1'].dtypes).all() == True
+ assert df1.dtypes[0] == 'float32'
+ assert df1.dtypes[1] == 'object'
+
+ ### this is not supported, e.g. mixed float32/float64 blocks ###
+ #df1 = DataFrame(np.array([[1],[2],[3]],dtype='f4'),columns = ['float32'])
+ #df1['float64'] = 1.0
+ #self.store.append('df_mixed_dtypes2', df1)
+ #assert df1.dtypes == self.store['df_mixed_dtypes2'].dtypes).all() == True
def test_table_mixed_dtypes(self):
@@ -1159,15 +1238,19 @@ def test_tuple_index(self):
idx = [(0., 1.), (2., 3.), (4., 5.)]
data = np.random.randn(30).reshape((3, 10))
DF = DataFrame(data, index=idx, columns=col)
+ warnings.filterwarnings('ignore', category=PerformanceWarning)
self._check_roundtrip(DF, tm.assert_frame_equal)
+ warnings.filterwarnings('always', category=PerformanceWarning)
def test_index_types(self):
values = np.random.randn(2)
func = lambda l, r: tm.assert_series_equal(l, r, True, True, True)
+ warnings.filterwarnings('ignore', category=PerformanceWarning)
ser = Series(values, [0, 'y'])
self._check_roundtrip(ser, func)
+ warnings.filterwarnings('always', category=PerformanceWarning)
ser = Series(values, [datetime.datetime.today(), 0])
self._check_roundtrip(ser, func)
@@ -1175,11 +1258,15 @@ def test_index_types(self):
ser = Series(values, ['y', 0])
self._check_roundtrip(ser, func)
+ warnings.filterwarnings('ignore', category=PerformanceWarning)
ser = Series(values, [datetime.date.today(), 'a'])
self._check_roundtrip(ser, func)
+ warnings.filterwarnings('always', category=PerformanceWarning)
+ warnings.filterwarnings('ignore', category=PerformanceWarning)
ser = Series(values, [1.23, 'b'])
self._check_roundtrip(ser, func)
+ warnings.filterwarnings('always', category=PerformanceWarning)
ser = Series(values, [1, 1.53])
self._check_roundtrip(ser, func)
@@ -1450,6 +1537,13 @@ def test_select(self):
expected = df[df.A > 0].reindex(columns=['A', 'B'])
tm.assert_frame_equal(expected, result)
+ # all a data columns
+ self.store.remove('df')
+ self.store.append('df', df, data_columns=True)
+ result = self.store.select('df', ['A > 0'], columns=['A', 'B'])
+ expected = df[df.A > 0].reindex(columns=['A', 'B'])
+ tm.assert_frame_equal(expected, result)
+
# with a data column, but different columns
self.store.remove('df')
self.store.append('df', df, data_columns=['A'])
@@ -1739,6 +1833,16 @@ def _check_roundtrip_table(self, obj, comparator, compression=False):
store.close()
os.remove(self.scratchpath)
+ def test_pytables_native_read(self):
+ pth = curpath()
+ store = HDFStore(os.path.join(pth, 'pytables_native.h5'), 'r')
+ d2 = store['detector/readout']
+ store.close()
+ store = HDFStore(os.path.join(pth, 'pytables_native2.h5'), 'r')
+ str(store)
+ d1 = store['detector']
+ store.close()
+
def test_legacy_read(self):
pth = curpath()
store = HDFStore(os.path.join(pth, 'legacy.h5'), 'r')
@@ -1760,7 +1864,6 @@ def test_legacy_table_read(self):
store.select('df2', typ='legacy_frame')
# old version warning
- import warnings
warnings.filterwarnings('ignore', category=IncompatibilityWarning)
self.assertRaises(
Exception, store.select, 'wp1', Term('minor_axis', '=', 'B'))
@@ -1812,7 +1915,7 @@ def do_copy(f = None, new_f = None, keys = None, propindexes = True, **kwargs):
if a.is_indexed:
self.assert_(new_t[a.name].is_indexed == True)
- except:
+ except (Exception), detail:
pass
finally:
store.close()
@@ -1899,9 +2002,11 @@ def test_tseries_indices_frame(self):
def test_unicode_index(self):
unicode_values = [u'\u03c3', u'\u03c3\u03c3']
-
+ warnings.filterwarnings('ignore', category=PerformanceWarning)
s = Series(np.random.randn(len(unicode_values)), unicode_values)
self._check_roundtrip(s, tm.assert_series_equal)
+ warnings.filterwarnings('always', category=PerformanceWarning)
+
def test_store_datetime_mixed(self):
df = DataFrame(
| - `shape` attribute on `GenericStorer` now correctly returns a tuple (not a string)
- `data_orientation` was potentially returning non-ints in a tuple; fixes GH #2671 (also fixed downstream)
- fix for `__contains__` with a non-natural name (GH #2694)
- added ability to read generic PyTables flavor tables
- added `PerformanceWarning`, triggered when putting non-endemic types (only on Storers)
- correctly store `float32` dtypes (when not mixed with `float64`, otherwise converted)
- can close GH #1985 as well
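The `__contains__` fix replaces `re.search` with exact path matching; a standalone sketch of the new comparison (hypothetical helper name, logic taken from the patch):

```python
def node_matches(pathname, key):
    # pathname is the node's full path, e.g. '/node())'; comparing
    # exactly (with or without the leading '/') avoids treating keys
    # with regex metacharacters like '(' or ')' as patterns
    return pathname == key or pathname[1:] == key

assert node_matches('/node())', 'node())')   # non-natural name, GH 2694
assert node_matches('/foo', '/foo')
assert not node_matches('/foobar', 'foo')    # no more substring matches
```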
| https://api.github.com/repos/pandas-dev/pandas/pulls/2675 | 2013-01-10T12:37:24Z | 2013-01-20T03:04:37Z | 2013-01-20T03:04:37Z | 2014-06-17T13:14:00Z |
DOC: Added missing escape for a backslash | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 1cbdfaadbcb33..1b61de7bf8281 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1,4 +1,3 @@
-
.. _io:
.. currentmodule:: pandas
@@ -43,7 +42,7 @@ data into a DataFrame object. They can take a number of arguments:
- ``sep`` or ``delimiter``: A delimiter / separator to split fields
on. `read_csv` is capable of inferring the delimiter automatically in some
cases by "sniffing." The separator may be specified as a regular
- expression; for instance you may use '\|\s*' to indicate a pipe plus
+ expression; for instance you may use '\|\\s*' to indicate a pipe plus
arbitrary whitespace.
- ``delim_whitespace``: Parse whitespace-delimited (spaces or tabs) file
(much faster than using a regular expression)
| This seems to be the way the RegEx was meant
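The corrected docs text renders as the regex `\|\s*` (an escaped pipe followed by optional whitespace), which is what a separator like `sep=r'\|\s*'` would mean to `read_csv`. A quick check of the pattern with the stdlib `re` module:

```python
import re

# The pattern from the docs: an escaped pipe plus arbitrary trailing whitespace.
pattern = re.compile(r'\|\s*')

line = "a|  b| c|d"
fields = pattern.split(line)
print(fields)  # ['a', 'b', 'c', 'd']
```

Without the second backslash the docs would have shown `\|s*`, i.e. a pipe followed by literal `s` characters, which is not the intended meaning.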
| https://api.github.com/repos/pandas-dev/pandas/pulls/2674 | 2013-01-10T10:51:18Z | 2013-01-10T14:44:19Z | 2013-01-10T14:44:19Z | 2014-07-05T04:29:14Z |
DOC: Fix grammar typo in dsintro, "a series is alike" -> "a series is like" | diff --git a/doc/source/dsintro.rst b/doc/source/dsintro.rst
index 5cd5c60bbeb29..362ef8ef7d7fb 100644
--- a/doc/source/dsintro.rst
+++ b/doc/source/dsintro.rst
@@ -129,7 +129,7 @@ We will address array-based indexing in a separate :ref:`section <indexing>`.
Series is dict-like
~~~~~~~~~~~~~~~~~~~
-A Series is alike a fixed-size dict in that you can get and set values by index
+A Series is like a fixed-size dict in that you can get and set values by index
label:
.. ipython :: python
| Fixed a minor typo.
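A quick illustration of the sentence being fixed, getting and setting values by index label on a Series, just like a fixed-size dict (assumes pandas is importable as `pd`):

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0], index=['a', 'b', 'c'])

# Get a value by label.
print(s['a'])    # 1.0

# Set a value by label.
s['b'] = 20.0

# Membership tests check the index labels, as with a dict's keys.
print('b' in s)  # True
```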
| https://api.github.com/repos/pandas-dev/pandas/pulls/2673 | 2013-01-10T05:38:14Z | 2013-01-10T14:44:53Z | 2013-01-10T14:44:53Z | 2014-07-07T20:35:40Z |
add support for microsecond periods (GH2145) | diff --git a/pandas/src/datetime.pxd b/pandas/src/datetime.pxd
index 1a977aab48514..9ae116b06c3b2 100644
--- a/pandas/src/datetime.pxd
+++ b/pandas/src/datetime.pxd
@@ -1,7 +1,5 @@
-from numpy cimport int64_t, int32_t, npy_int64, npy_int32, ndarray
-from cpython cimport PyObject
-
-from cpython cimport PyUnicode_Check, PyUnicode_AsASCIIString
+from numpy cimport int64_t as i8, int32_t as i4
+from cpython cimport PyObject, PyUnicode_Check, PyUnicode_AsASCIIString
cdef extern from "headers/stdint.h":
@@ -9,7 +7,6 @@ cdef extern from "headers/stdint.h":
enum: INT32_MIN
-
cdef extern from "datetime.h":
ctypedef class datetime.date [object PyDateTime_Date]:
@@ -45,8 +42,8 @@ cdef extern from "datetime_helper.h":
cdef extern from "numpy/ndarrayobject.h":
- ctypedef int64_t npy_timedelta
- ctypedef int64_t npy_datetime
+ ctypedef i8 npy_timedelta
+ ctypedef i8 npy_datetime
ctypedef enum NPY_CASTING:
NPY_NO_CASTING
@@ -82,8 +79,7 @@ cdef extern from "datetime/np_datetime.h":
PANDAS_FR_as
ctypedef struct pandas_datetimestruct:
- npy_int64 year
- npy_int32 month, day, hour, min, sec, us, ps, as
+ i8 year, month, day, hour, min, sec, us, ps, as
int convert_pydatetime_to_datetimestruct(PyObject *obj,
pandas_datetimestruct *out,
@@ -98,7 +94,7 @@ cdef extern from "datetime/np_datetime.h":
int days_per_month_table[2][12]
int dayofweek(int y, int m, int d)
- int is_leapyear(int64_t year)
+ int is_leapyear(i8 year)
PANDAS_DATETIMEUNIT get_datetime64_unit(object o)
cdef extern from "datetime/np_datetime_strings.h":
@@ -145,7 +141,7 @@ cdef inline int _cstring_to_dts(char *val, int length,
return result
-cdef inline object _datetime64_to_datetime(int64_t val):
+cdef inline object _datetime64_to_datetime(i8 val):
cdef pandas_datetimestruct dts
pandas_datetime_to_datetimestruct(val, PANDAS_FR_ns, &dts)
return _dts_to_pydatetime(&dts)
@@ -155,7 +151,7 @@ cdef inline object _dts_to_pydatetime(pandas_datetimestruct *dts):
dts.day, dts.hour,
dts.min, dts.sec, dts.us)
-cdef inline int64_t _pydatetime_to_dts(object val, pandas_datetimestruct *dts):
+cdef inline i8 _pydatetime_to_dts(object val, pandas_datetimestruct *dts):
dts.year = PyDateTime_GET_YEAR(val)
dts.month = PyDateTime_GET_MONTH(val)
dts.day = PyDateTime_GET_DAY(val)
@@ -166,8 +162,7 @@ cdef inline int64_t _pydatetime_to_dts(object val, pandas_datetimestruct *dts):
dts.ps = dts.as = 0
return pandas_datetimestruct_to_datetime(PANDAS_FR_ns, dts)
-cdef inline int64_t _dtlike_to_datetime64(object val,
- pandas_datetimestruct *dts):
+cdef inline i8 _dtlike_to_datetime64(object val, pandas_datetimestruct *dts):
dts.year = val.year
dts.month = val.month
dts.day = val.day
@@ -178,12 +173,10 @@ cdef inline int64_t _dtlike_to_datetime64(object val,
dts.ps = dts.as = 0
return pandas_datetimestruct_to_datetime(PANDAS_FR_ns, dts)
-cdef inline int64_t _date_to_datetime64(object val,
- pandas_datetimestruct *dts):
+cdef inline i8 _date_to_datetime64(object val, pandas_datetimestruct *dts):
dts.year = PyDateTime_GET_YEAR(val)
dts.month = PyDateTime_GET_MONTH(val)
dts.day = PyDateTime_GET_DAY(val)
dts.hour = dts.min = dts.sec = dts.us = 0
dts.ps = dts.as = 0
return pandas_datetimestruct_to_datetime(PANDAS_FR_ns, dts)
-
diff --git a/pandas/src/period.c b/pandas/src/period.c
index 4e7ab44c7b150..72819a54d34da 100644
--- a/pandas/src/period.c
+++ b/pandas/src/period.c
@@ -13,62 +13,68 @@
* Code derived from scikits.timeseries
* ------------------------------------------------------------------*/
+#include <stdbool.h>
-static int mod_compat(int x, int m) {
- int result = x % m;
- if (result < 0) return result + m;
- return result;
+static i8
+mod_compat(i8 x, i8 m)
+{
+ i8 result = x % m;
+
+ if (result < 0)
+ result += m;
+
+ return result;
}
-static int floordiv(int x, int divisor) {
- if (x < 0) {
- if (mod_compat(x, divisor)) {
- return x / divisor - 1;
- }
- else return x / divisor;
- } else {
- return x / divisor;
- }
+static i8
+floordiv(i8 x, i8 divisor)
+{
+ i8 x_div_d = x / divisor;
+
+ if (x < 0 && mod_compat(x, divisor))
+ --x_div_d;
+
+ return x_div_d;
}
static asfreq_info NULL_AF_INFO;
/* Table with day offsets for each month (0-based, without and with leap) */
-static int month_offset[2][13] = {
+static i8 month_offset[2][13] = {
{ 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365 },
{ 0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366 }
};
/* Table of number of days in a month (0-based, without and with leap) */
-static int days_in_month[2][12] = {
+static i8 days_in_month[2][12] = {
{ 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 },
{ 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 }
};
/* Return 1/0 iff year points to a leap year in calendar. */
-static int dInfoCalc_Leapyear(npy_int64 year, int calendar)
+static bool
+dInfoCalc_Leapyear(i8 year, enum CalendarType calendar)
{
- if (calendar == GREGORIAN_CALENDAR) {
- return (year % 4 == 0) && ((year % 100 != 0) || (year % 400 == 0));
- } else {
- return (year % 4 == 0);
- }
+ bool ymod4_is0 = year % 4 == 0;
+
+ if (calendar == GREGORIAN)
+ return ymod4_is0 && (year % 100 != 0 || year % 400 == 0);
+
+ return ymod4_is0;
}
/* Return the day of the week for the given absolute date. */
-static int dInfoCalc_DayOfWeek(npy_int64 absdate)
+static i8
+dInfoCalc_DayOfWeek(i8 absdate)
{
- int day_of_week;
-
- if (absdate >= 1) {
- day_of_week = (absdate - 1) % 7;
- } else {
- day_of_week = 6 - ((-absdate) % 7);
- }
- return day_of_week;
+ return absdate >= 1 ? (absdate - 1) % 7 : 6 - (-absdate % 7);
}
-static int monthToQuarter(int month) { return ((month-1)/3)+1; }
+static i8
+monthToQuarter(i8 month)
+{
+ return (month - 1) / 3 + 1;
+}
/* Return the year offset, that is the absolute date of the day
31.12.(year-1) in the given calendar.
@@ -78,66 +84,74 @@ static int monthToQuarter(int month) { return ((month-1)/3)+1; }
using the Gregorian Epoch) value by two days because the Epoch
(0001-01-01) in the Julian calendar lies 2 days before the Epoch in
the Gregorian calendar. */
-static int dInfoCalc_YearOffset(npy_int64 year, int calendar)
+static i8
+dInfoCalc_YearOffset(i8 yr, enum CalendarType calendar)
{
- year--;
- if (calendar == GREGORIAN_CALENDAR) {
- if (year >= 0 || -1/4 == -1)
- return year*365 + year/4 - year/100 + year/400;
- else
- return year*365 + (year-3)/4 - (year-99)/100 + (year-399)/400;
- }
- else if (calendar == JULIAN_CALENDAR) {
- if (year >= 0 || -1/4 == -1)
- return year*365 + year/4 - 2;
+ i8 year = yr;
+ --year;
+
+ if (calendar == GREGORIAN)
+ if (year >= 0 || -1 / 4 == -1)
+ return year * 365 + year / 4 - year / 100 + year / 400;
+ else
+ return year * 365 + (year - 3) / 4 - (year - 99) / 100 +
+ (year - 399) / 400;
+ else if (calendar == JULIAN)
+ if (year >= 0 || -1 / 4 == -1)
+ return year * 365 + year / 4 - 2;
+ else
+ return year * 365 + (year - 3) / 4 - 2;
else
- return year*365 + (year-3)/4 - 2;
- }
- Py_Error(PyExc_ValueError, "unknown calendar");
- onError:
+ Py_Error(PyExc_ValueError, "unknown calendar");
+onError:
return INT_ERR_CODE;
}
/* Set the instance's value using the given date and time. calendar may be set
- * to the flags: GREGORIAN_CALENDAR, JULIAN_CALENDAR to indicate the calendar
+ * to the flags: GREGORIAN, JULIAN to indicate the calendar
* to be used. */
-static int dInfoCalc_SetFromDateAndTime(struct date_info *dinfo,
- int year, int month, int day, int hour, int minute, double second,
- int calendar)
+static i8
+dInfoCalc_SetFromDateAndTime(date_info *dinfo, i8 year, i8 month, i8 day,
+ i8 hour, i8 minute, i8 second, i8 microsecond,
+ enum CalendarType calendar)
{
/* Calculate the absolute date */
{
- int leap;
- npy_int64 absdate;
- int yearoffset;
+ i8 leap, absdate, yearoffset;
/* Range check */
Py_AssertWithArg(year > -(INT_MAX / 366) && year < (INT_MAX / 366),
- PyExc_ValueError,
- "year out of range: %i",
- year);
+ PyExc_ValueError,
+ "year out of range: %li",
+ year);
/* Is it a leap year ? */
leap = dInfoCalc_Leapyear(year, calendar);
/* Negative month values indicate months relative to the years end */
- if (month < 0) month += 13;
+ if (month < 0)
+ month += 13;
+
Py_AssertWithArg(month >= 1 && month <= 12,
- PyExc_ValueError,
- "month out of range (1-12): %i",
- month);
+ PyExc_ValueError,
+ "month out of range (1-12): %li",
+ month);
/* Negative values indicate days relative to the months end */
- if (day < 0) day += days_in_month[leap][month - 1] + 1;
+ if (day < 0)
+ day += days_in_month[leap][month - 1] + 1;
+
Py_AssertWithArg(day >= 1 && day <= days_in_month[leap][month - 1],
- PyExc_ValueError,
- "day out of range: %i",
- day);
+ PyExc_ValueError,
+ "day out of range: %li",
+ day);
yearoffset = dInfoCalc_YearOffset(year, calendar);
- if (PyErr_Occurred()) goto onError;
+
+ if (PyErr_Occurred())
+ goto onError;
absdate = day + month_offset[leap][month - 1] + yearoffset;
@@ -145,11 +159,11 @@ static int dInfoCalc_SetFromDateAndTime(struct date_info *dinfo,
dinfo->year = year;
dinfo->month = month;
- dinfo->quarter = ((month-1)/3)+1;
+ dinfo->quarter = (month - 1) / 3 + 1;
dinfo->day = day;
dinfo->day_of_week = dInfoCalc_DayOfWeek(absdate);
- dinfo->day_of_year = (short)(absdate - yearoffset);
+ dinfo->day_of_year = absdate - yearoffset;
dinfo->calendar = calendar;
}
@@ -157,60 +171,74 @@ static int dInfoCalc_SetFromDateAndTime(struct date_info *dinfo,
/* Calculate the absolute time */
{
Py_AssertWithArg(hour >= 0 && hour <= 23,
- PyExc_ValueError,
- "hour out of range (0-23): %i",
- hour);
+ PyExc_ValueError,
+ "hour out of range (0-23): %li",
+ hour);
Py_AssertWithArg(minute >= 0 && minute <= 59,
- PyExc_ValueError,
- "minute out of range (0-59): %i",
- minute);
- Py_AssertWithArg(second >= (double)0.0 &&
- (second < (double)60.0 ||
- (hour == 23 && minute == 59 &&
- second < (double)61.0)),
- PyExc_ValueError,
- "second out of range (0.0 - <60.0; <61.0 for 23:59): %f",
- second);
-
- dinfo->abstime = (double)(hour*3600 + minute*60) + second;
+ PyExc_ValueError,
+ "minute out of range (0-59): %li",
+ minute);
+ Py_AssertWithArg(second >= 0 &&
+ (second < 60L || (hour == 23 && minute == 59 &&
+ second < 61)),
+ PyExc_ValueError,
+ "second out of range (0 - <60L; <61 for 23:59): %li",
+ second);
+ Py_AssertWithArg(microsecond >= 0 && (microsecond < 1000000 ||
+ (hour == 23 && minute == 59 &&
+ second < 61 &&
+ microsecond < 1000001)),
+ PyExc_ValueError,
+                         "microsecond out of range (0 - <1000000; <1000001 for 23:59:59): %li",
+ microsecond);
+
+ dinfo->abstime = hour * US_PER_HOUR + minute * US_PER_MINUTE +
+ second * US_PER_SECOND + microsecond;
dinfo->hour = hour;
dinfo->minute = minute;
dinfo->second = second;
+ dinfo->microsecond = microsecond;
}
+
return 0;
- onError:
+onError:
return INT_ERR_CODE;
}
/* Sets the date part of the date_info struct using the indicated
calendar.
- XXX This could also be done using some integer arithmetics rather
- than with this iterative approach... */
-static
-int dInfoCalc_SetFromAbsDate(register struct date_info *dinfo,
- npy_int64 absdate, int calendar)
+   XXX This could also be done using some integer arithmetic rather
+ than with this iterative approach... */
+static i8
+dInfoCalc_SetFromAbsDate(register date_info *dinfo, i8 absdate,
+ enum CalendarType calendar)
{
- register npy_int64 year;
- npy_int64 yearoffset;
- int leap,dayoffset;
- int *monthoffset;
+ register i8 year;
+ i8 yearoffset, leap, dayoffset, *monthoffset = NULL;
+
/* Approximate year */
- if (calendar == GREGORIAN_CALENDAR) {
- year = (npy_int64)(((double)absdate) / 365.2425);
- } else if (calendar == JULIAN_CALENDAR) {
- year = (npy_int64)(((double)absdate) / 365.25);
- } else {
+ switch (calendar) {
+ case GREGORIAN:
+ year = absdate / 365.2425;
+ break;
+ case JULIAN:
+ year = absdate / 365.25;
+ break;
+ default:
Py_Error(PyExc_ValueError, "unknown calendar");
+ break;
}
- if (absdate > 0) year++;
+ if (absdate > 0)
+ ++year;
/* Apply corrections to reach the correct year */
while (1) {
+
/* Calculate the year offset */
yearoffset = dInfoCalc_YearOffset(year, calendar);
if (PyErr_Occurred())
@@ -219,7 +247,7 @@ int dInfoCalc_SetFromAbsDate(register struct date_info *dinfo,
/* Backward correction: absdate must be greater than the
yearoffset */
if (yearoffset >= absdate) {
- year--;
+ --year;
continue;
}
@@ -228,9 +256,10 @@ int dInfoCalc_SetFromAbsDate(register struct date_info *dinfo,
/* Forward correction: non leap years only have 365 days */
if (dayoffset > 365 && !leap) {
- year++;
+ ++year;
continue;
}
+
break;
}
@@ -240,12 +269,11 @@ int dInfoCalc_SetFromAbsDate(register struct date_info *dinfo,
/* Now iterate to find the month */
monthoffset = month_offset[leap];
{
- register int month;
+ register i8 month;
- for (month = 1; month < 13; month++) {
+ for (month = 1; month < 13; ++month)
if (monthoffset[month] >= dayoffset)
- break;
- }
+ break;
dinfo->month = month;
dinfo->quarter = monthToQuarter(month);
@@ -259,71 +287,88 @@ int dInfoCalc_SetFromAbsDate(register struct date_info *dinfo,
return 0;
- onError:
+onError:
return INT_ERR_CODE;
}
///////////////////////////////////////////////
// frequency specifc conversion routines
-// each function must take an integer fromDate and
+// each function must take an integer fromDate and
// a char relation ('S' or 'E' for 'START' or 'END')
///////////////////////////////////////////////////////////////////////
// helpers for frequency conversion routines //
-static npy_int64 DtoB_weekday(npy_int64 absdate) {
- return (((absdate) / 7) * 5) + (absdate) % 7 - BDAY_OFFSET;
+static i8
+DtoB_weekday(i8 absdate)
+{
+ return (absdate / 7) * 5 + absdate % 7 - BDAY_OFFSET;
}
-static npy_int64 DtoB_WeekendToMonday(npy_int64 absdate, int day_of_week) {
- if (day_of_week > 4) {
- //change to Monday after weekend
- absdate += (7 - day_of_week);
- }
+static i8
+DtoB_WeekendToMonday(i8 absdate, i8 day_of_week)
+{
+ if (day_of_week > 4)
+ // change to Monday after weekend
+ absdate += 7 - day_of_week;
+
return DtoB_weekday(absdate);
}
-static npy_int64 DtoB_WeekendToFriday(npy_int64 absdate, int day_of_week) {
- if (day_of_week > 4) {
- //change to friday before weekend
- absdate -= (day_of_week - 4);
- }
+static i8
+DtoB_WeekendToFriday(i8 absdate, i8 day_of_week)
+{
+ if (day_of_week > 4)
+ // change to friday before weekend
+ absdate -= day_of_week - 4;
+
return DtoB_weekday(absdate);
}
-static npy_int64 absdate_from_ymd(int y, int m, int d) {
- struct date_info tempDate;
- if (dInfoCalc_SetFromDateAndTime(&tempDate, y, m, d, 0, 0, 0, GREGORIAN_CALENDAR)) {
+static i8
+absdate_from_ymd(i8 y, i8 m, i8 d)
+{
+ date_info td;
+
+ if (dInfoCalc_SetFromDateAndTime(&td, y, m, d, 0, 0, 0, 0, GREGORIAN))
return INT_ERR_CODE;
- }
- return tempDate.absdate;
+
+ return td.absdate;
}
//************ FROM DAILY ***************
-static npy_int64 asfreq_DtoA(npy_int64 ordinal, char relation, asfreq_info *af_info) {
+static i8
+asfreq_DtoA(i8 ordinal, const char *relation, asfreq_info *af_info)
+{
+ date_info dinfo;
- struct date_info dinfo;
- if (dInfoCalc_SetFromAbsDate(&dinfo, ordinal + ORD_OFFSET,
- GREGORIAN_CALENDAR)) return INT_ERR_CODE;
- if (dinfo.month > af_info->to_a_year_end) {
- return (npy_int64)(dinfo.year + 1 - BASE_YEAR);
- }
- else {
- return (npy_int64)(dinfo.year - BASE_YEAR);
- }
+ if (dInfoCalc_SetFromAbsDate(&dinfo, ordinal + ORD_OFFSET, GREGORIAN))
+ return INT_ERR_CODE;
+
+ if (dinfo.month > af_info->to_a_year_end)
+ return dinfo.year + 1 - BASE_YEAR;
+ else
+ return dinfo.year - BASE_YEAR;
}
-static npy_int64 DtoQ_yq(npy_int64 ordinal, asfreq_info *af_info,
- int *year, int *quarter) {
- struct date_info dinfo;
- if (dInfoCalc_SetFromAbsDate(&dinfo, ordinal + ORD_OFFSET,
- GREGORIAN_CALENDAR)) return INT_ERR_CODE;
+static i8
+DtoQ_yq(i8 ordinal, asfreq_info *af_info, i8 *year, i8 *quarter)
+{
+ date_info dinfo;
+
+ if (dInfoCalc_SetFromAbsDate(&dinfo, ordinal + ORD_OFFSET, GREGORIAN))
+ return INT_ERR_CODE;
+
if (af_info->to_q_year_end != 12) {
dinfo.month -= af_info->to_q_year_end;
- if (dinfo.month <= 0) { dinfo.month += 12; }
- else { dinfo.year += 1; }
+
+ if (dinfo.month <= 0)
+ dinfo.month += 12;
+ else
+ ++dinfo.year;
+
dinfo.quarter = monthToQuarter(dinfo.month);
}
@@ -334,636 +379,1122 @@ static npy_int64 DtoQ_yq(npy_int64 ordinal, asfreq_info *af_info,
}
-static npy_int64 asfreq_DtoQ(npy_int64 ordinal, char relation, asfreq_info *af_info) {
-
- int year, quarter;
+static i8
+asfreq_DtoQ(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ i8 year, quarter;
- if (DtoQ_yq(ordinal, af_info, &year, &quarter) == INT_ERR_CODE) {
+ if (DtoQ_yq(ordinal, af_info, &year, &quarter) == INT_ERR_CODE)
return INT_ERR_CODE;
- }
- return (npy_int64)((year - BASE_YEAR) * 4 + quarter - 1);
+ return (year - BASE_YEAR) * 4 + quarter - 1;
}
-static npy_int64 asfreq_DtoM(npy_int64 ordinal, char relation, asfreq_info *af_info) {
+static i8
+asfreq_DtoM(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ date_info dinfo;
- struct date_info dinfo;
- if (dInfoCalc_SetFromAbsDate(&dinfo, ordinal + ORD_OFFSET, GREGORIAN_CALENDAR))
+ if (dInfoCalc_SetFromAbsDate(&dinfo, ordinal + ORD_OFFSET, GREGORIAN))
return INT_ERR_CODE;
- return (npy_int64)((dinfo.year - BASE_YEAR) * 12 + dinfo.month - 1);
+
+ return (dinfo.year - BASE_YEAR) * 12 + dinfo.month - 1;
}
-static npy_int64 asfreq_DtoW(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- return (ordinal + ORD_OFFSET - (1 + af_info->to_week_end))/7 + 1 - WEEK_OFFSET;
+static i8
+asfreq_DtoW(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return (ordinal + ORD_OFFSET - (1 + af_info->to_week_end)) / 7
+ + 1 - WEEK_OFFSET;
}
-static npy_int64 asfreq_DtoB(npy_int64 ordinal, char relation, asfreq_info *af_info) {
+static i8
+asfreq_DtoB(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ date_info dinfo;
- struct date_info dinfo;
- if (dInfoCalc_SetFromAbsDate(&dinfo, ordinal + ORD_OFFSET,
- GREGORIAN_CALENDAR)) return INT_ERR_CODE;
+ if (dInfoCalc_SetFromAbsDate(&dinfo, ordinal + ORD_OFFSET, GREGORIAN))
+ return INT_ERR_CODE;
- if (relation == 'S') {
+ if (!strcmp(relation, "S"))
return DtoB_WeekendToFriday(dinfo.absdate, dinfo.day_of_week);
- } else {
+ else
return DtoB_WeekendToMonday(dinfo.absdate, dinfo.day_of_week);
- }
}
// needed for getDateInfo function
-static npy_int64 asfreq_DtoD(npy_int64 ordinal, char relation, asfreq_info *af_info) { return ordinal; }
+static i8
+asfreq_DtoD(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return ordinal;
+}
-static npy_int64 asfreq_DtoHIGHFREQ(npy_int64 ordinal, char relation, npy_int64 per_day) {
- if (relation == 'S') {
- return ordinal * per_day;
- }
- else {
- return (ordinal+ 1) * per_day - 1;
- }
+static i8
+asfreq_DtoHIGHFREQ(i8 ordinal, const char* relation, i8 per_day)
+{
+ if (!strcmp(relation, "S"))
+ return ordinal * per_day;
+ else
+ return (ordinal + 1) * per_day - 1;
+}
+
+static i8
+asfreq_StoHIGHFREQ(i8 ordinal, const char* relation, i8 per_sec)
+{
+ if (!strcmp(relation, "S"))
+ return ordinal * per_sec;
+ else
+ return (ordinal + 1) * per_sec - 1;
+}
+
+static i8
+asfreq_DtoH(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoHIGHFREQ(ordinal, relation, 24L);
+}
+
+static i8
+asfreq_DtoT(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoHIGHFREQ(ordinal, relation, 24 * 60L);
+}
+
+static i8
+asfreq_DtoS(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoHIGHFREQ(ordinal, relation, 24 * 60L * 60L);
+}
+
+static i8
+asfreq_DtoU(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoHIGHFREQ(ordinal, relation, US_PER_DAY);
}
-static npy_int64 asfreq_DtoH(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoHIGHFREQ(ordinal, relation, 24); }
-static npy_int64 asfreq_DtoT(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoHIGHFREQ(ordinal, relation, 24*60); }
-static npy_int64 asfreq_DtoS(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoHIGHFREQ(ordinal, relation, 24*60*60); }
//************ FROM SECONDLY ***************
-static npy_int64 asfreq_StoD(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return (ordinal)/(60*60*24); }
+static i8
+asfreq_StoD(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return ordinal / (60L * 60L * 24L);
+}
-static npy_int64 asfreq_StoA(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoA(asfreq_StoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
+static i8
+asfreq_StoA(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoA(asfreq_StoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
-static npy_int64 asfreq_StoQ(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoQ(asfreq_StoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
+static i8
+asfreq_StoQ(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoQ(asfreq_StoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
-static npy_int64 asfreq_StoM(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoM(asfreq_StoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
+static i8
+asfreq_StoM(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoM(asfreq_StoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
-static npy_int64 asfreq_StoW(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoW(asfreq_StoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
+static i8
+asfreq_StoW(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoW(asfreq_StoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
-static npy_int64 asfreq_StoB(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoB(asfreq_StoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
+static i8
+asfreq_StoB(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoB(asfreq_StoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
-static npy_int64 asfreq_StoT(npy_int64 ordinal, char relation, asfreq_info *af_info) {
+static i8
+asfreq_StoT(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
return ordinal / 60;
}
-static npy_int64 asfreq_StoH(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- return ordinal / (60*60);
+static i8
+asfreq_StoH(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return ordinal / 3600;
}
//************ FROM MINUTELY ***************
-static npy_int64 asfreq_TtoD(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return (ordinal)/(60*24); }
-
-static npy_int64 asfreq_TtoA(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoA(asfreq_TtoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
-static npy_int64 asfreq_TtoQ(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoQ(asfreq_TtoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
-static npy_int64 asfreq_TtoM(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoM(asfreq_TtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
-static npy_int64 asfreq_TtoW(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoW(asfreq_TtoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
-static npy_int64 asfreq_TtoB(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoB(asfreq_TtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
-
-static npy_int64 asfreq_TtoH(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- return ordinal / 60;
+static i8
+asfreq_TtoD(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return ordinal / (60 * 24);
}
-static npy_int64 asfreq_TtoS(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- if (relation == 'S') {
- return ordinal*60; }
- else {
- return ordinal*60 + 59;
- }
+static i8
+asfreq_TtoA(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoA(asfreq_TtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_TtoQ(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoQ(asfreq_TtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_TtoM(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoM(asfreq_TtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_TtoW(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoW(asfreq_TtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_TtoB(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoB(asfreq_TtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_TtoH(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return ordinal / 60;
+}
+
+static i8
+asfreq_TtoS(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ i8 out = ordinal * 60;
+
+ if (strcmp(relation, "S"))
+ out += 59;
+
+ return out;
}
//************ FROM HOURLY ***************
-static npy_int64 asfreq_HtoD(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return ordinal / 24; }
-static npy_int64 asfreq_HtoA(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoA(asfreq_HtoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
-static npy_int64 asfreq_HtoQ(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoQ(asfreq_HtoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
-static npy_int64 asfreq_HtoM(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoM(asfreq_HtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
-static npy_int64 asfreq_HtoW(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoW(asfreq_HtoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
-static npy_int64 asfreq_HtoB(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoB(asfreq_HtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
+static i8
+asfreq_HtoD(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return ordinal / 24;
+}
+
+static i8
+asfreq_HtoA(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoA(asfreq_HtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_HtoQ(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoQ(asfreq_HtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_HtoM(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoM(asfreq_HtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_HtoW(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoW(asfreq_HtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_HtoB(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoB(asfreq_HtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
// calculation works out the same as TtoS, so we just call that function for HtoT
-static npy_int64 asfreq_HtoT(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_TtoS(ordinal, relation, &NULL_AF_INFO); }
+static i8
+asfreq_HtoT(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_TtoS(ordinal, relation, &NULL_AF_INFO);
+}
-static npy_int64 asfreq_HtoS(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- if (relation == 'S') {
- return ordinal*60*60;
- }
- else {
- return (ordinal + 1)*60*60 - 1;
- }
+static i8
+asfreq_HtoS(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ i8 ret, secs_per_hour = 3600;
+
+ if (!strcmp(relation, "S"))
+ ret = ordinal * secs_per_hour;
+ else
+ ret = (ordinal + 1) * secs_per_hour - 1;
+
+ return ret;
}
//************ FROM BUSINESS ***************
-static npy_int64 asfreq_BtoD(npy_int64 ordinal, char relation, asfreq_info *af_info)
- {
- ordinal += BDAY_OFFSET;
- return (((ordinal - 1) / 5) * 7 +
- mod_compat(ordinal - 1, 5) + 1 - ORD_OFFSET);
- }
+static i8
+asfreq_BtoD(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ i8 ord = ordinal;
+ ord += BDAY_OFFSET;
+ return (ord - 1) / 5 * 7 + mod_compat(ord - 1, 5) + 1 - ORD_OFFSET;
+}
-static npy_int64 asfreq_BtoA(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoA(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
+static i8
+asfreq_BtoA(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoA(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
-static npy_int64 asfreq_BtoQ(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoQ(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
+static i8
+asfreq_BtoQ(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoQ(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
-static npy_int64 asfreq_BtoM(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoM(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
+static i8
+asfreq_BtoM(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoM(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
-static npy_int64 asfreq_BtoW(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoW(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
+static i8
+asfreq_BtoW(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoW(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
-static npy_int64 asfreq_BtoH(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoH(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
+static i8
+asfreq_BtoH(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoH(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
-static npy_int64 asfreq_BtoT(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoT(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
+static i8
+asfreq_BtoT(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoT(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
-static npy_int64 asfreq_BtoS(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoS(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
+static i8
+asfreq_BtoS(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoS(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
//************ FROM WEEKLY ***************
-static npy_int64 asfreq_WtoD(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- ordinal += WEEK_OFFSET;
- if (relation == 'S') {
- return ordinal * 7 - 6 + af_info->from_week_end - ORD_OFFSET;
- }
- else {
- return ordinal * 7 + af_info->from_week_end - ORD_OFFSET;
- }
+static i8
+asfreq_WtoD(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ i8 k = 0, ord = ordinal;
+ ord += WEEK_OFFSET;
+ ord *= 7;
+
+ if (!strcmp(relation, "S"))
+ k = -6;
+
+ return ord + k + af_info->from_week_end - ORD_OFFSET;
+}
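The rewritten `asfreq_WtoD` folds the start/end branch into a single offset `k`. The arithmetic can be sketched standalone; this is an illustration only, with `WEEK_OFFSET`/`ORD_OFFSET` assumed to be 0 (the real values rebase the ordinals) and `from_week_end` passed in directly:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef int64_t i8;

/* illustration only: the real WEEK_OFFSET/ORD_OFFSET rebase the
 * ordinals; both are taken as 0 here */
#define WEEK_OFFSET 0LL
#define ORD_OFFSET  0LL

/* weekly ordinal -> daily ordinal, mirroring asfreq_WtoD: relation "S"
 * gives the first day of the week (6 days earlier), anything else the
 * last day */
static i8 WtoD(i8 ordinal, const char *relation, i8 from_week_end)
{
    i8 ord = (ordinal + WEEK_OFFSET) * 7;
    i8 k = strcmp(relation, "S") == 0 ? -6 : 0;
    return ord + k + from_week_end - ORD_OFFSET;
}
```

With both offsets zeroed, week 1 ends on day 7 and starts on day 1, so the "S" and "E" results differ by exactly six days.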
+
+static i8
+asfreq_WtoA(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoA(asfreq_WtoD(ordinal, "E", af_info), relation, af_info);
}
-static npy_int64 asfreq_WtoA(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- return asfreq_DtoA(asfreq_WtoD(ordinal, 'E', af_info), relation, af_info); }
-static npy_int64 asfreq_WtoQ(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- return asfreq_DtoQ(asfreq_WtoD(ordinal, 'E', af_info), relation, af_info); }
-static npy_int64 asfreq_WtoM(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- return asfreq_DtoM(asfreq_WtoD(ordinal, 'E', af_info), relation, &NULL_AF_INFO); }
+static i8
+asfreq_WtoQ(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoQ(asfreq_WtoD(ordinal, "E", af_info), relation, af_info);
+}
-static npy_int64 asfreq_WtoW(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoW(asfreq_WtoD(ordinal, relation, af_info), relation, af_info); }
+static i8
+asfreq_WtoM(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoM(asfreq_WtoD(ordinal, "E", af_info), relation,
+ &NULL_AF_INFO);
+}
-static npy_int64 asfreq_WtoB(npy_int64 ordinal, char relation, asfreq_info *af_info) {
+static i8
+asfreq_WtoW(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoW(asfreq_WtoD(ordinal, relation, af_info), relation,
+ af_info);
+}
- struct date_info dinfo;
- if (dInfoCalc_SetFromAbsDate(&dinfo,
- asfreq_WtoD(ordinal, relation, af_info) + ORD_OFFSET,
- GREGORIAN_CALENDAR)) return INT_ERR_CODE;
+static i8
+asfreq_WtoB(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ i8 wtod = asfreq_WtoD(ordinal, relation, af_info) + ORD_OFFSET;
- if (relation == 'S') {
- return DtoB_WeekendToMonday(dinfo.absdate, dinfo.day_of_week);
- }
- else {
- return DtoB_WeekendToFriday(dinfo.absdate, dinfo.day_of_week);
- }
+ date_info dinfo;
+ if (dInfoCalc_SetFromAbsDate(&dinfo, wtod, GREGORIAN))
+ return INT_ERR_CODE;
+
+ i8 (*f)(i8 absdate, i8 day_of_week) = NULL;
+ f = !strcmp(relation, "S") ? DtoB_WeekendToMonday : DtoB_WeekendToFriday;
+ return f(dinfo.absdate, dinfo.day_of_week);
+}
+
+static i8
+asfreq_WtoH(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoH(asfreq_WtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
}
-static npy_int64 asfreq_WtoH(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoH(asfreq_WtoD(ordinal, relation, af_info), relation, &NULL_AF_INFO); }
-static npy_int64 asfreq_WtoT(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoT(asfreq_WtoD(ordinal, relation, af_info), relation, &NULL_AF_INFO); }
-static npy_int64 asfreq_WtoS(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoS(asfreq_WtoD(ordinal, relation, af_info), relation, &NULL_AF_INFO); }
+static i8
+asfreq_WtoT(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoT(asfreq_WtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_WtoS(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoS(asfreq_WtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
//************ FROM MONTHLY ***************
-static void MtoD_ym(npy_int64 ordinal, int *y, int *m) {
+static void
+MtoD_ym(i8 ordinal, i8 *y, i8 *m)
+{
*y = floordiv(ordinal, 12) + BASE_YEAR;
*m = mod_compat(ordinal, 12) + 1;
}
-static npy_int64 asfreq_MtoD(npy_int64 ordinal, char relation, asfreq_info *af_info) {
+static i8
+asfreq_MtoD(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
- npy_int64 absdate;
- int y, m;
+ i8 absdate, y, m;
- if (relation == 'S') {
+ if (!strcmp(relation, "S")) {
MtoD_ym(ordinal, &y, &m);
- if ((absdate = absdate_from_ymd(y, m, 1)) == INT_ERR_CODE) return INT_ERR_CODE;
+
+ if ((absdate = absdate_from_ymd(y, m, 1)) == INT_ERR_CODE)
+ return INT_ERR_CODE;
+
return absdate - ORD_OFFSET;
- } else {
+ }
+ else {
MtoD_ym(ordinal + 1, &y, &m);
- if ((absdate = absdate_from_ymd(y, m, 1)) == INT_ERR_CODE) return INT_ERR_CODE;
+
+ if ((absdate = absdate_from_ymd(y, m, 1)) == INT_ERR_CODE)
+ return INT_ERR_CODE;
+
return absdate - 1 - ORD_OFFSET;
}
}
-static npy_int64 asfreq_MtoA(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- return asfreq_DtoA(asfreq_MtoD(ordinal, 'E', &NULL_AF_INFO), relation, af_info); }
+static i8
+asfreq_MtoA(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoA(asfreq_MtoD(ordinal, "E", &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_MtoQ(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoQ(asfreq_MtoD(ordinal, "E", &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_MtoW(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoW(asfreq_MtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_MtoB(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+
+ date_info dinfo;
+ i8 mtod = asfreq_MtoD(ordinal, relation, &NULL_AF_INFO) + ORD_OFFSET;
-static npy_int64 asfreq_MtoQ(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- return asfreq_DtoQ(asfreq_MtoD(ordinal, 'E', &NULL_AF_INFO), relation, af_info); }
+ if (dInfoCalc_SetFromAbsDate(&dinfo, mtod, GREGORIAN))
+ return INT_ERR_CODE;
-static npy_int64 asfreq_MtoW(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoW(asfreq_MtoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
+ i8 (*f)(i8 absdate, i8 day_of_week);
+ f = !strcmp(relation, "S") ? DtoB_WeekendToMonday : DtoB_WeekendToFriday;
-static npy_int64 asfreq_MtoB(npy_int64 ordinal, char relation, asfreq_info *af_info) {
+ return f(dinfo.absdate, dinfo.day_of_week);
+}
- struct date_info dinfo;
- if (dInfoCalc_SetFromAbsDate(&dinfo,
- asfreq_MtoD(ordinal, relation, &NULL_AF_INFO) + ORD_OFFSET,
- GREGORIAN_CALENDAR)) return INT_ERR_CODE;
+static i8
+asfreq_MtoH(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoH(asfreq_MtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
- if (relation == 'S') { return DtoB_WeekendToMonday(dinfo.absdate, dinfo.day_of_week); }
- else { return DtoB_WeekendToFriday(dinfo.absdate, dinfo.day_of_week); }
+static i8
+asfreq_MtoT(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoT(asfreq_MtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
}
-static npy_int64 asfreq_MtoH(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoH(asfreq_MtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
-static npy_int64 asfreq_MtoT(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoT(asfreq_MtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
-static npy_int64 asfreq_MtoS(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoS(asfreq_MtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
+static i8
+asfreq_MtoS(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoS(asfreq_MtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
//************ FROM QUARTERLY ***************
-static void QtoD_ym(npy_int64 ordinal, int *y, int *m, asfreq_info *af_info) {
+static void
+QtoD_ym(i8 ordinal, i8 *y, i8 *m, asfreq_info *af_info)
+{
*y = floordiv(ordinal, 4) + BASE_YEAR;
*m = mod_compat(ordinal, 4) * 3 + 1;
if (af_info->from_q_year_end != 12) {
*m += af_info->from_q_year_end;
- if (*m > 12) { *m -= 12; }
- else { *y -= 1; }
+
+ if (*m > 12)
+ *m -= 12;
+ else
+ --*y;
}
}
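`QtoD_ym` maps a quarterly ordinal to the year and first month of that quarter, shifting when the quarterly frequency's fiscal year ends in a month other than December. A minimal standalone sketch, assuming `floordiv` rounds toward negative infinity and `mod_compat` returns a non-negative remainder (matching how the helpers are used in this file), with the year-end month passed in place of `af_info`:

```c
#include <assert.h>
#include <stdint.h>

typedef int64_t i8;

/* floor division, rounding toward negative infinity (assumed to match
 * the floordiv helper in the source) */
static i8 floordiv(i8 x, i8 d)
{
    i8 q = x / d;
    return (x % d != 0 && ((x < 0) != (d < 0))) ? q - 1 : q;
}

/* non-negative remainder paired with floordiv */
static i8 mod_compat(i8 x, i8 d) { return x - floordiv(x, d) * d; }

#define BASE_YEAR 1970

/* quarterly ordinal -> (year, first month of quarter), as in QtoD_ym;
 * q_year_end is the month the fiscal year ends in (12 = calendar year) */
static void QtoD_ym(i8 ordinal, i8 *y, i8 *m, i8 q_year_end)
{
    *y = floordiv(ordinal, 4) + BASE_YEAR;
    *m = mod_compat(ordinal, 4) * 3 + 1;

    if (q_year_end != 12) {
        *m += q_year_end;

        if (*m > 12)
            *m -= 12;
        else
            --*y;
    }
}
```

Ordinal 0 is the first quarter of 1970; with a November fiscal year end the same ordinal lands in December 1969.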
-static npy_int64 asfreq_QtoD(npy_int64 ordinal, char relation, asfreq_info *af_info) {
-
- npy_int64 absdate;
- int y, m;
+static i8
+asfreq_QtoD(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ i8 absdate, y, m;
- if (relation == 'S') {
+ if (!strcmp(relation, "S")) {
QtoD_ym(ordinal, &y, &m, af_info);
- // printf("ordinal: %d, year: %d, month: %d\n", (int) ordinal, y, m);
- if ((absdate = absdate_from_ymd(y, m, 1)) == INT_ERR_CODE) return INT_ERR_CODE;
+
+ if ((absdate = absdate_from_ymd(y, m, 1)) == INT_ERR_CODE)
+ return INT_ERR_CODE;
+
return absdate - ORD_OFFSET;
} else {
- QtoD_ym(ordinal+1, &y, &m, af_info);
- /* printf("ordinal: %d, year: %d, month: %d\n", (int) ordinal, y, m); */
- if ((absdate = absdate_from_ymd(y, m, 1)) == INT_ERR_CODE) return INT_ERR_CODE;
+ QtoD_ym(ordinal + 1, &y, &m, af_info);
+
+ if ((absdate = absdate_from_ymd(y, m, 1)) == INT_ERR_CODE)
+ return INT_ERR_CODE;
+
return absdate - 1 - ORD_OFFSET;
}
}
-static npy_int64 asfreq_QtoQ(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoQ(asfreq_QtoD(ordinal, relation, af_info), relation, af_info); }
+static i8
+asfreq_QtoQ(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoQ(asfreq_QtoD(ordinal, relation, af_info), relation,
+ af_info);
+}
+
+static i8
+asfreq_QtoA(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoA(asfreq_QtoD(ordinal, relation, af_info), relation,
+ af_info);
+}
-static npy_int64 asfreq_QtoA(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- return asfreq_DtoA(asfreq_QtoD(ordinal, relation, af_info), relation, af_info); }
+static i8
+asfreq_QtoM(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoM(asfreq_QtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
-static npy_int64 asfreq_QtoM(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- return asfreq_DtoM(asfreq_QtoD(ordinal, relation, af_info), relation, &NULL_AF_INFO); }
+static i8
+asfreq_QtoW(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoW(asfreq_QtoD(ordinal, relation, af_info), relation,
+ af_info);
+}
-static npy_int64 asfreq_QtoW(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoW(asfreq_QtoD(ordinal, relation, af_info), relation, af_info); }
+static i8
+asfreq_QtoB(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ date_info dinfo;
+ i8 qtod = asfreq_QtoD(ordinal, relation, af_info) + ORD_OFFSET;
-static npy_int64 asfreq_QtoB(npy_int64 ordinal, char relation, asfreq_info *af_info) {
+ if (dInfoCalc_SetFromAbsDate(&dinfo, qtod, GREGORIAN))
+ return INT_ERR_CODE;
- struct date_info dinfo;
- if (dInfoCalc_SetFromAbsDate(&dinfo,
- asfreq_QtoD(ordinal, relation, af_info) + ORD_OFFSET,
- GREGORIAN_CALENDAR)) return INT_ERR_CODE;
+    i8 (*f)(i8 absdate, i8 day_of_week) = NULL;
+ f = !strcmp(relation, "S") ? DtoB_WeekendToMonday : DtoB_WeekendToFriday;
- if (relation == 'S') { return DtoB_WeekendToMonday(dinfo.absdate, dinfo.day_of_week); }
- else { return DtoB_WeekendToFriday(dinfo.absdate, dinfo.day_of_week); }
+ return f(dinfo.absdate, dinfo.day_of_week);
}
-static npy_int64 asfreq_QtoH(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoH(asfreq_QtoD(ordinal, relation, af_info), relation, &NULL_AF_INFO); }
-static npy_int64 asfreq_QtoT(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoT(asfreq_QtoD(ordinal, relation, af_info), relation, &NULL_AF_INFO); }
-static npy_int64 asfreq_QtoS(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoS(asfreq_QtoD(ordinal, relation, af_info), relation, &NULL_AF_INFO); }
+static i8
+asfreq_QtoH(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoH(asfreq_QtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_QtoT(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoT(asfreq_QtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_QtoS(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoS(asfreq_QtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
//************ FROM ANNUAL ***************
-static npy_int64 asfreq_AtoD(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- npy_int64 absdate, final_adj;
- int year;
- int month = (af_info->from_a_year_end) % 12;
+static i8
+asfreq_AtoD(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ i8 absdate, final_adj, year;
+ i8 month = af_info->from_a_year_end % 12, ord = ordinal;
// start from 1970
- ordinal += BASE_YEAR;
+ ord += BASE_YEAR;
- if (month == 0) { month = 1; }
- else { month += 1; }
+ if (month == 0)
+ month = 1;
+ else
+ ++month;
- if (relation == 'S') {
- if (af_info->from_a_year_end == 12) {year = ordinal;}
- else {year = ordinal - 1;}
+ if (!strcmp(relation, "S")) {
+ year = af_info->from_a_year_end == 12 ? ord : ord - 1;
final_adj = 0;
} else {
- if (af_info->from_a_year_end == 12) {year = ordinal+1;}
- else {year = ordinal;}
+ if (af_info->from_a_year_end == 12)
+ year = ord + 1;
+ else
+ year = ord;
+
final_adj = -1;
}
+
absdate = absdate_from_ymd(year, month, 1);
- if (absdate == INT_ERR_CODE) {
- return INT_ERR_CODE;
- }
+
+ if (absdate == INT_ERR_CODE)
+ return INT_ERR_CODE;
+
return absdate + final_adj - ORD_OFFSET;
}
-static npy_int64 asfreq_AtoA(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoA(asfreq_AtoD(ordinal, relation, af_info), relation, af_info); }
+static i8
+asfreq_AtoA(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoA(asfreq_AtoD(ordinal, relation, af_info), relation,
+ af_info);
+}
+
+static i8
+asfreq_AtoQ(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoQ(asfreq_AtoD(ordinal, relation, af_info), relation,
+ af_info);
+}
+
+static i8
+asfreq_AtoM(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoM(asfreq_AtoD(ordinal, relation, af_info), relation,
+ af_info);
+}
-static npy_int64 asfreq_AtoQ(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoQ(asfreq_AtoD(ordinal, relation, af_info), relation, af_info); }
+static i8
+asfreq_AtoW(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoW(asfreq_AtoD(ordinal, relation, af_info), relation,
+ af_info);
+}
-static npy_int64 asfreq_AtoM(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoM(asfreq_AtoD(ordinal, relation, af_info), relation, af_info); }
+static i8
+asfreq_AtoB(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ date_info dinfo;
-static npy_int64 asfreq_AtoW(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoW(asfreq_AtoD(ordinal, relation, af_info), relation, af_info); }
+    i8 atod = asfreq_AtoD(ordinal, relation, af_info) + ORD_OFFSET;
-static npy_int64 asfreq_AtoB(npy_int64 ordinal, char relation, asfreq_info *af_info) {
+    if (dInfoCalc_SetFromAbsDate(&dinfo, atod, GREGORIAN))
+        return INT_ERR_CODE;
- struct date_info dinfo;
- if (dInfoCalc_SetFromAbsDate(&dinfo,
- asfreq_AtoD(ordinal, relation, af_info) + ORD_OFFSET,
- GREGORIAN_CALENDAR)) return INT_ERR_CODE;
+    i8 (*f)(i8 absdate, i8 day_of_week) = NULL;
+ f = !strcmp(relation, "S") ? DtoB_WeekendToMonday : DtoB_WeekendToFriday;
- if (relation == 'S') { return DtoB_WeekendToMonday(dinfo.absdate, dinfo.day_of_week); }
- else { return DtoB_WeekendToFriday(dinfo.absdate, dinfo.day_of_week); }
+ return f(dinfo.absdate, dinfo.day_of_week);
}
-static npy_int64 asfreq_AtoH(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoH(asfreq_AtoD(ordinal, relation, af_info), relation, &NULL_AF_INFO); }
-static npy_int64 asfreq_AtoT(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoT(asfreq_AtoD(ordinal, relation, af_info), relation, &NULL_AF_INFO); }
-static npy_int64 asfreq_AtoS(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoS(asfreq_AtoD(ordinal, relation, af_info), relation, &NULL_AF_INFO); }
+static i8
+asfreq_AtoH(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoH(asfreq_AtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
-static npy_int64 nofunc(npy_int64 ordinal, char relation, asfreq_info *af_info) { return INT_ERR_CODE; }
-static npy_int64 no_op(npy_int64 ordinal, char relation, asfreq_info *af_info) { return ordinal; }
+static i8
+asfreq_AtoT(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoT(asfreq_AtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_AtoS(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoS(asfreq_AtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
+
+
+static i8
+nofunc(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return INT_ERR_CODE;
+}
+
+static i8
+no_op(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return ordinal;
+}
// end of frequency specific conversion routines
-static int get_freq_group(int freq) { return (freq/1000)*1000; }
+static i8
+get_freq_group(i8 freq)
+{
+ return (freq / 1000) * 1000;
+}
-static int calc_a_year_end(int freq, int group) {
- int result = (freq - group) % 12;
- if (result == 0) {return 12;}
- else {return result;}
+static i8
+calc_a_year_end(i8 freq, i8 group)
+{
+ i8 result = (freq - group) % 12;
+ return !result ? 12 : result;
}
-static int calc_week_end(int freq, int group) {
+static i8
+calc_week_end(i8 freq, i8 group)
+{
return freq - group;
}
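The three helpers above rely on the frequency encoding: a frequency code is its group times 1000 plus a small offset, so `get_freq_group` strips the offset, `calc_a_year_end` recovers an annual/quarterly year-end month (offset 0 meaning December), and `calc_week_end` recovers the week-end day. A sketch with hypothetical group values (the real `FR_*` constants come from the pandas headers; 1000 for annual and 4000 for weekly are assumptions here):

```c
#include <assert.h>
#include <stdint.h>

typedef int64_t i8;

/* strip the sub-frequency offset: e.g. 4006 -> 4000 */
static i8 get_freq_group(i8 freq) { return (freq / 1000) * 1000; }

/* annual/quarterly year-end month from the offset; 0 means December */
static i8 calc_a_year_end(i8 freq, i8 group)
{
    i8 result = (freq - group) % 12;
    return !result ? 12 : result;
}

/* weekly: the offset is the week-end day directly */
static i8 calc_week_end(i8 freq, i8 group) { return freq - group; }
```

This is why `get_asfreq_info` only needs the group switch: all per-variant detail lives in the offset.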
-void get_asfreq_info(int fromFreq, int toFreq, asfreq_info *af_info) {
- int fromGroup = get_freq_group(fromFreq);
- int toGroup = get_freq_group(toFreq);
+void
+get_asfreq_info(i8 fromFreq, i8 toFreq, asfreq_info *af_info)
+{
+ i8 fromGroup = get_freq_group(fromFreq);
+ i8 toGroup = get_freq_group(toFreq);
switch(fromGroup)
{
- case FR_WK: {
- af_info->from_week_end = calc_week_end(fromFreq, fromGroup);
- } break;
- case FR_ANN: {
- af_info->from_a_year_end = calc_a_year_end(fromFreq, fromGroup);
- } break;
- case FR_QTR: {
- af_info->from_q_year_end = calc_a_year_end(fromFreq, fromGroup);
- } break;
+ case FR_WK:
+ af_info->from_week_end = calc_week_end(fromFreq, fromGroup);
+ break;
+ case FR_ANN:
+ af_info->from_a_year_end = calc_a_year_end(fromFreq, fromGroup);
+ break;
+ case FR_QTR:
+ af_info->from_q_year_end = calc_a_year_end(fromFreq, fromGroup);
+ break;
}
switch(toGroup)
{
- case FR_WK: {
- af_info->to_week_end = calc_week_end(toFreq, toGroup);
- } break;
- case FR_ANN: {
- af_info->to_a_year_end = calc_a_year_end(toFreq, toGroup);
- } break;
- case FR_QTR: {
- af_info->to_q_year_end = calc_a_year_end(toFreq, toGroup);
- } break;
+ case FR_WK:
+ af_info->to_week_end = calc_week_end(toFreq, toGroup);
+ break;
+ case FR_ANN:
+ af_info->to_a_year_end = calc_a_year_end(toFreq, toGroup);
+ break;
+ case FR_QTR:
+ af_info->to_q_year_end = calc_a_year_end(toFreq, toGroup);
+ break;
}
}
-freq_conv_func get_asfreq_func(int fromFreq, int toFreq)
+static i8
+asfreq_UtoS(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return ordinal / US_PER_SECOND;
+}
+
+static i8
+asfreq_UtoD(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_StoD(asfreq_UtoS(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_UtoA(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoA(asfreq_UtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_UtoQ(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoQ(asfreq_UtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_UtoM(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoM(asfreq_UtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_UtoW(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoW(asfreq_UtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_UtoB(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoB(asfreq_UtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_UtoH(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoH(asfreq_UtoD(ordinal, relation, &NULL_AF_INFO),
+ relation, &NULL_AF_INFO);
+}
+
+static i8
+asfreq_UtoT(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoT(asfreq_UtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
+
+
+static i8
+asfreq_StoU(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_StoHIGHFREQ(ordinal, relation, US_PER_SECOND);
+}
+
+static i8
+asfreq_AtoU(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoU(asfreq_AtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_QtoU(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoU(asfreq_QtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_MtoU(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoU(asfreq_MtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_WtoU(i8 ordinal, const char* relation, asfreq_info *af_info)
{
- int fromGroup = get_freq_group(fromFreq);
- int toGroup = get_freq_group(toFreq);
+ return asfreq_DtoU(asfreq_WtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_BtoU(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoU(asfreq_BtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_TtoU(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_StoU(asfreq_TtoS(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_HtoU(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_TtoU(asfreq_HtoT(ordinal, relation, af_info), relation,
+ af_info);
+}
+
+freq_conv_func
+get_asfreq_func(i8 fromFreq, i8 toFreq)
+{
+ i8 fromGroup = get_freq_group(fromFreq);
+ i8 toGroup = get_freq_group(toFreq);
if (fromGroup == FR_UND) { fromGroup = FR_DAY; }
switch(fromGroup)
{
- case FR_ANN:
- switch(toGroup)
- {
- case FR_ANN: return &asfreq_AtoA;
- case FR_QTR: return &asfreq_AtoQ;
- case FR_MTH: return &asfreq_AtoM;
- case FR_WK: return &asfreq_AtoW;
- case FR_BUS: return &asfreq_AtoB;
- case FR_DAY: return &asfreq_AtoD;
- case FR_HR: return &asfreq_AtoH;
- case FR_MIN: return &asfreq_AtoT;
- case FR_SEC: return &asfreq_AtoS;
- default: return &nofunc;
- }
-
- case FR_QTR:
- switch(toGroup)
- {
- case FR_ANN: return &asfreq_QtoA;
- case FR_QTR: return &asfreq_QtoQ;
- case FR_MTH: return &asfreq_QtoM;
- case FR_WK: return &asfreq_QtoW;
- case FR_BUS: return &asfreq_QtoB;
- case FR_DAY: return &asfreq_QtoD;
- case FR_HR: return &asfreq_QtoH;
- case FR_MIN: return &asfreq_QtoT;
- case FR_SEC: return &asfreq_QtoS;
- default: return &nofunc;
- }
-
- case FR_MTH:
- switch(toGroup)
- {
- case FR_ANN: return &asfreq_MtoA;
- case FR_QTR: return &asfreq_MtoQ;
- case FR_MTH: return &no_op;
- case FR_WK: return &asfreq_MtoW;
- case FR_BUS: return &asfreq_MtoB;
- case FR_DAY: return &asfreq_MtoD;
- case FR_HR: return &asfreq_MtoH;
- case FR_MIN: return &asfreq_MtoT;
- case FR_SEC: return &asfreq_MtoS;
- default: return &nofunc;
- }
-
- case FR_WK:
- switch(toGroup)
- {
- case FR_ANN: return &asfreq_WtoA;
- case FR_QTR: return &asfreq_WtoQ;
- case FR_MTH: return &asfreq_WtoM;
- case FR_WK: return &asfreq_WtoW;
- case FR_BUS: return &asfreq_WtoB;
- case FR_DAY: return &asfreq_WtoD;
- case FR_HR: return &asfreq_WtoH;
- case FR_MIN: return &asfreq_WtoT;
- case FR_SEC: return &asfreq_WtoS;
- default: return &nofunc;
- }
-
- case FR_BUS:
- switch(toGroup)
- {
- case FR_ANN: return &asfreq_BtoA;
- case FR_QTR: return &asfreq_BtoQ;
- case FR_MTH: return &asfreq_BtoM;
- case FR_WK: return &asfreq_BtoW;
- case FR_DAY: return &asfreq_BtoD;
- case FR_BUS: return &no_op;
- case FR_HR: return &asfreq_BtoH;
- case FR_MIN: return &asfreq_BtoT;
- case FR_SEC: return &asfreq_BtoS;
- default: return &nofunc;
- }
-
- case FR_DAY:
- switch(toGroup)
- {
- case FR_ANN: return &asfreq_DtoA;
- case FR_QTR: return &asfreq_DtoQ;
- case FR_MTH: return &asfreq_DtoM;
- case FR_WK: return &asfreq_DtoW;
- case FR_BUS: return &asfreq_DtoB;
- case FR_DAY: return &asfreq_DtoD;
- case FR_HR: return &asfreq_DtoH;
- case FR_MIN: return &asfreq_DtoT;
- case FR_SEC: return &asfreq_DtoS;
- default: return &nofunc;
- }
-
- case FR_HR:
- switch(toGroup)
- {
- case FR_ANN: return &asfreq_HtoA;
- case FR_QTR: return &asfreq_HtoQ;
- case FR_MTH: return &asfreq_HtoM;
- case FR_WK: return &asfreq_HtoW;
- case FR_BUS: return &asfreq_HtoB;
- case FR_DAY: return &asfreq_HtoD;
- case FR_HR: return &no_op;
- case FR_MIN: return &asfreq_HtoT;
- case FR_SEC: return &asfreq_HtoS;
- default: return &nofunc;
- }
-
- case FR_MIN:
- switch(toGroup)
- {
- case FR_ANN: return &asfreq_TtoA;
- case FR_QTR: return &asfreq_TtoQ;
- case FR_MTH: return &asfreq_TtoM;
- case FR_WK: return &asfreq_TtoW;
- case FR_BUS: return &asfreq_TtoB;
- case FR_DAY: return &asfreq_TtoD;
- case FR_HR: return &asfreq_TtoH;
- case FR_MIN: return &no_op;
- case FR_SEC: return &asfreq_TtoS;
- default: return &nofunc;
- }
-
- case FR_SEC:
- switch(toGroup)
- {
- case FR_ANN: return &asfreq_StoA;
- case FR_QTR: return &asfreq_StoQ;
- case FR_MTH: return &asfreq_StoM;
- case FR_WK: return &asfreq_StoW;
- case FR_BUS: return &asfreq_StoB;
- case FR_DAY: return &asfreq_StoD;
- case FR_HR: return &asfreq_StoH;
- case FR_MIN: return &asfreq_StoT;
- case FR_SEC: return &no_op;
- default: return &nofunc;
- }
+ case FR_ANN:
+ switch(toGroup)
+ {
+ case FR_ANN: return &asfreq_AtoA;
+ case FR_QTR: return &asfreq_AtoQ;
+ case FR_MTH: return &asfreq_AtoM;
+ case FR_WK: return &asfreq_AtoW;
+ case FR_BUS: return &asfreq_AtoB;
+ case FR_DAY: return &asfreq_AtoD;
+ case FR_HR: return &asfreq_AtoH;
+ case FR_MIN: return &asfreq_AtoT;
+ case FR_SEC: return &asfreq_AtoS;
+ case FR_USEC: return &asfreq_AtoU;
+ default: return &nofunc;
+ }
+
+ case FR_QTR:
+ switch(toGroup)
+ {
+ case FR_ANN: return &asfreq_QtoA;
+ case FR_QTR: return &asfreq_QtoQ;
+ case FR_MTH: return &asfreq_QtoM;
+ case FR_WK: return &asfreq_QtoW;
+ case FR_BUS: return &asfreq_QtoB;
+ case FR_DAY: return &asfreq_QtoD;
+ case FR_HR: return &asfreq_QtoH;
+ case FR_MIN: return &asfreq_QtoT;
+ case FR_SEC: return &asfreq_QtoS;
+ case FR_USEC: return &asfreq_QtoU;
default: return &nofunc;
+ }
+
+ case FR_MTH:
+ switch(toGroup)
+ {
+ case FR_ANN: return &asfreq_MtoA;
+ case FR_QTR: return &asfreq_MtoQ;
+ case FR_MTH: return &no_op;
+ case FR_WK: return &asfreq_MtoW;
+ case FR_BUS: return &asfreq_MtoB;
+ case FR_DAY: return &asfreq_MtoD;
+ case FR_HR: return &asfreq_MtoH;
+ case FR_MIN: return &asfreq_MtoT;
+ case FR_SEC: return &asfreq_MtoS;
+ case FR_USEC: return &asfreq_MtoU;
+ default: return &nofunc;
+ }
+
+ case FR_WK:
+ switch(toGroup)
+ {
+ case FR_ANN: return &asfreq_WtoA;
+ case FR_QTR: return &asfreq_WtoQ;
+ case FR_MTH: return &asfreq_WtoM;
+ case FR_WK: return &asfreq_WtoW;
+ case FR_BUS: return &asfreq_WtoB;
+ case FR_DAY: return &asfreq_WtoD;
+ case FR_HR: return &asfreq_WtoH;
+ case FR_MIN: return &asfreq_WtoT;
+ case FR_SEC: return &asfreq_WtoS;
+ case FR_USEC: return &asfreq_WtoU;
+ default: return &nofunc;
+ }
+
+ case FR_BUS:
+ switch(toGroup)
+ {
+ case FR_ANN: return &asfreq_BtoA;
+ case FR_QTR: return &asfreq_BtoQ;
+ case FR_MTH: return &asfreq_BtoM;
+ case FR_WK: return &asfreq_BtoW;
+ case FR_DAY: return &asfreq_BtoD;
+ case FR_BUS: return &no_op;
+ case FR_HR: return &asfreq_BtoH;
+ case FR_MIN: return &asfreq_BtoT;
+ case FR_SEC: return &asfreq_BtoS;
+ case FR_USEC: return &asfreq_BtoU;
+ default: return &nofunc;
+ }
+
+ case FR_DAY:
+ switch(toGroup)
+ {
+ case FR_ANN: return &asfreq_DtoA;
+ case FR_QTR: return &asfreq_DtoQ;
+ case FR_MTH: return &asfreq_DtoM;
+ case FR_WK: return &asfreq_DtoW;
+ case FR_BUS: return &asfreq_DtoB;
+ case FR_DAY: return &asfreq_DtoD;
+ case FR_HR: return &asfreq_DtoH;
+ case FR_MIN: return &asfreq_DtoT;
+ case FR_SEC: return &asfreq_DtoS;
+ case FR_USEC: return &asfreq_DtoU;
+ default: return &nofunc;
+ }
+
+ case FR_HR:
+ switch(toGroup)
+ {
+ case FR_ANN: return &asfreq_HtoA;
+ case FR_QTR: return &asfreq_HtoQ;
+ case FR_MTH: return &asfreq_HtoM;
+ case FR_WK: return &asfreq_HtoW;
+ case FR_BUS: return &asfreq_HtoB;
+ case FR_DAY: return &asfreq_HtoD;
+ case FR_HR: return &no_op;
+ case FR_MIN: return &asfreq_HtoT;
+ case FR_SEC: return &asfreq_HtoS;
+ case FR_USEC: return &asfreq_HtoU;
+ default: return &nofunc;
+ }
+
+ case FR_MIN:
+ switch(toGroup)
+ {
+ case FR_ANN: return &asfreq_TtoA;
+ case FR_QTR: return &asfreq_TtoQ;
+ case FR_MTH: return &asfreq_TtoM;
+ case FR_WK: return &asfreq_TtoW;
+ case FR_BUS: return &asfreq_TtoB;
+ case FR_DAY: return &asfreq_TtoD;
+ case FR_HR: return &asfreq_TtoH;
+ case FR_MIN: return &no_op;
+ case FR_SEC: return &asfreq_TtoS;
+ case FR_USEC: return &asfreq_TtoU;
+ default: return &nofunc;
+ }
+
+ case FR_SEC:
+ switch(toGroup)
+ {
+ case FR_ANN: return &asfreq_StoA;
+ case FR_QTR: return &asfreq_StoQ;
+ case FR_MTH: return &asfreq_StoM;
+ case FR_WK: return &asfreq_StoW;
+ case FR_BUS: return &asfreq_StoB;
+ case FR_DAY: return &asfreq_StoD;
+ case FR_HR: return &asfreq_StoH;
+ case FR_MIN: return &asfreq_StoT;
+ case FR_SEC: return &no_op;
+ case FR_USEC: return &asfreq_StoU;
+ default: return &nofunc;
+ }
+
+ case FR_USEC:
+ switch(toGroup)
+ {
+ case FR_ANN: return &asfreq_UtoA;
+ case FR_QTR: return &asfreq_UtoQ;
+ case FR_MTH: return &asfreq_UtoM;
+ case FR_WK: return &asfreq_UtoW;
+ case FR_BUS: return &asfreq_UtoB;
+ case FR_DAY: return &asfreq_UtoD;
+ case FR_HR: return &asfreq_UtoH;
+ case FR_MIN: return &asfreq_UtoT;
+ case FR_SEC: return &asfreq_UtoS;
+ case FR_USEC: return &no_op;
+ default: return &nofunc;
+ }
+
+ default: return &nofunc;
}
}
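The dispatch table above encodes one design choice: every cross-group conversion is composed through the daily frequency (e.g. `asfreq_WtoM` is `asfreq_WtoD` followed by `asfreq_DtoM`), so only to/from-daily conversions carry real logic. A toy sketch of that composition pattern, with made-up conversion rates standing in for the real calendar math:

```c
#include <assert.h>
#include <stdint.h>

typedef int64_t i8;
typedef i8 (*conv_func)(i8);

/* toy stand-ins for the "to daily" / "from daily" steps; the real
 * functions do calendar arithmetic, not fixed ratios */
static i8 weekly_to_daily(i8 ord)  { return ord * 7; }
static i8 daily_to_monthly(i8 ord) { return ord / 30; }

/* cross-group conversion composed through daily, the same shape as the
 * asfreq_WtoM wrapper returned by get_asfreq_func */
static i8 weekly_to_monthly(i8 ord)
{
    return daily_to_monthly(weekly_to_daily(ord));
}
```

A lookup like `get_asfreq_func` then only has to return the right composed wrapper (here `weekly_to_monthly` would be the `FR_WK`/`FR_MTH` table entry).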
-double get_abs_time(int freq, npy_int64 daily_ord, npy_int64 ordinal) {
+static i8
+get_abs_time(i8 freq, i8 daily_ord, i8 ordinal)
+{
+    i8 start_ord, per_day, unit;
- npy_int64 start_ord, per_day, unit;
- switch(freq)
+ switch (freq)
{
- case FR_HR:
- per_day = 24;
- unit = 60 * 60;
- break;
- case FR_MIN:
- per_day = 24*60;
- unit = 60;
- break;
- case FR_SEC:
- per_day = 24*60*60;
- unit = 1;
- break;
- default:
- return 0; // 24*60*60 - 1;
+ case FR_HR:
+ per_day = 24L;
+ unit = US_PER_HOUR;
+ break;
+
+ case FR_MIN:
+ per_day = 24L * 60L;
+ unit = US_PER_MINUTE;
+ break;
+
+ case FR_SEC:
+ per_day = 24L * 60L * 60L;
+ unit = US_PER_SECOND;
+ break;
+
+ case FR_USEC:
+ per_day = US_PER_DAY;
+ unit = 1L;
+ break;
+
+ default:
+ return 0L;
}
- start_ord = asfreq_DtoHIGHFREQ(daily_ord, 'S', per_day);
- /* printf("start_ord: %d\n", start_ord); */
- return (double) ( unit * (ordinal - start_ord));
- /* if (ordinal >= 0) { */
- /* } */
- /* else { */
- /* return (double) (unit * mod_compat(ordinal - start_ord, per_day)); */
- /* } */
+ start_ord = asfreq_DtoHIGHFREQ(daily_ord, "S", per_day);
+ return unit * (ordinal - start_ord);
}
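With everything held in microseconds, the time-of-day decomposition done by `dInfoCalc_SetFromAbsTime` is successive division by the `US_PER_*` units. A standalone sketch, assuming the constants equal their literal values; note the source computes second and microsecond by subtraction, which is equivalent to the modulus form below for non-negative `abstime`:

```c
#include <assert.h>
#include <stdint.h>

typedef int64_t i8;

#define US_PER_SECOND 1000000LL
#define US_PER_MINUTE (60LL * US_PER_SECOND)
#define US_PER_HOUR   (60LL * US_PER_MINUTE)

typedef struct { i8 hour, minute, second, microsecond; } hms;

/* split microseconds-since-midnight into fields, as
 * dInfoCalc_SetFromAbsTime does */
static hms split_abstime(i8 abstime)
{
    hms t;
    t.hour = abstime / US_PER_HOUR;
    t.minute = (abstime % US_PER_HOUR) / US_PER_MINUTE;
    t.second = (abstime % US_PER_MINUTE) / US_PER_SECOND;
    t.microsecond = abstime % US_PER_SECOND;
    return t;
}
```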
/* Sets the time part of the DateTime object. */
-static
-int dInfoCalc_SetFromAbsTime(struct date_info *dinfo,
- double abstime)
+static i8
+dInfoCalc_SetFromAbsTime(date_info *dinfo, i8 abstime)
{
- int inttime;
- int hour,minute;
- double second;
-
- inttime = (int)abstime;
- hour = inttime / 3600;
- minute = (inttime % 3600) / 60;
- second = abstime - (double)(hour*3600 + minute*60);
+ i8 hour = abstime / US_PER_HOUR;
+ i8 minute = (abstime % US_PER_HOUR) / US_PER_MINUTE;
+ i8 second = (abstime - (hour * US_PER_HOUR + minute * US_PER_MINUTE)) /
+ US_PER_SECOND;
+ i8 microsecond = abstime - (hour * US_PER_HOUR + minute * US_PER_MINUTE +
+ second * US_PER_SECOND);
dinfo->hour = hour;
dinfo->minute = minute;
dinfo->second = second;
+ dinfo->microsecond = microsecond;
dinfo->abstime = abstime;
@@ -971,29 +1502,30 @@ int dInfoCalc_SetFromAbsTime(struct date_info *dinfo,
}
/* Set the instance's value using the given date and time. calendar
- may be set to the flags: GREGORIAN_CALENDAR, JULIAN_CALENDAR to
+ may be set to the flags: GREGORIAN, JULIAN to
indicate the calendar to be used. */
-static
-int dInfoCalc_SetFromAbsDateTime(struct date_info *dinfo,
- npy_int64 absdate,
- double abstime,
- int calendar)
+static i8
+dInfoCalc_SetFromAbsDateTime(date_info *dinfo, i8 absdate, i8 abstime,
+ enum CalendarType calendar)
{
/* Bounds check */
- Py_AssertWithArg(abstime >= 0.0 && abstime <= SECONDS_PER_DAY,
- PyExc_ValueError,
- "abstime out of range (0.0 - 86400.0): %f",
- abstime);
+ Py_AssertWithArg(abstime >= 0 && abstime <= US_PER_DAY,
+ PyExc_ValueError,
+ "abstime out of range (0, 86400000000): %li",
+ abstime);
/* Calculate the date */
- if (dInfoCalc_SetFromAbsDate(dinfo, absdate, calendar)) goto onError;
+ if (dInfoCalc_SetFromAbsDate(dinfo, absdate, calendar))
+ goto onError;
/* Calculate the time */
- if (dInfoCalc_SetFromAbsTime(dinfo, abstime)) goto onError;
+ if (dInfoCalc_SetFromAbsTime(dinfo, abstime))
+ goto onError;
return 0;
- onError:
+
+onError:
return INT_ERR_CODE;
}
@@ -1001,48 +1533,59 @@ int dInfoCalc_SetFromAbsDateTime(struct date_info *dinfo,
* New pandas API-helper code, to expose to cython
* ------------------------------------------------------------------*/
-npy_int64 asfreq(npy_int64 period_ordinal, int freq1, int freq2, char relation)
+i8
+asfreq(i8 period_ordinal, i8 freq1, i8 freq2, const char* relation)
{
- npy_int64 val;
- freq_conv_func func;
- asfreq_info finfo;
+ freq_conv_func func = get_asfreq_func(freq1, freq2);
- func = get_asfreq_func(freq1, freq2);
+ asfreq_info finfo;
get_asfreq_info(freq1, freq2, &finfo);
- val = (*func)(period_ordinal, relation, &finfo);
+ i8 val = func(period_ordinal, relation, &finfo);
if (val == INT_ERR_CODE) {
- // Py_Error(PyExc_ValueError, "Unable to convert to desired frequency.");
goto onError;
}
+
return val;
+
onError:
return INT_ERR_CODE;
}
/* generate an ordinal in period space */
-npy_int64 get_period_ordinal(int year, int month, int day,
- int hour, int minute, int second,
- int freq)
+i8
+get_period_ordinal(i8 year, i8 month, i8 day, i8 hour, i8 minute, i8 second,
+ i8 microsecond, i8 freq)
{
- npy_int64 absdays, delta;
- npy_int64 weeks, days;
- npy_int64 ordinal, day_adj;
- int freq_group, fmonth, mdiff;
- freq_group = get_freq_group(freq);
+ i8 absdays, delta, weeks, days, ordinal, day_adj, fmonth, mdiff;
+ i8 freq_group = get_freq_group(freq);
+
+ if (freq == FR_USEC) {
+ absdays = absdate_from_ymd(year, month, day);
+ delta = absdays - ORD_OFFSET;
+ return delta * US_PER_DAY + hour * US_PER_HOUR +
+ minute * US_PER_MINUTE + second * US_PER_SECOND + microsecond;
+ }
+
if (freq == FR_SEC) {
absdays = absdate_from_ymd(year, month, day);
delta = (absdays - ORD_OFFSET);
- return (npy_int64)(delta*86400 + hour*3600 + minute*60 + second);
+ return delta*86400 + hour*3600 + minute*60 + second;
}
if (freq == FR_MIN) {
absdays = absdate_from_ymd(year, month, day);
delta = (absdays - ORD_OFFSET);
- return (npy_int64)(delta*1440 + hour*60 + minute);
+ return delta*1440 + hour*60 + minute;
}
if (freq == FR_HR) {
@@ -1051,32 +1594,32 @@ npy_int64 get_period_ordinal(int year, int month, int day,
goto onError;
}
delta = (absdays - ORD_OFFSET);
- return (npy_int64)(delta*24 + hour);
+ return delta*24 + hour;
}
if (freq == FR_DAY)
{
- return (npy_int64) (absdate_from_ymd(year, month, day) - ORD_OFFSET);
+ return absdate_from_ymd(year, month, day) - ORD_OFFSET;
}
if (freq == FR_UND)
{
- return (npy_int64) (absdate_from_ymd(year, month, day) - ORD_OFFSET);
+ return absdate_from_ymd(year, month, day) - ORD_OFFSET;
}
if (freq == FR_BUS)
{
- if((days = absdate_from_ymd(year, month, day)) == INT_ERR_CODE)
+ if ((days = absdate_from_ymd(year, month, day)) == INT_ERR_CODE)
{
goto onError;
}
weeks = days / 7;
- return (npy_int64)(days - weeks * 2) - BDAY_OFFSET;
+ return (days - weeks * 2) - BDAY_OFFSET;
}
if (freq_group == FR_WK)
{
- if((ordinal = (npy_int64)absdate_from_ymd(year, month, day)) == INT_ERR_CODE)
+ if ((ordinal = absdate_from_ymd(year, month, day)) == INT_ERR_CODE)
{
goto onError;
}
@@ -1091,26 +1634,26 @@ npy_int64 get_period_ordinal(int year, int month, int day,
if (freq_group == FR_QTR)
{
- fmonth = freq - FR_QTR;
- if (fmonth == 0) fmonth = 12;
+ fmonth = freq - FR_QTR;
+ if (fmonth == 0) fmonth = 12;
- mdiff = month - fmonth;
- if (mdiff < 0) mdiff += 12;
- if (month >= fmonth) mdiff += 12;
+ mdiff = month - fmonth;
+ if (mdiff < 0) mdiff += 12;
+ if (month >= fmonth) mdiff += 12;
- return (year - BASE_YEAR) * 4 + (mdiff - 1) / 3;
+ return (year - BASE_YEAR) * 4 + (mdiff - 1) / 3;
}
if (freq_group == FR_ANN)
{
- fmonth = freq - FR_ANN;
- if (fmonth == 0) fmonth = 12;
- if (month <= fmonth) {
- return year - BASE_YEAR;
- }
- else {
- return year - BASE_YEAR + 1;
- }
+ fmonth = freq - FR_ANN;
+ if (fmonth == 0) fmonth = 12;
+ if (month <= fmonth) {
+ return year - BASE_YEAR;
+ }
+ else {
+ return year - BASE_YEAR + 1;
+ }
}
Py_Error(PyExc_RuntimeError, "Unable to generate frequency ordinal");
@@ -1120,40 +1663,46 @@ npy_int64 get_period_ordinal(int year, int month, int day,
}
/*
- Returns the proleptic Gregorian ordinal of the date, as an integer.
- This corresponds to the number of days since Jan., 1st, 1AD.
- When the instance has a frequency less than daily, the proleptic date
- is calculated for the last day of the period.
+  Returns the proleptic Gregorian ordinal of the date, as an integer.
+ This corresponds to the number of days since Jan., 1st, 1AD.
+ When the instance has a frequency less than daily, the proleptic date
+ is calculated for the last day of the period.
*/
-npy_int64 get_python_ordinal(npy_int64 period_ordinal, int freq)
+i8
+get_python_ordinal(i8 period_ordinal, i8 freq)
{
- asfreq_info af_info;
- npy_int64 (*toDaily)(npy_int64, char, asfreq_info*);
-
if (freq == FR_DAY)
return period_ordinal + ORD_OFFSET;
- toDaily = get_asfreq_func(freq, FR_DAY);
+ i8 (*toDaily)(i8, const char*, asfreq_info*) = get_asfreq_func(freq,
+ FR_DAY);
+ asfreq_info af_info;
+
get_asfreq_info(freq, FR_DAY, &af_info);
- return toDaily(period_ordinal, 'E', &af_info) + ORD_OFFSET;
+
+ return toDaily(period_ordinal, "E", &af_info) + ORD_OFFSET;
}
-char *str_replace(const char *s, const char *old, const char *new) {
- char *ret;
- int i, count = 0;
+char*
+str_replace(const char *s, const char *old, const char *new)
+{
+ char *ret = NULL;
+ i8 i, count = 0;
size_t newlen = strlen(new);
size_t oldlen = strlen(old);
for (i = 0; s[i] != '\0'; i++) {
if (strstr(&s[i], old) == &s[i]) {
- count++;
- i += oldlen - 1;
+ ++count;
+ i += oldlen - 1;
}
}
ret = PyArray_malloc(i + 1 + count * (newlen - oldlen));
- if (ret == NULL) {return (char *)PyErr_NoMemory();}
+ if (ret == NULL)
+ return (char*) PyErr_NoMemory();
+
i = 0;
while (*s) {
@@ -1165,6 +1714,7 @@ char *str_replace(const char *s, const char *old, const char *new) {
ret[i++] = *s++;
}
}
+
ret[i] = '\0';
return ret;
@@ -1173,84 +1723,86 @@ char *str_replace(const char *s, const char *old, const char *new) {
// function to generate a nice string representation of the period
// object, originally from DateObject_strftime
-char* c_strftime(struct date_info *tmp, char *fmt) {
+char*
+c_strftime(date_info *tmp, char *fmt)
+{
struct tm c_date;
- char* result;
- struct date_info dinfo = *tmp;
- int result_len = strlen(fmt) + 50;
+ date_info dinfo = *tmp;
- c_date.tm_sec = (int)dinfo.second;
+ size_t result_len = strlen(fmt) + 50L;
+
+ c_date.tm_sec = dinfo.second;
c_date.tm_min = dinfo.minute;
c_date.tm_hour = dinfo.hour;
c_date.tm_mday = dinfo.day;
- c_date.tm_mon = dinfo.month - 1;
- c_date.tm_year = dinfo.year - 1900;
- c_date.tm_wday = (dinfo.day_of_week + 1) % 7;
- c_date.tm_yday = dinfo.day_of_year - 1;
- c_date.tm_isdst = -1;
-
- result = malloc(result_len * sizeof(char));
-
+ c_date.tm_mon = dinfo.month - 1L;
+ c_date.tm_year = dinfo.year - 1900L;
+ c_date.tm_wday = (dinfo.day_of_week + 1L) % 7L;
+ c_date.tm_yday = dinfo.day_of_year - 1L;
+ c_date.tm_isdst = -1L;
+
+ // this must be freed by the caller (right now this is in cython)
+ char* result = malloc(result_len * sizeof(char));
strftime(result, result_len, fmt, &c_date);
-
return result;
}
-int get_yq(npy_int64 ordinal, int freq, int *quarter, int *year) {
+
+i8
+get_yq(i8 ordinal, i8 freq, i8 *quarter, i8 *year)
+{
asfreq_info af_info;
- int qtr_freq;
- npy_int64 daily_ord;
- npy_int64 (*toDaily)(npy_int64, char, asfreq_info*) = NULL;
+ i8 qtr_freq, daily_ord;
+ i8 (*toDaily)(i8, const char*, asfreq_info*) = get_asfreq_func(freq, FR_DAY);
- toDaily = get_asfreq_func(freq, FR_DAY);
get_asfreq_info(freq, FR_DAY, &af_info);
- daily_ord = toDaily(ordinal, 'E', &af_info);
+ daily_ord = toDaily(ordinal, "E", &af_info);
+
+ qtr_freq = get_freq_group(freq) == FR_QTR ? freq : FR_QTR;
- if (get_freq_group(freq) == FR_QTR) {
- qtr_freq = freq;
- } else { qtr_freq = FR_QTR; }
get_asfreq_info(FR_DAY, qtr_freq, &af_info);
- if(DtoQ_yq(daily_ord, &af_info, year, quarter) == INT_ERR_CODE)
+ if (DtoQ_yq(daily_ord, &af_info, year, quarter) == INT_ERR_CODE)
return -1;
return 0;
}
-
-
-
-static int _quarter_year(npy_int64 ordinal, int freq, int *year, int *quarter) {
+static i8
+_quarter_year(i8 ordinal, i8 freq, i8 *year, i8 *quarter)
+{
asfreq_info af_info;
- int qtr_freq;
-
- ordinal = get_python_ordinal(ordinal, freq) - ORD_OFFSET;
- if (get_freq_group(freq) == FR_QTR)
- qtr_freq = freq;
- else
- qtr_freq = FR_QTR;
+ i8 ord = get_python_ordinal(ordinal, freq) - ORD_OFFSET;
+ i8 qtr_freq = get_freq_group(freq) == FR_QTR ? freq : FR_QTR;
get_asfreq_info(FR_DAY, qtr_freq, &af_info);
- if (DtoQ_yq(ordinal, &af_info, year, quarter) == INT_ERR_CODE)
+ if (DtoQ_yq(ord, &af_info, year, quarter) == INT_ERR_CODE)
return INT_ERR_CODE;
- if ((qtr_freq % 1000) > 12)
+ if (qtr_freq % 1000 > 12)
*year -= 1;
return 0;
}
-static int _ISOWeek(struct date_info *dinfo)
+
+static i8
+_ISOWeek(date_info *dinfo)
{
- int week;
+ i8 week;
/* Estimate */
- week = (dinfo->day_of_year-1) - dinfo->day_of_week + 3;
- if (week >= 0) week = week / 7 + 1;
+ week = (dinfo->day_of_year - 1) - dinfo->day_of_week + 3;
+
+ if (week >= 0) {
+ week /= 7;
+ ++week;
+ }
+
/* Verify */
if (week < 0) {
@@ -1261,8 +1813,8 @@ static int _ISOWeek(struct date_info *dinfo)
else
week = 52;
} else if (week == 53) {
- /* Check if the week belongs to year or year+1 */
- if (31-dinfo->day + dinfo->day_of_week < 3) {
+ /* Check if the week belongs to year or year+1 */
+ if (31 - dinfo->day + dinfo->day_of_week < 3) {
week = 1;
}
}
@@ -1270,102 +1822,176 @@ static int _ISOWeek(struct date_info *dinfo)
return week;
}
-int get_date_info(npy_int64 ordinal, int freq, struct date_info *dinfo)
+
+i8
+get_date_info(i8 ordinal, i8 freq, date_info *dinfo)
{
- npy_int64 absdate = get_python_ordinal(ordinal, freq);
- /* printf("freq: %d, absdate: %d\n", freq, (int) absdate); */
- double abstime = get_abs_time(freq, absdate - ORD_OFFSET, ordinal);
- if (abstime < 0) {
- abstime += 86400;
- absdate -= 1;
- }
+ i8 absdate = get_python_ordinal(ordinal, freq);
+ i8 abstime = get_abs_time(freq, absdate - ORD_OFFSET, ordinal);
+
+ if (abstime < 0) {
+ // add a day's worth of us to time
+ abstime += US_PER_DAY;
+
+ // subtract a day from the date
+ --absdate;
+ }
- if(dInfoCalc_SetFromAbsDateTime(dinfo, absdate,
- abstime, GREGORIAN_CALENDAR))
+ if (dInfoCalc_SetFromAbsDateTime(dinfo, absdate, abstime, GREGORIAN))
return INT_ERR_CODE;
return 0;
}
-int pyear(npy_int64 ordinal, int freq) {
- struct date_info dinfo;
+i8
+pyear(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
get_date_info(ordinal, freq, &dinfo);
return dinfo.year;
}
-int pqyear(npy_int64 ordinal, int freq) {
- int year, quarter;
- if( _quarter_year(ordinal, freq, &year, &quarter) == INT_ERR_CODE)
+i8
+pqyear(i8 ordinal, i8 freq)
+{
+ i8 year, quarter;
+ if (_quarter_year(ordinal, freq, &year, &quarter) == INT_ERR_CODE)
return INT_ERR_CODE;
return year;
}
-int pquarter(npy_int64 ordinal, int freq) {
- int year, quarter;
- if(_quarter_year(ordinal, freq, &year, &quarter) == INT_ERR_CODE)
+i8
+pquarter(i8 ordinal, i8 freq)
+{
+ i8 year, quarter;
+ if (_quarter_year(ordinal, freq, &year, &quarter) == INT_ERR_CODE)
return INT_ERR_CODE;
return quarter;
}
-int pmonth(npy_int64 ordinal, int freq) {
- struct date_info dinfo;
- if(get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+i8
+pmonth(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
return INT_ERR_CODE;
return dinfo.month;
}
-int pday(npy_int64 ordinal, int freq) {
- struct date_info dinfo;
- if(get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+i8
+pday(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
return INT_ERR_CODE;
return dinfo.day;
}
-int pweekday(npy_int64 ordinal, int freq) {
- struct date_info dinfo;
- if(get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+i8
+pweekday(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
return INT_ERR_CODE;
return dinfo.day_of_week;
}
-int pday_of_week(npy_int64 ordinal, int freq) {
- struct date_info dinfo;
- if(get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+i8
+pday_of_week(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
return INT_ERR_CODE;
return dinfo.day_of_week;
}
-int pday_of_year(npy_int64 ordinal, int freq) {
- struct date_info dinfo;
- if(get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+i8
+pday_of_year(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
return INT_ERR_CODE;
return dinfo.day_of_year;
}
-int pweek(npy_int64 ordinal, int freq) {
- struct date_info dinfo;
- if(get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+i8
+pweek(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
return INT_ERR_CODE;
return _ISOWeek(&dinfo);
}
-int phour(npy_int64 ordinal, int freq) {
- struct date_info dinfo;
- if(get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+i8
+phour(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
return INT_ERR_CODE;
return dinfo.hour;
}
-int pminute(npy_int64 ordinal, int freq) {
- struct date_info dinfo;
- if(get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+i8
+pminute(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
return INT_ERR_CODE;
return dinfo.minute;
}
-int psecond(npy_int64 ordinal, int freq) {
- struct date_info dinfo;
- if(get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+i8
+psecond(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+ return INT_ERR_CODE;
+ return dinfo.second;
+}
+
+i8
+pmicrosecond(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+ return INT_ERR_CODE;
+ return dinfo.microsecond;
+}
+
+i8
+pnanosecond(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+ return INT_ERR_CODE;
+ return dinfo.nanosecond;
+}
+
+
+i8
+ppicosecond(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+ return INT_ERR_CODE;
+ return dinfo.picosecond;
+}
+
+i8
+pfemtosecond(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+ return INT_ERR_CODE;
+ return dinfo.femtosecond;
+}
+
+i8
+pattosecond(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
return INT_ERR_CODE;
- return (int)dinfo.second;
+ return dinfo.attosecond;
}
diff --git a/pandas/src/period.h b/pandas/src/period.h
index 4ba92bf8fde41..867daf597dbca 100644
--- a/pandas/src/period.h
+++ b/pandas/src/period.h
@@ -12,148 +12,215 @@
#include "headers/stdint.h"
#include "limits.h"
+
/*
* declarations from period here
*/
-#define GREGORIAN_CALENDAR 0
-#define JULIAN_CALENDAR 1
+typedef int64_t i8;
+
+enum CalendarType
+{
+ GREGORIAN,
+ JULIAN
+};
+
+static const i8 SECONDS_PER_DAY = 86400;
-#define SECONDS_PER_DAY ((double) 86400.0)
+#define Py_AssertWithArg(x, errortype, errorstr, a1) \
+ { \
+ if (!((x))) { \
+ PyErr_Format((errortype), (errorstr), (a1)); \
+ goto onError; \
+ } \
+ }
+
+#define Py_Error(errortype, errorstr) \
+ { \
+ PyErr_SetString((errortype), (errorstr)); \
+ goto onError; \
+ }
-#define Py_AssertWithArg(x,errortype,errorstr,a1) {if (!(x)) {PyErr_Format(errortype,errorstr,a1);goto onError;}}
-#define Py_Error(errortype,errorstr) {PyErr_SetString(errortype,errorstr);goto onError;}
/*** FREQUENCY CONSTANTS ***/
// HIGHFREQ_ORIG is the datetime ordinal from which to begin the second
// frequency ordinal sequence
-// typedef int64_t npy_int64;
+// typedef int64_t i8;
// begins second ordinal at 1/1/1970 unix epoch
// #define HIGHFREQ_ORIG 62135683200LL
-#define BASE_YEAR 1970
-#define ORD_OFFSET 719163LL // days until 1970-01-01
-#define BDAY_OFFSET 513689LL // days until 1970-01-01
-#define WEEK_OFFSET 102737LL
-#define HIGHFREQ_ORIG 0 // ORD_OFFSET * 86400LL // days until 1970-01-01
-
-#define FR_ANN 1000 /* Annual */
-#define FR_ANNDEC FR_ANN /* Annual - December year end*/
-#define FR_ANNJAN 1001 /* Annual - January year end*/
-#define FR_ANNFEB 1002 /* Annual - February year end*/
-#define FR_ANNMAR 1003 /* Annual - March year end*/
-#define FR_ANNAPR 1004 /* Annual - April year end*/
-#define FR_ANNMAY 1005 /* Annual - May year end*/
-#define FR_ANNJUN 1006 /* Annual - June year end*/
-#define FR_ANNJUL 1007 /* Annual - July year end*/
-#define FR_ANNAUG 1008 /* Annual - August year end*/
-#define FR_ANNSEP 1009 /* Annual - September year end*/
-#define FR_ANNOCT 1010 /* Annual - October year end*/
-#define FR_ANNNOV 1011 /* Annual - November year end*/
+static const i8 BASE_YEAR = 1970;
+static const i8 ORD_OFFSET = 719163; // days until 1970-01-01
+static const i8 BDAY_OFFSET = 513689; // days until 1970-01-01
+static const i8 WEEK_OFFSET = 102737;
+static const i8 HIGHFREQ_ORIG = 0; // formerly ORD_OFFSET * 86400LL (seconds until 1970-01-01)
+
+enum Annual
+{
+ FR_ANN = 1000, /* Annual */
+ FR_ANNDEC = FR_ANN, /* Annual - December year end*/
+ FR_ANNJAN, /* Annual - January year end*/
+ FR_ANNFEB, /* Annual - February year end*/
+ FR_ANNMAR, /* Annual - March year end*/
+ FR_ANNAPR, /* Annual - April year end*/
+ FR_ANNMAY, /* Annual - May year end*/
+ FR_ANNJUN, /* Annual - June year end*/
+ FR_ANNJUL, /* Annual - July year end*/
+ FR_ANNAUG, /* Annual - August year end*/
+ FR_ANNSEP, /* Annual - September year end*/
+ FR_ANNOCT, /* Annual - October year end*/
+ FR_ANNNOV /* Annual - November year end*/
+};
+
/* The standard quarterly frequencies with various fiscal year ends
eg, Q42005 for Q@OCT runs Aug 1, 2005 to Oct 31, 2005 */
-#define FR_QTR 2000 /* Quarterly - December year end (default quarterly) */
-#define FR_QTRDEC FR_QTR /* Quarterly - December year end */
-#define FR_QTRJAN 2001 /* Quarterly - January year end */
-#define FR_QTRFEB 2002 /* Quarterly - February year end */
-#define FR_QTRMAR 2003 /* Quarterly - March year end */
-#define FR_QTRAPR 2004 /* Quarterly - April year end */
-#define FR_QTRMAY 2005 /* Quarterly - May year end */
-#define FR_QTRJUN 2006 /* Quarterly - June year end */
-#define FR_QTRJUL 2007 /* Quarterly - July year end */
-#define FR_QTRAUG 2008 /* Quarterly - August year end */
-#define FR_QTRSEP 2009 /* Quarterly - September year end */
-#define FR_QTROCT 2010 /* Quarterly - October year end */
-#define FR_QTRNOV 2011 /* Quarterly - November year end */
-
-#define FR_MTH 3000 /* Monthly */
-
-#define FR_WK 4000 /* Weekly */
-#define FR_WKSUN FR_WK /* Weekly - Sunday end of week */
-#define FR_WKMON 4001 /* Weekly - Monday end of week */
-#define FR_WKTUE 4002 /* Weekly - Tuesday end of week */
-#define FR_WKWED 4003 /* Weekly - Wednesday end of week */
-#define FR_WKTHU 4004 /* Weekly - Thursday end of week */
-#define FR_WKFRI 4005 /* Weekly - Friday end of week */
-#define FR_WKSAT 4006 /* Weekly - Saturday end of week */
-
-#define FR_BUS 5000 /* Business days */
-#define FR_DAY 6000 /* Daily */
-#define FR_HR 7000 /* Hourly */
-#define FR_MIN 8000 /* Minutely */
-#define FR_SEC 9000 /* Secondly */
-
-#define FR_UND -10000 /* Undefined */
-
-#define INT_ERR_CODE INT32_MIN
-
-#define MEM_CHECK(item) if (item == NULL) { return PyErr_NoMemory(); }
-#define ERR_CHECK(item) if (item == NULL) { return NULL; }
-
-typedef struct asfreq_info {
- int from_week_end; // day the week ends on in the "from" frequency
- int to_week_end; // day the week ends on in the "to" frequency
-
- int from_a_year_end; // month the year ends on in the "from" frequency
- int to_a_year_end; // month the year ends on in the "to" frequency
-
- int from_q_year_end; // month the year ends on in the "from" frequency
- int to_q_year_end; // month the year ends on in the "to" frequency
+enum Quarterly
+{
+ FR_QTR = 2000, /* Quarterly - December year end (default quarterly) */
+ FR_QTRDEC = FR_QTR, /* Quarterly - December year end */
+ FR_QTRJAN, /* Quarterly - January year end */
+ FR_QTRFEB, /* Quarterly - February year end */
+ FR_QTRMAR, /* Quarterly - March year end */
+ FR_QTRAPR, /* Quarterly - April year end */
+ FR_QTRMAY, /* Quarterly - May year end */
+ FR_QTRJUN, /* Quarterly - June year end */
+ FR_QTRJUL, /* Quarterly - July year end */
+ FR_QTRAUG, /* Quarterly - August year end */
+ FR_QTRSEP, /* Quarterly - September year end */
+ FR_QTROCT, /* Quarterly - October year end */
+ FR_QTRNOV /* Quarterly - November year end */
+};
+
+enum Monthly
+{
+ FR_MTH = 3000
+};
+
+
+enum Weekly
+{
+ FR_WK = 4000, /* Weekly */
+ FR_WKSUN = FR_WK, /* Weekly - Sunday end of week */
+ FR_WKMON, /* Weekly - Monday end of week */
+ FR_WKTUE, /* Weekly - Tuesday end of week */
+ FR_WKWED, /* Weekly - Wednesday end of week */
+ FR_WKTHU, /* Weekly - Thursday end of week */
+ FR_WKFRI, /* Weekly - Friday end of week */
+ FR_WKSAT /* Weekly - Saturday end of week */
+};
+
+enum BusinessDaily { FR_BUS = 5000 };
+enum Daily { FR_DAY = 6000 };
+enum Hourly { FR_HR = 7000 };
+enum Minutely { FR_MIN = 8000 };
+enum Secondly { FR_SEC = 9000 };
+enum Microsecondly { FR_USEC = 10000 };
+
+enum Undefined { FR_UND = -10000 };
+
+static const i8 US_PER_SECOND = 1000000LL;
+static const i8 US_PER_MINUTE = 60 * 1000000LL;
+static const i8 US_PER_HOUR = 60 * 60 * 1000000LL;
+static const i8 US_PER_DAY = 24 * 60 * 60 * 1000000LL;
+static const i8 US_PER_WEEK = 7 * 24 * 60 * 60 * 1000000LL;
+
+static const i8 NS_PER_SECOND = 1000000000LL;
+static const i8 NS_PER_MINUTE = 60 * 1000000000LL;
+static const i8 NS_PER_HOUR = 60 * 60 * 1000000000LL;
+static const i8 NS_PER_DAY = 24 * 60 * 60 * 1000000000LL;
+static const i8 NS_PER_WEEK = 7 * 24 * 60 * 60 * 1000000000LL;
+
+// make sure INT64_MIN is a macro!
+static const i8 INT_ERR_CODE = INT64_MIN;
+
+#define MEM_CHECK(item) \
+ { \
+ if (item == NULL) \
+ return PyErr_NoMemory(); \
+ }
+
+#define ERR_CHECK(item) \
+ { \
+ if (item == NULL) \
+ return NULL; \
+ }
+
+typedef struct asfreq_info
+{
+ i8 from_week_end; // day the week ends on in the "from" frequency
+ i8 to_week_end; // day the week ends on in the "to" frequency
+
+ i8 from_a_year_end; // month the year ends on in the "from" frequency
+ i8 to_a_year_end; // month the year ends on in the "to" frequency
+
+ i8 from_q_year_end; // month the year ends on in the "from" frequency
+ i8 to_q_year_end; // month the year ends on in the "to" frequency
} asfreq_info;
-typedef struct date_info {
- npy_int64 absdate;
- double abstime;
-
- double second;
- int minute;
- int hour;
- int day;
- int month;
- int quarter;
- int year;
- int day_of_week;
- int day_of_year;
- int calendar;
+typedef struct date_info
+{
+ i8 absdate;
+ i8 abstime;
+
+ i8 attosecond;
+ i8 femtosecond;
+ i8 picosecond;
+ i8 nanosecond;
+ i8 microsecond;
+ i8 second;
+ i8 minute;
+ i8 hour;
+ i8 day;
+ i8 month;
+ i8 quarter;
+ i8 year;
+ i8 day_of_week;
+ i8 day_of_year;
+ i8 calendar;
} date_info;
-typedef npy_int64 (*freq_conv_func)(npy_int64, char, asfreq_info*);
+typedef i8 (*freq_conv_func)(i8, const char*, asfreq_info*);
/*
* new pandas API helper functions here
*/
-npy_int64 asfreq(npy_int64 period_ordinal, int freq1, int freq2, char relation);
-
-npy_int64 get_period_ordinal(int year, int month, int day,
- int hour, int minute, int second,
- int freq);
-
-npy_int64 get_python_ordinal(npy_int64 period_ordinal, int freq);
-
-int get_date_info(npy_int64 ordinal, int freq, struct date_info *dinfo);
-freq_conv_func get_asfreq_func(int fromFreq, int toFreq);
-void get_asfreq_info(int fromFreq, int toFreq, asfreq_info *af_info);
-
-int pyear(npy_int64 ordinal, int freq);
-int pqyear(npy_int64 ordinal, int freq);
-int pquarter(npy_int64 ordinal, int freq);
-int pmonth(npy_int64 ordinal, int freq);
-int pday(npy_int64 ordinal, int freq);
-int pweekday(npy_int64 ordinal, int freq);
-int pday_of_week(npy_int64 ordinal, int freq);
-int pday_of_year(npy_int64 ordinal, int freq);
-int pweek(npy_int64 ordinal, int freq);
-int phour(npy_int64 ordinal, int freq);
-int pminute(npy_int64 ordinal, int freq);
-int psecond(npy_int64 ordinal, int freq);
+i8 asfreq(i8 period_ordinal, i8 freq1, i8 freq2, const char* relation);
+
+i8 get_period_ordinal(i8 year, i8 month, i8 day, i8 hour, i8 minute, i8 second,
+ i8 microsecond, i8 freq);
+
+i8 get_python_ordinal(i8 period_ordinal, i8 freq);
+
+i8 get_date_info(i8 ordinal, i8 freq, struct date_info *dinfo);
+freq_conv_func get_asfreq_func(i8 fromFreq, i8 toFreq);
+void get_asfreq_info(i8 fromFreq, i8 toFreq, asfreq_info *af_info);
+
+i8 pyear(i8 ordinal, i8 freq);
+i8 pqyear(i8 ordinal, i8 freq);
+i8 pquarter(i8 ordinal, i8 freq);
+i8 pmonth(i8 ordinal, i8 freq);
+i8 pday(i8 ordinal, i8 freq);
+i8 pweekday(i8 ordinal, i8 freq);
+i8 pday_of_week(i8 ordinal, i8 freq);
+i8 pday_of_year(i8 ordinal, i8 freq);
+i8 pweek(i8 ordinal, i8 freq);
+i8 phour(i8 ordinal, i8 freq);
+i8 pminute(i8 ordinal, i8 freq);
+i8 psecond(i8 ordinal, i8 freq);
+i8 pmicrosecond(i8 ordinal, i8 freq);
+i8 pnanosecond(i8 ordinal, i8 freq);
+i8 ppicosecond(i8 ordinal, i8 freq);
+i8 pfemtosecond(i8 ordinal, i8 freq);
+i8 pattosecond(i8 ordinal, i8 freq);
-double getAbsTime(int freq, npy_int64 dailyDate, npy_int64 originalDate);
char *c_strftime(struct date_info *dinfo, char *fmt);
-int get_yq(npy_int64 ordinal, int freq, int *quarter, int *year);
+i8 get_yq(i8 ordinal, i8 freq, i8 *quarter, i8 *year);
#endif
diff --git a/pandas/tseries/converter.py b/pandas/tseries/converter.py
index dc0df89d1ef9c..bccef5e2e6a77 100644
--- a/pandas/tseries/converter.py
+++ b/pandas/tseries/converter.py
@@ -52,12 +52,15 @@ class TimeConverter(units.ConversionInterface):
def convert(value, unit, axis):
valid_types = (str, pydt.time)
if (isinstance(value, valid_types) or com.is_integer(value) or
- com.is_float(value)):
+ com.is_float(value)):
return time2num(value)
+
if isinstance(value, Index):
return value.map(time2num)
+
if isinstance(value, (list, tuple, np.ndarray)):
return [time2num(x) for x in value]
+
return value
@staticmethod
@@ -95,7 +98,6 @@ def __call__(self, x, pos=0):
return pydt.time(h, m, s, us).strftime(fmt)
-
### Period Conversion
@@ -107,7 +109,7 @@ def convert(values, units, axis):
raise TypeError('Axis must have `freq` set to convert to Periods')
valid_types = (str, datetime, Period, pydt.date, pydt.time)
if (isinstance(values, valid_types) or com.is_integer(values) or
- com.is_float(values)):
+ com.is_float(values)):
return get_datevalue(values, axis.freq)
if isinstance(values, Index):
return values.map(lambda x: get_datevalue(x, axis.freq))
@@ -286,19 +288,19 @@ def __call__(self):
if dmin > dmax:
dmax, dmin = dmin, dmax
- delta = relativedelta(dmax, dmin)
+ # delta = relativedelta(dmax, dmin)
# We need to cap at the endpoints of valid datetime
- try:
- start = dmin - delta
- except ValueError:
- start = _from_ordinal(1.0)
+ # try:
+ # start = dmin - delta
+ # except ValueError:
+ # start = _from_ordinal(1.0)
- try:
- stop = dmax + delta
- except ValueError:
- # The magic number!
- stop = _from_ordinal(3652059.9999999)
+ # try:
+ # stop = dmax + delta
+ # except ValueError:
+ # # The magic number!
+ # stop = _from_ordinal(3652059.9999999)
nmax, nmin = dates.date2num((dmax, dmin))
@@ -318,7 +320,7 @@ def __call__(self):
raise RuntimeError(('MillisecondLocator estimated to generate %d '
'ticks from %s to %s: exceeds Locator.MAXTICKS'
'* 2 (%d) ') %
- (estimate, dmin, dmax, self.MAXTICKS * 2))
+ (estimate, dmin, dmax, self.MAXTICKS * 2))
freq = '%dL' % self._get_interval()
tz = self.tz.tzname(None)
@@ -330,7 +332,7 @@ def __call__(self):
if len(all_dates) > 0:
locs = self.raise_if_exceeds(dates.date2num(all_dates))
return locs
- except Exception, e: # pragma: no cover
+ except Exception: # pragma: no cover
pass
lims = dates.date2num([dmin, dmax])
@@ -347,19 +349,19 @@ def autoscale(self):
if dmin > dmax:
dmax, dmin = dmin, dmax
- delta = relativedelta(dmax, dmin)
+ # delta = relativedelta(dmax, dmin)
# We need to cap at the endpoints of valid datetime
- try:
- start = dmin - delta
- except ValueError:
- start = _from_ordinal(1.0)
+ # try:
+ # start = dmin - delta
+ # except ValueError:
+ # start = _from_ordinal(1.0)
- try:
- stop = dmax + delta
- except ValueError:
- # The magic number!
- stop = _from_ordinal(3652059.9999999)
+ # try:
+ # stop = dmax + delta
+ # except ValueError:
+ # # The magic number!
+ # stop = _from_ordinal(3652059.9999999)
dmin, dmax = self.datalim_to_dt()
@@ -454,7 +456,9 @@ def _daily_finder(vmin, vmax, freq):
periodsperday = -1
if freq >= FreqGroup.FR_HR:
- if freq == FreqGroup.FR_SEC:
+ if freq == FreqGroup.FR_USEC:
+ periodsperday = 24 * 60 * 60 * 1000000
+ elif freq == FreqGroup.FR_SEC:
periodsperday = 24 * 60 * 60
elif freq == FreqGroup.FR_MIN:
periodsperday = 24 * 60
@@ -497,11 +501,8 @@ def _daily_finder(vmin, vmax, freq):
info_fmt = info['fmt']
def first_label(label_flags):
- if (label_flags[0] == 0) and (label_flags.size > 1) and \
- ((vmin_orig % 1) > 0.0):
- return label_flags[1]
- else:
- return label_flags[0]
+ conds = label_flags[0] == 0, label_flags.size > 1, vmin_orig % 1 > 0.0
+ return label_flags[int(all(conds))]
# Case 1. Less than a month
if span <= periodspermonth:
@@ -543,8 +544,7 @@ def _second_finder(label_interval):
info['min'][second_start & (_second % label_interval == 0)] = True
year_start = period_break(dates_, 'year')
info_fmt = info['fmt']
- info_fmt[second_start & (_second %
- label_interval == 0)] = '%H:%M:%S'
+ info_fmt[second_start & (_second % label_interval == 0)] = '%H:%M:%S'
info_fmt[day_start] = '%H:%M:%S\n%d-%b'
info_fmt[year_start] = '%H:%M:%S\n%d-%b\n%Y'
@@ -588,6 +588,7 @@ def _second_finder(label_interval):
info_fmt[day_start] = '%d'
info_fmt[month_start] = '%d\n%b'
info_fmt[year_start] = '%d\n%b\n%Y'
+
if not has_level_label(year_start, vmin_orig):
if not has_level_label(month_start, vmin_orig):
info_fmt[first_label(day_start)] = '%d\n%b\n%Y'
@@ -902,11 +903,10 @@ def autoscale(self):
vmax += 1
return nonsingular(vmin, vmax)
+
#####-------------------------------------------------------------------------
#---- --- Formatter ---
#####-------------------------------------------------------------------------
-
-
class TimeSeries_DateFormatter(Formatter):
"""
Formats the ticks along an axis controlled by a :class:`PeriodIndex`.
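The `first_label` rewrite in the converter hunk above collapses a three-way conditional into an indexing trick: `int(all(conds))` is `1` only when every condition holds, so the tuple of conditions selects between `label_flags[0]` and `label_flags[1]`. A minimal stdlib-only sketch of the equivalence (using `len()` in place of the numpy `.size` attribute):

```python
def first_label_verbose(label_flags, vmin_orig):
    # Original branching form from the pre-patch code
    if label_flags[0] == 0 and len(label_flags) > 1 and (vmin_orig % 1) > 0.0:
        return label_flags[1]
    return label_flags[0]

def first_label_compact(label_flags, vmin_orig):
    # Refactored form from the patch: all() folds the three conditions,
    # and int(True) == 1 picks the second flag only when all hold
    conds = label_flags[0] == 0, len(label_flags) > 1, (vmin_orig % 1) > 0.0
    return label_flags[int(all(conds))]

# Both forms agree on representative inputs
cases = [([0, 5, 9], 1.5), ([0, 5, 9], 2.0), ([3, 5], 1.5), ([0], 2.25)]
for flags, vmin in cases:
    assert first_label_verbose(flags, vmin) == first_label_compact(flags, vmin)
```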
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index 3bf29af8581a9..16a84fbfbe8cd 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -22,6 +22,7 @@ class FreqGroup(object):
FR_HR = 7000
FR_MIN = 8000
FR_SEC = 9000
+ FR_USEC = 10000
class Resolution(object):
@@ -34,11 +35,11 @@ class Resolution(object):
@classmethod
def get_str(cls, reso):
- return {RESO_US: 'microsecond',
- RESO_SEC: 'second',
- RESO_MIN: 'minute',
- RESO_HR: 'hour',
- RESO_DAY: 'day'}.get(reso, 'day')
+ return {cls.RESO_US: 'microsecond',
+ cls.RESO_SEC: 'second',
+ cls.RESO_MIN: 'minute',
+ cls.RESO_HR: 'hour',
+ cls.RESO_DAY: 'day'}.get(reso, 'day')
def get_reso_string(reso):
@@ -48,8 +49,10 @@ def get_reso_string(reso):
def get_to_timestamp_base(base):
if base <= FreqGroup.FR_WK:
return FreqGroup.FR_DAY
+
if FreqGroup.FR_HR <= base <= FreqGroup.FR_SEC:
return FreqGroup.FR_SEC
+
return base
@@ -57,6 +60,7 @@ def get_freq_group(freq):
if isinstance(freq, basestring):
base, mult = get_freq_code(freq)
freq = base
+
return (freq // 1000) * 1000
@@ -64,28 +68,29 @@ def get_freq(freq):
if isinstance(freq, basestring):
base, mult = get_freq_code(freq)
freq = base
+
return freq
def get_freq_code(freqstr):
"""
-
Parameters
----------
+ freqstr : str
Returns
-------
+ code, stride
"""
if isinstance(freqstr, DateOffset):
freqstr = (get_offset_name(freqstr), freqstr.n)
if isinstance(freqstr, tuple):
- if (com.is_integer(freqstr[0]) and
- com.is_integer(freqstr[1])):
- # e.g., freqstr = (2000, 1)
+ if (com.is_integer(freqstr[0]) and com.is_integer(freqstr[1])):
+ #e.g., freqstr = (2000, 1)
return freqstr
else:
- # e.g., freqstr = ('T', 5)
+ #e.g., freqstr = ('T', 5)
try:
code = _period_str_to_code(freqstr[0])
stride = freqstr[1]
@@ -285,7 +290,8 @@ def _get_freq_str(base, mult=1):
'Q': 'Q',
'A': 'A',
'W': 'W',
- 'M': 'M'
+ 'M': 'M',
+ 'U': 'U',
}
need_suffix = ['QS', 'BQ', 'BQS', 'AS', 'BA', 'BAS']
@@ -598,6 +604,7 @@ def get_standard_freq(freq):
"H": 7000, # Hourly
"T": 8000, # Minutely
"S": 9000, # Secondly
+ "U": 10000, # Microsecondly
}
_reverse_period_code_map = {}
@@ -625,6 +632,7 @@ def _period_alias_dictionary():
H_aliases = ["H", "HR", "HOUR", "HRLY", "HOURLY"]
T_aliases = ["T", "MIN", "MINUTE", "MINUTELY"]
S_aliases = ["S", "SEC", "SECOND", "SECONDLY"]
+ U_aliases = ["U", "USEC", "MICROSECOND", "MICROSECONDLY"]
for k in M_aliases:
alias_dict[k] = 'M'
@@ -644,6 +652,9 @@ def _period_alias_dictionary():
for k in S_aliases:
alias_dict[k] = 'S'
+ for k in U_aliases:
+ alias_dict[k] = 'U'
+
A_prefixes = ["A", "Y", "ANN", "ANNUAL", "ANNUALLY", "YR", "YEAR",
"YEARLY"]
@@ -711,6 +722,7 @@ def _period_alias_dictionary():
"hour": "H",
"minute": "T",
"second": "S",
+ "microsecond": "U",
}
@@ -1002,23 +1014,25 @@ def is_subperiod(source, target):
if _is_quarterly(source):
return _quarter_months_conform(_get_rule_month(source),
_get_rule_month(target))
- return source in ['D', 'B', 'M', 'H', 'T', 'S']
+ return source in ['D', 'B', 'M', 'H', 'T', 'S', 'U']
elif _is_quarterly(target):
- return source in ['D', 'B', 'M', 'H', 'T', 'S']
+ return source in ['D', 'B', 'M', 'H', 'T', 'S', 'U']
elif target == 'M':
- return source in ['D', 'B', 'H', 'T', 'S']
+ return source in ['D', 'B', 'H', 'T', 'S', 'U']
elif _is_weekly(target):
- return source in [target, 'D', 'B', 'H', 'T', 'S']
+ return source in [target, 'D', 'B', 'H', 'T', 'S', 'U']
elif target == 'B':
- return source in ['B', 'H', 'T', 'S']
+ return source in ['B', 'H', 'T', 'S', 'U']
elif target == 'D':
- return source in ['D', 'H', 'T', 'S']
+ return source in ['D', 'H', 'T', 'S', 'U']
elif target == 'H':
- return source in ['H', 'T', 'S']
+ return source in ['H', 'T', 'S', 'U']
elif target == 'T':
- return source in ['T', 'S']
+ return source in ['T', 'S', 'U']
elif target == 'S':
- return source in ['S']
+ return source in ['S', 'U']
+ elif target == 'U':
+ return source == target
def is_superperiod(source, target):
@@ -1053,23 +1067,25 @@ def is_superperiod(source, target):
smonth = _get_rule_month(source)
tmonth = _get_rule_month(target)
return _quarter_months_conform(smonth, tmonth)
- return target in ['D', 'B', 'M', 'H', 'T', 'S']
+ return target in ['D', 'B', 'M', 'H', 'T', 'S', 'U']
elif _is_quarterly(source):
- return target in ['D', 'B', 'M', 'H', 'T', 'S']
+ return target in ['D', 'B', 'M', 'H', 'T', 'S', 'U']
elif source == 'M':
- return target in ['D', 'B', 'H', 'T', 'S']
+ return target in ['D', 'B', 'H', 'T', 'S', 'U']
elif _is_weekly(source):
- return target in [source, 'D', 'B', 'H', 'T', 'S']
+ return target in [source, 'D', 'B', 'H', 'T', 'S', 'U']
elif source == 'B':
- return target in ['D', 'B', 'H', 'T', 'S']
+ return target in ['D', 'B', 'H', 'T', 'S', 'U']
elif source == 'D':
- return target in ['D', 'B', 'H', 'T', 'S']
+ return target in ['D', 'B', 'H', 'T', 'S', 'U']
elif source == 'H':
- return target in ['H', 'T', 'S']
+ return target in ['H', 'T', 'S', 'U']
elif source == 'T':
- return target in ['T', 'S']
+ return target in ['T', 'S', 'U']
elif source == 'S':
- return target in ['S']
+ return target in ['S', 'U']
+ elif source == 'U':
+ return target == source
def _get_rule_month(source, default='DEC'):
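The `is_subperiod`/`is_superperiod` changes above slot `'U'` in as the finest fixed frequency. Ignoring the business-day, weekly, monthly and quarterly branches, the sub-daily containment the patch encodes reduces to a position check on a fixed ladder; a minimal sketch (not the pandas implementation):

```python
# Coarsest to finest: day > hour > minute > second > microsecond
_LADDER = ['D', 'H', 'T', 'S', 'U']

def is_subperiod_sketch(source, target):
    # source is a subperiod of target when it is at least as fine-grained,
    # mirroring e.g. the patched branch: target 'S' accepts ['S', 'U']
    return _LADDER.index(source) >= _LADDER.index(target)

def is_superperiod_sketch(source, target):
    # the mirror image: source must be at least as coarse as target
    return _LADDER.index(source) <= _LADDER.index(target)
```

So `'U'` is a subperiod of every coarser fixed frequency, and only `'U'` itself is a subperiod of `'U'`, exactly as the new `elif target == 'U'` branch states.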
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index 24594d1bea9d6..f2cf1e0d899ae 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -116,7 +116,7 @@ def __repr__(self):
attrs = []
for attr in self.__dict__:
if ((attr == 'kwds' and len(self.kwds) == 0)
- or attr.startswith('_')):
+ or attr.startswith('_')):
continue
if attr not in exclude:
attrs.append('='.join((attr, repr(getattr(self, attr)))))
@@ -415,8 +415,7 @@ def apply(self, other):
n = self.n
wkday, days_in_month = tslib.monthrange(other.year, other.month)
- lastBDay = days_in_month - max(((wkday + days_in_month - 1)
- % 7) - 4, 0)
+ lastBDay = days_in_month - max(((wkday + days_in_month - 1) % 7) - 4, 0)
if n > 0 and not other.day >= lastBDay:
n = n - 1
@@ -587,7 +586,8 @@ def apply(self, other):
else:
months = self.n + 1
- return self.getOffsetOfMonth(other + relativedelta(months=months, day=1))
+ return self.getOffsetOfMonth(other + relativedelta(months=months,
+ day=1))
def getOffsetOfMonth(self, dt):
w = Week(weekday=self.weekday)
@@ -631,8 +631,7 @@ def apply(self, other):
n = self.n
wkday, days_in_month = tslib.monthrange(other.year, other.month)
- lastBDay = days_in_month - max(((wkday + days_in_month - 1)
- % 7) - 4, 0)
+ lastBDay = days_in_month - max(((wkday + days_in_month - 1) % 7) - 4, 0)
monthsToGo = 3 - ((other.month - self.startingMonth) % 3)
if monthsToGo == 3:
@@ -826,11 +825,11 @@ def apply(self, other):
years = n
if n > 0:
if (other.month < self.month or
- (other.month == self.month and other.day < lastBDay)):
+ (other.month == self.month and other.day < lastBDay)):
years -= 1
elif n <= 0:
if (other.month > self.month or
- (other.month == self.month and other.day > lastBDay)):
+ (other.month == self.month and other.day > lastBDay)):
years += 1
other = other + relativedelta(years=years)
@@ -874,11 +873,11 @@ def apply(self, other):
if n > 0: # roll back first for positive n
if (other.month < self.month or
- (other.month == self.month and other.day < first)):
+ (other.month == self.month and other.day < first)):
years -= 1
elif n <= 0: # roll forward
if (other.month > self.month or
- (other.month == self.month and other.day > first)):
+ (other.month == self.month and other.day > first)):
years += 1
# set first bday for result
@@ -930,7 +929,7 @@ def _decrement(date):
def _rollf(date):
if (date.month != self.month or
- date.day < tslib.monthrange(date.year, date.month)[1]):
+ date.day < tslib.monthrange(date.year, date.month)[1]):
date = _increment(date)
return date
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index 75decb91485ca..efd5eb5719404 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -1,12 +1,10 @@
# pylint: disable=E1101,E1103,W0232
-import operator
from datetime import datetime, date
import numpy as np
-import pandas.tseries.offsets as offsets
-from pandas.tseries.frequencies import (get_freq_code as _gfc,
- _month_numbers, FreqGroup)
+from pandas.tseries.frequencies import (get_freq_code as _gfc, _month_numbers,
+ FreqGroup)
from pandas.tseries.index import DatetimeIndex, Int64Index, Index
from pandas.tseries.tools import parse_time_string
import pandas.tseries.frequencies as _freq_mod
@@ -26,7 +24,9 @@ def _period_field_accessor(name, alias):
def f(self):
base, mult = _gfc(self.freq)
return tslib.get_period_field(alias, self.ordinal, base)
+
f.__name__ = name
+
return property(f)
@@ -34,17 +34,19 @@ def _field_accessor(name, alias):
def f(self):
base, mult = _gfc(self.freq)
return tslib.get_period_field_arr(alias, self.values, base)
+
f.__name__ = name
+
return property(f)
class Period(object):
- __slots__ = ['freq', 'ordinal']
+ __slots__ = 'freq', 'ordinal'
def __init__(self, value=None, freq=None, ordinal=None,
year=None, month=1, quarter=None, day=1,
- hour=0, minute=0, second=0):
+ hour=0, minute=0, second=0, microsecond=0):
"""
Represents a period of time
@@ -61,6 +63,7 @@ def __init__(self, value=None, freq=None, ordinal=None,
hour : int, default 0
minute : int, default 0
second : int, default 0
+ microsecond : int, default 0
"""
# freq points to a tuple (base, mult); base is one of the defined
# periods such as A, Q, etc. Every five minutes would be, e.g.,
@@ -72,8 +75,8 @@ def __init__(self, value=None, freq=None, ordinal=None,
self.ordinal = None
if ordinal is not None and value is not None:
- raise ValueError(("Only value or ordinal but not both should be "
- "given but not both"))
+ raise ValueError("Only one of value or ordinal should be "
+ "given, but not both")
elif ordinal is not None:
if not com.is_integer(ordinal):
raise ValueError("Ordinal must be an integer")
@@ -86,7 +89,8 @@ def __init__(self, value=None, freq=None, ordinal=None,
raise ValueError("If value is None, freq cannot be None")
self.ordinal = _ordinal_from_fields(year, month, quarter, day,
- hour, minute, second, freq)
+ hour, minute, second,
+ microsecond, freq)
elif isinstance(value, Period):
other = value
@@ -112,8 +116,8 @@ def __init__(self, value=None, freq=None, ordinal=None,
if freq is None:
raise ValueError('Must supply freq for datetime value')
else:
- msg = "Value must be Period, string, integer, or datetime"
- raise ValueError(msg)
+ raise ValueError("Value must be Period, string, integer, or "
+ "datetime")
base, mult = _gfc(freq)
if mult != 1:
@@ -122,7 +126,7 @@ def __init__(self, value=None, freq=None, ordinal=None,
if self.ordinal is None:
self.ordinal = tslib.period_ordinal(dt.year, dt.month, dt.day,
dt.hour, dt.minute, dt.second,
- base)
+ dt.microsecond, base)
self.freq = _freq_mod._get_freq_str(base)
@@ -168,14 +172,15 @@ def asfreq(self, freq, how='E'):
resampled : Period
"""
how = _validate_end_alias(how)
- base1, mult1 = _gfc(self.freq)
+ base1, _ = _gfc(self.freq)
base2, mult2 = _gfc(freq)
if mult2 != 1:
raise ValueError('Only mult == 1 supported')
- end = how == 'E'
- new_ordinal = tslib.period_asfreq(self.ordinal, base1, base2, end)
+ hows = {'S': 'S', 'E': 'E', 'START': 'S', 'END': 'E'}
+ new_ordinal = tslib.period_asfreq(self.ordinal, base1, base2,
+ hows[how.upper()])
return Period(ordinal=new_ordinal, freq=base2)
@@ -209,10 +214,10 @@ def to_timestamp(self, freq=None, how='start'):
how = _validate_end_alias(how)
if freq is None:
- base, mult = _gfc(self.freq)
+ base, _ = _gfc(self.freq)
freq = _freq_mod.get_to_timestamp_base(base)
- base, mult = _gfc(freq)
+ base, _ = _gfc(freq)
val = self.asfreq(freq, how)
dt64 = tslib.period_ordinal_to_dt64(val.ordinal, base)
@@ -224,6 +229,7 @@ def to_timestamp(self, freq=None, how='start'):
hour = _period_field_accessor('hour', 5)
minute = _period_field_accessor('minute', 6)
second = _period_field_accessor('second', 7)
+ microsecond = _period_field_accessor('microsecond', 11)
weekofyear = _period_field_accessor('week', 8)
week = weekofyear
dayofweek = _period_field_accessor('dayofweek', 10)
@@ -305,6 +311,26 @@ def strftime(self, fmt):
+-----------+--------------------------------+-------+
| ``%S`` | Second as a decimal number | \(4) |
| | [00,61]. | |
+ +-----------+--------------------------------+-------+
+ | ``%u`` | Microsecond as a decimal | |
+ | | number | |
+ | | [000000, 999999] | |
+-----------+--------------------------------+-------+
| ``%U`` | Week number of the year | \(5) |
| | (Sunday as the first day of | |
@@ -407,6 +433,8 @@ def _get_date_and_freq(value, freq):
freq = 'T'
elif reso == 'second':
freq = 'S'
+ elif reso == 'microsecond':
+ freq = 'U'
else:
raise ValueError("Invalid frequency or could not infer: %s" % reso)
@@ -428,9 +456,8 @@ def dt64arr_to_periodarr(data, freq, tz):
base, mult = _gfc(freq)
return tslib.dt64arr_to_periodarr(data.view('i8'), base, tz)
-# --- Period index sketch
-
+# --- Period index sketch
def _period_index_cmp(opname):
"""
Wrap comparison operations to convert datetime-like to datetime64
@@ -496,6 +523,7 @@ class PeriodIndex(Int64Index):
hour : int or array, default None
minute : int or array, default None
second : int or array, default None
+ microsecond : int or array, default None
tz : object, default None
Timezone for converting datetime64 data to Periods
@@ -518,7 +546,7 @@ def __new__(cls, data=None, ordinal=None,
freq=None, start=None, end=None, periods=None,
copy=False, name=None,
year=None, month=None, quarter=None, day=None,
- hour=None, minute=None, second=None,
+ hour=None, minute=None, second=None, microsecond=None,
tz=None):
freq = _freq_mod.get_standard_freq(freq)
@@ -534,7 +562,8 @@ def __new__(cls, data=None, ordinal=None,
if ordinal is not None:
data = np.asarray(ordinal, dtype=np.int64)
else:
- fields = [year, month, quarter, day, hour, minute, second]
+ fields = [year, month, quarter, day, hour, minute, second,
+ microsecond]
data, freq = cls._generate_range(start, end, periods,
freq, fields)
else:
@@ -556,10 +585,11 @@ def _generate_range(cls, start, end, periods, freq, fields):
'or endpoints, but not both')
subarr, freq = _get_ordinal_range(start, end, periods, freq)
elif field_count > 0:
- y, mth, q, d, h, minute, s = fields
+ y, mth, q, d, h, minute, s, us = fields
subarr, freq = _range_from_fields(year=y, month=mth, quarter=q,
day=d, hour=h, minute=minute,
- second=s, freq=freq)
+ second=s, microsecond=us,
+ freq=freq)
else:
raise ValueError('Not enough parameters to construct '
'Period range')
@@ -604,7 +634,7 @@ def _from_arraylike(cls, data, freq, tz):
base1, _ = _gfc(data.freq)
base2, _ = _gfc(freq)
data = tslib.period_asfreq_arr(data.values, base1,
- base2, 1)
+ base2, 'E')
else:
if freq is None and len(data) > 0:
freq = getattr(data[0], 'freq', None)
@@ -643,9 +673,14 @@ def _box_values(self, values):
def asof_locs(self, where, mask):
"""
+ Parameters
+ ----------
where : array of timestamps
mask : array of booleans where data is not NA
+ Returns
+ -------
+ result : array_like
"""
where_idx = where
if isinstance(where_idx, DatetimeIndex):
@@ -718,14 +753,15 @@ def asfreq(self, freq=None, how='E'):
freq = _freq_mod.get_standard_freq(freq)
- base1, mult1 = _gfc(self.freq)
+ base1, _ = _gfc(self.freq)
base2, mult2 = _gfc(freq)
if mult2 != 1:
raise ValueError('Only mult == 1 supported')
- end = how == 'E'
- new_data = tslib.period_asfreq_arr(self.values, base1, base2, end)
+ hows = {'S': 'S', 'E': 'E', 'START': 'S', 'END': 'E'}
+ new_data = tslib.period_asfreq_arr(self.values, base1, base2,
+ hows[how.upper()])
result = new_data.view(PeriodIndex)
result.name = self.name
@@ -741,6 +777,7 @@ def to_datetime(self, dayfirst=False):
hour = _field_accessor('hour', 5)
minute = _field_accessor('minute', 6)
second = _field_accessor('second', 7)
+ microsecond = _field_accessor('microsecond', 11)
weekofyear = _field_accessor('week', 8)
week = weekofyear
dayofweek = _field_accessor('dayofweek', 10)
@@ -786,10 +823,10 @@ def to_timestamp(self, freq=None, how='start'):
how = _validate_end_alias(how)
if freq is None:
- base, mult = _gfc(self.freq)
+ base, _ = _gfc(self.freq)
freq = _freq_mod.get_to_timestamp_base(base)
- base, mult = _gfc(freq)
+ base, _ = _gfc(freq)
new_data = self.asfreq(freq, how)
new_data = tslib.periodarr_to_dt64arr(new_data.values, base)
@@ -835,7 +872,7 @@ def get_value(self, series, key):
return super(PeriodIndex, self).get_value(series, key)
except (KeyError, IndexError):
try:
- asdt, parsed, reso = parse_time_string(key, self.freq)
+ asdt, _, reso = parse_time_string(key, self.freq)
grp = _freq_mod._infer_period_group(reso)
freqn = _freq_mod._period_group(self.freq)
@@ -876,7 +913,7 @@ def get_loc(self, key):
return self._engine.get_loc(key)
except KeyError:
try:
- asdt, parsed, reso = parse_time_string(key, self.freq)
+ asdt, _, _ = parse_time_string(key, self.freq)
key = asdt
except TypeError:
pass
@@ -1108,16 +1145,23 @@ def _get_ordinal_range(start, end, periods, freq):
return data, freq
-def _range_from_fields(year=None, month=None, quarter=None, day=None,
- hour=None, minute=None, second=None, freq=None):
+def _range_from_fields(year=None, month=None, quarter=None, day=None, hour=None,
+ minute=None, second=None, microsecond=None, freq=None):
+
+ if day is None:
+ day = 1
+
if hour is None:
hour = 0
+
if minute is None:
minute = 0
+
if second is None:
second = 0
- if day is None:
- day = 1
+
+ if microsecond is None:
+ microsecond = 0
ordinals = []
@@ -1134,17 +1178,18 @@ def _range_from_fields(year=None, month=None, quarter=None, day=None,
year, quarter = _make_field_arrays(year, quarter)
for y, q in zip(year, quarter):
y, m = _quarter_to_myear(y, q, freq)
- val = tslib.period_ordinal(y, m, 1, 1, 1, 1, base)
+ val = tslib.period_ordinal(y, m, 1, 1, 1, 1, 1, base)
ordinals.append(val)
else:
base, mult = _gfc(freq)
if mult != 1:
raise ValueError('Only mult == 1 supported')
- arrays = _make_field_arrays(year, month, day, hour, minute, second)
- for y, mth, d, h, mn, s in zip(*arrays):
- ordinals.append(tslib.period_ordinal(y, mth, d, h, mn, s, base))
-
+ arrays = _make_field_arrays(year, month, day, hour, minute, second,
+ microsecond)
+ for y, mth, d, h, mn, s, us in zip(*arrays):
+ ordinals.append(tslib.period_ordinal(y, mth, d, h, mn, s, us,
+ base))
return np.array(ordinals, dtype=np.int64), freq
@@ -1164,7 +1209,7 @@ def _make_field_arrays(*fields):
def _ordinal_from_fields(year, month, quarter, day, hour, minute,
- second, freq):
+ second, microsecond, freq):
base, mult = _gfc(freq)
if mult != 1:
raise ValueError('Only mult == 1 supported')
@@ -1172,7 +1217,8 @@ def _ordinal_from_fields(year, month, quarter, day, hour, minute,
if quarter is not None:
year, month = _quarter_to_myear(year, quarter, freq)
- return tslib.period_ordinal(year, month, day, hour, minute, second, base)
+ return tslib.period_ordinal(year, month, day, hour, minute, second,
+ microsecond, base)
def _quarter_to_myear(year, quarter, freq):
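`_ordinal_from_fields` above defers quarter handling to `_quarter_to_myear`. For quarters anchored to a December year-end (an assumption in this sketch; other anchor months shift both year and month), the mapping is simple arithmetic:

```python
def quarter_to_myear_sketch(year, quarter):
    # Hypothetical simplification of _quarter_to_myear for a
    # calendar-year quarterly frequency: quarter q starts at
    # month 3*(q-1) + 1 of the same year
    if not 1 <= quarter <= 4:
        raise ValueError('Quarter must be 1 <= q <= 4')
    month = (quarter - 1) * 3 + 1
    return year, month
```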
diff --git a/pandas/tseries/tests/test_frequencies.py b/pandas/tseries/tests/test_frequencies.py
index aad831ae48a64..fd078dcdb7da9 100644
--- a/pandas/tseries/tests/test_frequencies.py
+++ b/pandas/tseries/tests/test_frequencies.py
@@ -1,7 +1,6 @@
-from datetime import datetime, time, timedelta
-import sys
-import os
+from datetime import datetime, timedelta
import unittest
+from unittest import TestCase
import nose
@@ -9,13 +8,12 @@
from pandas import Index, DatetimeIndex, date_range, period_range
-from pandas.tseries.frequencies import to_offset, infer_freq
+from pandas.tseries.frequencies import (to_offset, infer_freq, Resolution,
+ get_reso_string)
from pandas.tseries.tools import to_datetime
import pandas.tseries.frequencies as fmod
import pandas.tseries.offsets as offsets
-import pandas.lib as lib
-
def test_to_offset_multiple():
freqstr = '2h30min'
@@ -253,7 +251,30 @@ def test_is_superperiod_subperiod():
assert(fmod.is_superperiod(offsets.Hour(), offsets.Minute()))
assert(fmod.is_subperiod(offsets.Minute(), offsets.Hour()))
+
+class TestResolution(TestCase):
+ def test_resolution(self):
+ resos = 'microsecond', 'second', 'minute', 'hour', 'day'
+
+ for i, r in enumerate(resos):
+ s = Resolution.get_str(i)
+ self.assertEqual(s, r)
+
+ self.assertEqual(Resolution.get_str(None), 'day')
+
+ def test_get_reso_string(self):
+ resos = 'microsecond', 'second', 'minute', 'hour', 'day'
+
+ for i, r in enumerate(resos):
+ s = Resolution.get_str(i)
+ self.assertEqual(s, r)
+ self.assertEqual(get_reso_string(i), s)
+
+ self.assertEqual(Resolution.get_str(None), 'day')
+ self.assertEqual(get_reso_string(None), 'day')
+
+
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
exit=False)
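The `TestResolution` cases above exercise the `get_str` fix in `frequencies.py`: inside a `classmethod`, the `RESO_*` constants must be reached through `cls` (or the class name), since the bare names raise `NameError` at call time. A self-contained sketch of the corrected lookup:

```python
class Resolution(object):
    # integer codes, finest first, matching the enumerate() order
    # used by the tests above
    RESO_US, RESO_SEC, RESO_MIN, RESO_HR, RESO_DAY = range(5)

    @classmethod
    def get_str(cls, reso):
        # cls-qualified keys; an unknown or None reso falls back to 'day'
        return {cls.RESO_US: 'microsecond',
                cls.RESO_SEC: 'second',
                cls.RESO_MIN: 'minute',
                cls.RESO_HR: 'hour',
                cls.RESO_DAY: 'day'}.get(reso, 'day')
```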
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index 22264a5613922..5119e94d6ea9d 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -22,7 +22,7 @@
import pandas.core.datetools as datetools
import pandas as pd
import numpy as np
-randn = np.random.randn
+from numpy.random import randn
from pandas import Series, TimeSeries, DataFrame
from pandas.util.testing import assert_series_equal, assert_almost_equal
@@ -93,7 +93,7 @@ def test_period_constructor(self):
i4 = Period('2005', freq='M')
i5 = Period('2005', freq='m')
- self.assert_(i1 != i4)
+ self.assertNotEqual(i1, i4)
self.assertEquals(i4, i5)
i1 = Period.now('Q')
@@ -183,9 +183,13 @@ def test_period_constructor(self):
i2 = Period(datetime(2007, 1, 1), freq='M')
self.assertEqual(i1, i2)
- self.assertRaises(ValueError, Period, ordinal=200701)
+ i1 = Period(date(2007, 1, 1), freq='U')
+ i2 = Period(datetime(2007, 1, 1), freq='U')
+ self.assertEqual(i1, i2)
- self.assertRaises(KeyError, Period, '2007-1-1', freq='U')
+ # errors
+ self.assertRaises(ValueError, Period, ordinal=200701)
+ self.assertRaises(KeyError, Period, '2007-1-1', freq='L')
def test_freq_str(self):
i1 = Period('1982', freq='Min')
@@ -200,8 +204,12 @@ def test_repr(self):
def test_strftime(self):
p = Period('2000-1-1 12:34:12', freq='S')
- self.assert_(p.strftime('%Y-%m-%d %H:%M:%S') ==
- '2000-01-01 12:34:12')
+ s = p.strftime('%Y-%m-%d %H:%M:%S')
+ self.assertEqual(s, '2000-01-01 12:34:12')
+
+ p = Period('2001-1-1 12:34:12', freq='U')
+ s = p.strftime('%Y-%m-%d %H:%M:%S.%u')
+ self.assertEqual(s, '2001-01-01 12:34:12.000000')
def test_sub_delta(self):
left, right = Period('2011', freq='A'), Period('2007', freq='A')
@@ -223,13 +231,12 @@ def test_to_timestamp(self):
for a in aliases:
self.assertEquals(end_ts, p.to_timestamp('D', how=a))
- from_lst = ['A', 'Q', 'M', 'W', 'B',
- 'D', 'H', 'Min', 'S']
+ from_lst = 'A', 'Q', 'M', 'W', 'B', 'D', 'H', 'Min', 'S', 'U'
def _ex(p):
return Timestamp((p + 1).start_time.value - 1)
- for i, fcode in enumerate(from_lst):
+ for fcode in from_lst:
p = Period('1982', freq=fcode)
result = p.to_timestamp().to_period(fcode)
self.assertEquals(result, p)
@@ -254,22 +261,26 @@ def _ex(p):
expected = datetime(1985, 12, 31)
self.assertEquals(result, expected)
- expected = datetime(1985, 1, 1)
- result = p.to_timestamp('H', how='start')
- self.assertEquals(result, expected)
- result = p.to_timestamp('T', how='start')
- self.assertEquals(result, expected)
- result = p.to_timestamp('S', how='start')
+ result = p.to_timestamp('U', how='end')
+ expected = datetime(1985, 12, 31, 23, 59, 59, 999999)
self.assertEquals(result, expected)
+ expected = datetime(1985, 1, 1)
+
+ for freq in ('H', 'T', 'S', 'U'):
+ result = p.to_timestamp(freq, how='start')
+ self.assertEquals(result, expected)
+
self.assertRaises(ValueError, p.to_timestamp, '5t')
def test_start_time(self):
- freq_lst = ['A', 'Q', 'M', 'D', 'H', 'T', 'S']
+ freq_lst = 'A', 'Q', 'M', 'D', 'H', 'T', 'S', 'U'
xp = datetime(2012, 1, 1)
+
for f in freq_lst:
p = Period('2012', freq=f)
self.assertEquals(p.start_time, xp)
+
self.assertEquals(Period('2012', freq='B').start_time,
datetime(2011, 12, 30))
self.assertEquals(Period('2012', freq='W').start_time,
@@ -278,8 +289,8 @@ def test_start_time(self):
def test_end_time(self):
p = Period('2012', freq='A')
- def _ex(*args):
- return Timestamp(Timestamp(datetime(*args)).value - 1)
+ def _ex(*args, **kwargs):
+ return Timestamp(Timestamp(datetime(*args, **kwargs)).value - 1)
xp = _ex(2013, 1, 1)
self.assertEquals(xp, p.end_time)
@@ -300,6 +311,18 @@ def _ex(*args):
p = Period('2012', freq='H')
self.assertEquals(p.end_time, xp)
+ xp = _ex(2012, 1, 1, hour=0, minute=1)
+ p = Period('2012', freq='T')
+ self.assertEquals(p.end_time, xp)
+
+ xp = _ex(2012, 1, 1, hour=0, minute=0, second=1)
+ p = Period('2012', freq='S')
+ self.assertEquals(p.end_time, xp)
+
+ xp = _ex(2012, 1, 1, hour=0, minute=0, second=0, microsecond=1)
+ p = Period('2012', freq='U')
+ self.assertEquals(p.end_time, xp)
+
xp = _ex(2012, 1, 2)
self.assertEquals(Period('2012', freq='B').end_time, xp)
@@ -336,7 +359,7 @@ def test_properties_monthly(self):
assert_equal(m_ival_x.quarter, 3)
elif 10 <= x + 1 <= 12:
assert_equal(m_ival_x.quarter, 4)
- assert_equal(m_ival_x.month, x + 1)
+ assert_equal(m_ival_x.month, x + 1)
def test_properties_weekly(self):
# Test properties on Periods with daily frequency.
@@ -409,6 +432,21 @@ def test_properties_secondly(self):
assert_equal(s_date.minute, 0)
assert_equal(s_date.second, 0)
+ def test_properties_microsecondly(self):
+ u_date = Period(freq='U', year=2007, month=1, day=1, hour=0, minute=40,
+ second=5, microsecond=10)
+ #
+ assert_equal(u_date.year, 2007)
+ assert_equal(u_date.quarter, 1)
+ assert_equal(u_date.month, 1)
+ assert_equal(u_date.day, 1)
+ assert_equal(u_date.weekday, 0)
+ assert_equal(u_date.dayofyear, 1)
+ assert_equal(u_date.hour, 0)
+ assert_equal(u_date.minute, 40)
+ assert_equal(u_date.second, 5)
+ assert_equal(u_date.microsecond, 10)
+
def test_pnow(self):
dt = datetime.now()
@@ -436,23 +474,24 @@ def test_constructor_corner(self):
def test_constructor_infer_freq(self):
p = Period('2007-01-01')
- self.assert_(p.freq == 'D')
+ self.assertEqual(p.freq, 'D')
p = Period('2007-01-01 07')
- self.assert_(p.freq == 'H')
+ self.assertEqual(p.freq, 'H')
p = Period('2007-01-01 07:10')
- self.assert_(p.freq == 'T')
+ self.assertEqual(p.freq, 'T')
p = Period('2007-01-01 07:10:15')
- self.assert_(p.freq == 'S')
+ self.assertEqual(p.freq, 'S')
- self.assertRaises(ValueError, Period, '2007-01-01 07:10:15.123456')
+ p = Period('2007-01-01 07:10:15.123456')
+ self.assertEqual(p.freq, 'U')
def test_comparisons(self):
p = Period('2007-01-01')
self.assertEquals(p, p)
- self.assert_(not p == 1)
+ self.assertNotEqual(p, 1)
def noWrap(item):
@@ -500,6 +539,11 @@ def test_conv_annual(self):
hour=0, minute=0, second=0)
ival_A_to_S_end = Period(freq='S', year=2007, month=12, day=31,
hour=23, minute=59, second=59)
+ ival_A_to_U_start = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=0, microsecond=0)
+ ival_A_to_U_end = Period(freq='U', year=2007, month=12, day=31,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
ival_AJAN_to_D_end = Period(freq='D', year=2007, month=1, day=31)
ival_AJAN_to_D_start = Period(freq='D', year=2006, month=2, day=1)
@@ -526,6 +570,8 @@ def test_conv_annual(self):
assert_equal(ival_A.asfreq('T', 'E'), ival_A_to_T_end)
assert_equal(ival_A.asfreq('S', 'S'), ival_A_to_S_start)
assert_equal(ival_A.asfreq('S', 'E'), ival_A_to_S_end)
+ assert_equal(ival_A.asfreq('U', 'S'), ival_A_to_U_start)
+ assert_equal(ival_A.asfreq('U', 'E'), ival_A_to_U_end)
assert_equal(ival_AJAN.asfreq('D', 'S'), ival_AJAN_to_D_start)
assert_equal(ival_AJAN.asfreq('D', 'E'), ival_AJAN_to_D_end)
@@ -568,6 +614,11 @@ def test_conv_quarterly(self):
hour=0, minute=0, second=0)
ival_Q_to_S_end = Period(freq='S', year=2007, month=3, day=31,
hour=23, minute=59, second=59)
+ ival_Q_to_U_start = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=0, microsecond=0)
+ ival_Q_to_U_end = Period(freq='U', year=2007, month=3, day=31,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
ival_QEJAN_to_D_start = Period(freq='D', year=2006, month=2, day=1)
ival_QEJAN_to_D_end = Period(freq='D', year=2006, month=4, day=30)
@@ -592,6 +643,8 @@ def test_conv_quarterly(self):
assert_equal(ival_Q.asfreq('Min', 'E'), ival_Q_to_T_end)
assert_equal(ival_Q.asfreq('S', 'S'), ival_Q_to_S_start)
assert_equal(ival_Q.asfreq('S', 'E'), ival_Q_to_S_end)
+ assert_equal(ival_Q.asfreq('U', 'S'), ival_Q_to_U_start)
+ assert_equal(ival_Q.asfreq('U', 'E'), ival_Q_to_U_end)
assert_equal(ival_QEJAN.asfreq('D', 'S'), ival_QEJAN_to_D_start)
assert_equal(ival_QEJAN.asfreq('D', 'E'), ival_QEJAN_to_D_end)
@@ -626,6 +679,11 @@ def test_conv_monthly(self):
hour=0, minute=0, second=0)
ival_M_to_S_end = Period(freq='S', year=2007, month=1, day=31,
hour=23, minute=59, second=59)
+ ival_M_to_U_start = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=0, microsecond=0)
+ ival_M_to_U_end = Period(freq='U', year=2007, month=1, day=31,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
assert_equal(ival_M.asfreq('A'), ival_M_to_A)
assert_equal(ival_M_end_of_year.asfreq('A'), ival_M_to_A)
@@ -644,6 +702,8 @@ def test_conv_monthly(self):
assert_equal(ival_M.asfreq('Min', 'E'), ival_M_to_T_end)
assert_equal(ival_M.asfreq('S', 'S'), ival_M_to_S_start)
assert_equal(ival_M.asfreq('S', 'E'), ival_M_to_S_end)
+ assert_equal(ival_M.asfreq('U', 'S'), ival_M_to_U_start)
+ assert_equal(ival_M.asfreq('U', 'E'), ival_M_to_U_end)
assert_equal(ival_M.asfreq('M'), ival_M)
@@ -715,6 +775,11 @@ def test_conv_weekly(self):
hour=0, minute=0, second=0)
ival_W_to_S_end = Period(freq='S', year=2007, month=1, day=7,
hour=23, minute=59, second=59)
+ ival_W_to_U_start = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=0, microsecond=0)
+ ival_W_to_U_end = Period(freq='U', year=2007, month=1, day=7,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
assert_equal(ival_W.asfreq('A'), ival_W_to_A)
assert_equal(ival_W_end_of_year.asfreq('A'),
@@ -754,6 +819,9 @@ def test_conv_weekly(self):
assert_equal(ival_W.asfreq('S', 'S'), ival_W_to_S_start)
assert_equal(ival_W.asfreq('S', 'E'), ival_W_to_S_end)
+ assert_equal(ival_W.asfreq('U', 'S'), ival_W_to_U_start)
+ assert_equal(ival_W.asfreq('U', 'E'), ival_W_to_U_end)
+
assert_equal(ival_W.asfreq('WK'), ival_W)
def test_conv_business(self):
@@ -782,6 +850,11 @@ def test_conv_business(self):
hour=0, minute=0, second=0)
ival_B_to_S_end = Period(freq='S', year=2007, month=1, day=1,
hour=23, minute=59, second=59)
+ ival_B_to_U_start = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=0, microsecond=0)
+ ival_B_to_U_end = Period(freq='U', year=2007, month=1, day=1,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
assert_equal(ival_B.asfreq('A'), ival_B_to_A)
assert_equal(ival_B_end_of_year.asfreq('A'), ival_B_to_A)
@@ -800,6 +873,8 @@ def test_conv_business(self):
assert_equal(ival_B.asfreq('Min', 'E'), ival_B_to_T_end)
assert_equal(ival_B.asfreq('S', 'S'), ival_B_to_S_start)
assert_equal(ival_B.asfreq('S', 'E'), ival_B_to_S_end)
+ assert_equal(ival_B.asfreq('U', 'S'), ival_B_to_U_start)
+ assert_equal(ival_B.asfreq('U', 'E'), ival_B_to_U_end)
assert_equal(ival_B.asfreq('B'), ival_B)
@@ -845,6 +920,11 @@ def test_conv_daily(self):
hour=0, minute=0, second=0)
ival_D_to_S_end = Period(freq='S', year=2007, month=1, day=1,
hour=23, minute=59, second=59)
+ ival_D_to_U_start = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=0, microsecond=0)
+ ival_D_to_U_end = Period(freq='U', year=2007, month=1, day=1,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
assert_equal(ival_D.asfreq('A'), ival_D_to_A)
@@ -871,12 +951,17 @@ def test_conv_daily(self):
assert_equal(ival_D_sunday.asfreq('B', 'S'), ival_B_friday)
assert_equal(ival_D_sunday.asfreq('B', 'E'), ival_B_monday)
+ assert_equal(ival_D_monday.asfreq('B', 'S'), ival_B_monday)
+ assert_equal(ival_D_monday.asfreq('B', 'E'), ival_B_monday)
+
assert_equal(ival_D.asfreq('H', 'S'), ival_D_to_H_start)
assert_equal(ival_D.asfreq('H', 'E'), ival_D_to_H_end)
assert_equal(ival_D.asfreq('Min', 'S'), ival_D_to_T_start)
assert_equal(ival_D.asfreq('Min', 'E'), ival_D_to_T_end)
assert_equal(ival_D.asfreq('S', 'S'), ival_D_to_S_start)
assert_equal(ival_D.asfreq('S', 'E'), ival_D_to_S_end)
+ assert_equal(ival_D.asfreq('U', 'S'), ival_D_to_U_start)
+ assert_equal(ival_D.asfreq('U', 'E'), ival_D_to_U_end)
assert_equal(ival_D.asfreq('D'), ival_D)
@@ -912,6 +997,10 @@ def test_conv_hourly(self):
hour=0, minute=0, second=0)
ival_H_to_S_end = Period(freq='S', year=2007, month=1, day=1,
hour=0, minute=59, second=59)
+ ival_H_to_U_start = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=0, microsecond=0)
+ ival_H_to_U_end = Period(freq='U', year=2007, month=1, day=1, hour=0,
+ minute=59, second=59, microsecond=999999)
assert_equal(ival_H.asfreq('A'), ival_H_to_A)
assert_equal(ival_H_end_of_year.asfreq('A'), ival_H_to_A)
@@ -928,16 +1017,20 @@ def test_conv_hourly(self):
assert_equal(ival_H.asfreq('Min', 'S'), ival_H_to_T_start)
assert_equal(ival_H.asfreq('Min', 'E'), ival_H_to_T_end)
+
assert_equal(ival_H.asfreq('S', 'S'), ival_H_to_S_start)
assert_equal(ival_H.asfreq('S', 'E'), ival_H_to_S_end)
+ assert_equal(ival_H.asfreq('U', 'S'), ival_H_to_U_start)
+ assert_equal(ival_H.asfreq('U', 'E'), ival_H_to_U_end)
+
assert_equal(ival_H.asfreq('H'), ival_H)
def test_conv_minutely(self):
# frequency conversion tests: from Minutely Frequency"
- ival_T = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=0)
+ ival_T = Period(freq='Min', year=2007, month=1, day=1, hour=0,
+ minute=0)
ival_T_end_of_year = Period(freq='Min', year=2007, month=12, day=31,
hour=23, minute=59)
ival_T_end_of_quarter = Period(freq='Min', year=2007, month=3, day=31,
@@ -965,6 +1058,11 @@ def test_conv_minutely(self):
hour=0, minute=0, second=0)
ival_T_to_S_end = Period(freq='S', year=2007, month=1, day=1,
hour=0, minute=0, second=59)
+ ival_T_to_U_start = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=0, microsecond=0)
+ ival_T_to_U_end = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=59,
+ microsecond=999999)
assert_equal(ival_T.asfreq('A'), ival_T_to_A)
assert_equal(ival_T_end_of_year.asfreq('A'), ival_T_to_A)
@@ -984,6 +1082,9 @@ def test_conv_minutely(self):
assert_equal(ival_T.asfreq('S', 'S'), ival_T_to_S_start)
assert_equal(ival_T.asfreq('S', 'E'), ival_T_to_S_end)
+ assert_equal(ival_T.asfreq('U', 'S'), ival_T_to_U_start)
+ assert_equal(ival_T.asfreq('U', 'E'), ival_T_to_U_end)
+
assert_equal(ival_T.asfreq('Min'), ival_T)
def test_conv_secondly(self):
@@ -992,21 +1093,32 @@ def test_conv_secondly(self):
ival_S = Period(freq='S', year=2007, month=1, day=1,
hour=0, minute=0, second=0)
ival_S_end_of_year = Period(freq='S', year=2007, month=12, day=31,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59,
+ microsecond=999999)
ival_S_end_of_quarter = Period(freq='S', year=2007, month=3, day=31,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59,
+ microsecond=999999)
ival_S_end_of_month = Period(freq='S', year=2007, month=1, day=31,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59,
+ microsecond=999999)
ival_S_end_of_week = Period(freq='S', year=2007, month=1, day=7,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59,
+ microsecond=999999)
ival_S_end_of_day = Period(freq='S', year=2007, month=1, day=1,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59,
+ microsecond=999999)
ival_S_end_of_bus = Period(freq='S', year=2007, month=1, day=1,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59,
+ microsecond=999999)
ival_S_end_of_hour = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=59, second=59)
+ hour=0, minute=59, second=59,
+ microsecond=999999)
ival_S_end_of_minute = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=0, second=59)
+ hour=0, minute=0, second=59,
+ microsecond=999999)
+ ival_S_end_of_second = Period(freq='S', year=2007, month=1, day=1,
+ hour=0, minute=0, second=0,
+ microsecond=999999)
ival_S_to_A = Period(freq='A', year=2007)
ival_S_to_Q = Period(freq='Q', year=2007, quarter=1)
@@ -1014,30 +1126,114 @@ def test_conv_secondly(self):
ival_S_to_W = Period(freq='WK', year=2007, month=1, day=7)
ival_S_to_D = Period(freq='D', year=2007, month=1, day=1)
ival_S_to_B = Period(freq='B', year=2007, month=1, day=1)
- ival_S_to_H = Period(freq='H', year=2007, month=1, day=1,
- hour=0)
- ival_S_to_T = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=0)
+ ival_S_to_H = Period(freq='H', year=2007, month=1, day=1, hour=0)
+ ival_S_to_T = Period(freq='Min', year=2007, month=1, day=1, hour=0,
+ minute=0)
+ ival_S_to_U = Period(freq='U', year=2007, month=1, day=1, hour=0,
+ minute=0, second=0, microsecond=999999)
+
+ ival_S_to_U_start = Period(freq='U', year=2007, month=1, day=1, hour=0,
+ minute=0, second=0, microsecond=0)
+ ival_S_to_U_end = Period(freq='U', year=2007, month=1, day=1, hour=0,
+ minute=0, second=0, microsecond=999999)
assert_equal(ival_S.asfreq('A'), ival_S_to_A)
assert_equal(ival_S_end_of_year.asfreq('A'), ival_S_to_A)
+
assert_equal(ival_S.asfreq('Q'), ival_S_to_Q)
assert_equal(ival_S_end_of_quarter.asfreq('Q'), ival_S_to_Q)
+
assert_equal(ival_S.asfreq('M'), ival_S_to_M)
assert_equal(ival_S_end_of_month.asfreq('M'), ival_S_to_M)
+
assert_equal(ival_S.asfreq('WK'), ival_S_to_W)
assert_equal(ival_S_end_of_week.asfreq('WK'), ival_S_to_W)
+
assert_equal(ival_S.asfreq('D'), ival_S_to_D)
assert_equal(ival_S_end_of_day.asfreq('D'), ival_S_to_D)
+
assert_equal(ival_S.asfreq('B'), ival_S_to_B)
assert_equal(ival_S_end_of_bus.asfreq('B'), ival_S_to_B)
+
assert_equal(ival_S.asfreq('H'), ival_S_to_H)
assert_equal(ival_S_end_of_hour.asfreq('H'), ival_S_to_H)
+
assert_equal(ival_S.asfreq('Min'), ival_S_to_T)
assert_equal(ival_S_end_of_minute.asfreq('Min'), ival_S_to_T)
+ assert_equal(ival_S.asfreq('U'), ival_S_to_U)
+ assert_equal(ival_S_end_of_second.asfreq('U'), ival_S_to_U)
+
+ assert_equal(ival_S.asfreq('U', 'S'), ival_S_to_U_start)
+ assert_equal(ival_S.asfreq('U', 'E'), ival_S_to_U_end)
+
assert_equal(ival_S.asfreq('S'), ival_S)
+ def test_conv_microsecondly(self):
+ ival_U = Period(freq='U', year=2007, month=1, day=1, hour=0, minute=0,
+ second=0, microsecond=0)
+ ival_U_end_of_year = Period(freq='U', year=2007, month=12, day=31,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
+ ival_U_end_of_quarter = Period(freq='U', year=2007, month=3, day=31,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
+ ival_U_end_of_month = Period(freq='U', year=2007, month=1, day=31,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
+ ival_U_end_of_week = Period(freq='U', year=2007, month=1, day=7,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
+ ival_U_end_of_day = Period(freq='U', year=2007, month=1, day=1,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
+ ival_U_end_of_bus = Period(freq='U', year=2007, month=1, day=1,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
+ ival_U_end_of_hour = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=59, second=59,
+ microsecond=999999)
+ ival_U_end_of_minute = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=59,
+ microsecond=999999)
+ ival_U_end_of_second = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=0,
+ microsecond=999999)
+
+ ival_U_to_A = Period(freq='A', year=2007)
+ ival_U_to_Q = Period(freq='Q', year=2007, quarter=1)
+ ival_U_to_M = Period(freq='M', year=2007, month=1)
+ ival_U_to_W = Period(freq='WK', year=2007, month=1, day=7)
+ ival_U_to_D = Period(freq='D', year=2007, month=1, day=1)
+ ival_U_to_B = Period(freq='B', year=2007, month=1, day=1)
+ ival_U_to_H = Period(freq='H', year=2007, month=1, day=1, hour=0)
+ ival_U_to_T = Period(freq='Min', year=2007, month=1, day=1, hour=0,
+ minute=0)
+ ival_U_to_S = Period(freq='S', year=2007, month=1, day=1, hour=0,
+ minute=0, second=0)
+
+ assert_equal(ival_U.asfreq('A'), ival_U_to_A)
+ assert_equal(ival_U.asfreq('Q'), ival_U_to_Q)
+ assert_equal(ival_U.asfreq('M'), ival_U_to_M)
+ assert_equal(ival_U.asfreq('WK'), ival_U_to_W)
+ assert_equal(ival_U.asfreq('D'), ival_U_to_D)
+ assert_equal(ival_U.asfreq('B'), ival_U_to_B)
+ assert_equal(ival_U.asfreq('H', 'S'), ival_U_to_H)
+ assert_equal(ival_U.asfreq('Min', 'S'), ival_U_to_T)
+ assert_equal(ival_U.asfreq('S'), ival_U_to_S)
+
+ assert_equal(ival_U_end_of_year.asfreq('A'), ival_U_to_A)
+ assert_equal(ival_U_end_of_quarter.asfreq('Q'), ival_U_to_Q)
+ assert_equal(ival_U_end_of_month.asfreq('M'), ival_U_to_M)
+ assert_equal(ival_U_end_of_week.asfreq('WK'), ival_U_to_W)
+ assert_equal(ival_U_end_of_day.asfreq('D'), ival_U_to_D)
+ assert_equal(ival_U_end_of_bus.asfreq('B'), ival_U_to_B)
+ assert_equal(ival_U_end_of_hour.asfreq('H', 'S'), ival_U_to_H)
+ assert_equal(ival_U_end_of_minute.asfreq('Min', 'S'), ival_U_to_T)
+ assert_equal(ival_U_end_of_second.asfreq('S'), ival_U_to_S)
+
+ assert_equal(ival_U.asfreq('U'), ival_U)
+
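As a plain-Python sketch of the end-of-span convention these 'E' conversions assert (the helper name is illustrative, not part of pandas):

```python
from datetime import datetime, timedelta

def period_end(start, span):
    # A period covers the half-open interval [start, start + span); its
    # 'end' at microsecond resolution is one microsecond before the next
    # period begins -- hence the microsecond=999999 expectations above.
    return start + span - timedelta(microseconds=1)

assert period_end(datetime(2007, 1, 1), timedelta(days=1)) == \
    datetime(2007, 1, 1, 23, 59, 59, 999999)
```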
class TestPeriodIndex(TestCase):
def __init__(self, *args, **kwds):
@@ -1074,9 +1270,8 @@ def test_constructor_field_arrays(self):
expected = period_range('1990Q3', '2009Q2', freq='Q-DEC')
self.assert_(index.equals(expected))
- self.assertRaises(
- ValueError, PeriodIndex, year=years, quarter=quarters,
- freq='2Q-DEC')
+ self.assertRaises(ValueError, PeriodIndex, year=years,
+ quarter=quarters, freq='2Q-DEC')
index = PeriodIndex(year=years, quarter=quarters)
self.assert_(index.equals(expected))
@@ -1097,9 +1292,10 @@ def test_constructor_field_arrays(self):
self.assert_(idx.equals(exp))
def test_constructor_U(self):
- # U was used as undefined period
- self.assertRaises(KeyError, period_range, '2007-1-1', periods=500,
- freq='U')
+ pass
+ # # U was used as undefined period
+ # self.assertRaises(KeyError, period_range, '2007-1-1', periods=500,
+ # freq='U')
def test_constructor_arrays_negative_year(self):
years = np.arange(1960, 2000).repeat(4)
@@ -1217,8 +1413,8 @@ def test_sub(self):
self.assert_(result.equals(exp))
def test_periods_number_check(self):
- self.assertRaises(
- ValueError, period_range, '2011-1-1', '2012-1-1', 'B')
+ self.assertRaises(ValueError, period_range, '2011-1-1', '2012-1-1',
+ 'B')
def test_to_timestamp(self):
index = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
@@ -1252,6 +1448,12 @@ def _get_with_delta(delta, freq='A-DEC'):
exp_index = _get_with_delta(delta)
self.assert_(result.index.equals(exp_index))
+ result = series.to_timestamp('U', 'end')
+ delta = timedelta(hours=23, minutes=59, seconds=59,
+ microseconds=999999)
+ exp_index = _get_with_delta(delta)
+ self.assert_(result.index.equals(exp_index))
+
self.assertRaises(ValueError, index.to_timestamp, '5t')
index = PeriodIndex(freq='H', start='1/1/2001', end='1/2/2001')
@@ -1361,6 +1563,12 @@ def _get_with_delta(delta, freq='A-DEC'):
exp_index = _get_with_delta(delta)
self.assert_(result.index.equals(exp_index))
+ result = df.to_timestamp('U', 'end')
+ delta = timedelta(hours=23, minutes=59, seconds=59,
+ microseconds=999999)
+ exp_index = _get_with_delta(delta)
+ self.assert_(result.index.equals(exp_index))
+
# columns
df = df.T
@@ -1388,6 +1596,12 @@ def _get_with_delta(delta, freq='A-DEC'):
exp_index = _get_with_delta(delta)
self.assert_(result.columns.equals(exp_index))
+ result = df.to_timestamp('U', 'end', axis=1)
+ delta = timedelta(hours=23, minutes=59, seconds=59,
+ microseconds=999999)
+ exp_index = _get_with_delta(delta)
+ self.assert_(result.columns.equals(exp_index))
+
# invalid axis
self.assertRaises(ValueError, df.to_timestamp, axis=2)
@@ -1435,6 +1649,11 @@ def test_constructor(self):
pi = PeriodIndex(freq='S', start='1/1/2001', end='1/1/2001 23:59:59')
assert_equal(len(pi), 24 * 60 * 60)
+        self.assertRaises(MemoryError, PeriodIndex, freq='U',
+ start='1/1/2001',
+ end='1/1/2001 23:59:59.999999')
+ # assert_equal(len(pi), 24 * 60 * 60 * 1000000)
+
start = Period('02-Apr-2005', 'B')
i1 = PeriodIndex(start=start, periods=20)
assert_equal(len(i1), 20)
@@ -1470,8 +1689,8 @@ def test_constructor(self):
try:
PeriodIndex(start=start)
- raise AssertionError(
- 'Must specify periods if missing start or end')
+ msg = 'Must specify periods if missing start or end'
+ raise AssertionError(msg)
except ValueError:
pass
@@ -1524,6 +1743,12 @@ def test_shift(self):
assert_equal(len(pi1), len(pi2))
assert_equal(pi1.shift(-1).values, pi2.values)
+ # fails on cpcloud's machine due to not enough memory
+ # pi1 = PeriodIndex(freq='U', start='1/1/2001', end='12/1/2009')
+ # pi2 = PeriodIndex(freq='U', start='12/31/2000', end='11/30/2009')
+ # assert_equal(len(pi1), len(pi2))
+ # assert_equal(pi1.shift(-1).values, pi2.values)
+
def test_asfreq(self):
pi1 = PeriodIndex(freq='A', start='1/1/2001', end='1/1/2001')
pi2 = PeriodIndex(freq='Q', start='1/1/2001', end='1/1/2001')
@@ -1532,6 +1757,8 @@ def test_asfreq(self):
pi5 = PeriodIndex(freq='H', start='1/1/2001', end='1/1/2001 00:00')
pi6 = PeriodIndex(freq='Min', start='1/1/2001', end='1/1/2001 00:00')
pi7 = PeriodIndex(freq='S', start='1/1/2001', end='1/1/2001 00:00:00')
+ pi8 = PeriodIndex(freq='U', start='1/1/2001',
+ end='1/1/2001 00:00:00.000000')
self.assertEquals(pi1.asfreq('Q', 'S'), pi2)
self.assertEquals(pi1.asfreq('Q', 's'), pi2)
@@ -1540,6 +1767,7 @@ def test_asfreq(self):
self.assertEquals(pi1.asfreq('H', 'beGIN'), pi5)
self.assertEquals(pi1.asfreq('Min', 'S'), pi6)
self.assertEquals(pi1.asfreq('S', 'S'), pi7)
+ self.assertEquals(pi1.asfreq('U', 'S'), pi8)
self.assertEquals(pi2.asfreq('A', 'S'), pi1)
self.assertEquals(pi2.asfreq('M', 'S'), pi3)
@@ -1547,6 +1775,7 @@ def test_asfreq(self):
self.assertEquals(pi2.asfreq('H', 'S'), pi5)
self.assertEquals(pi2.asfreq('Min', 'S'), pi6)
self.assertEquals(pi2.asfreq('S', 'S'), pi7)
+ self.assertEquals(pi2.asfreq('U', 'S'), pi8)
self.assertEquals(pi3.asfreq('A', 'S'), pi1)
self.assertEquals(pi3.asfreq('Q', 'S'), pi2)
@@ -1554,6 +1783,7 @@ def test_asfreq(self):
self.assertEquals(pi3.asfreq('H', 'S'), pi5)
self.assertEquals(pi3.asfreq('Min', 'S'), pi6)
self.assertEquals(pi3.asfreq('S', 'S'), pi7)
+ self.assertEquals(pi3.asfreq('U', 'S'), pi8)
self.assertEquals(pi4.asfreq('A', 'S'), pi1)
self.assertEquals(pi4.asfreq('Q', 'S'), pi2)
@@ -1561,6 +1791,7 @@ def test_asfreq(self):
self.assertEquals(pi4.asfreq('H', 'S'), pi5)
self.assertEquals(pi4.asfreq('Min', 'S'), pi6)
self.assertEquals(pi4.asfreq('S', 'S'), pi7)
+ self.assertEquals(pi4.asfreq('U', 'S'), pi8)
self.assertEquals(pi5.asfreq('A', 'S'), pi1)
self.assertEquals(pi5.asfreq('Q', 'S'), pi2)
@@ -1568,6 +1799,7 @@ def test_asfreq(self):
self.assertEquals(pi5.asfreq('D', 'S'), pi4)
self.assertEquals(pi5.asfreq('Min', 'S'), pi6)
self.assertEquals(pi5.asfreq('S', 'S'), pi7)
+ self.assertEquals(pi5.asfreq('U', 'S'), pi8)
self.assertEquals(pi6.asfreq('A', 'S'), pi1)
self.assertEquals(pi6.asfreq('Q', 'S'), pi2)
@@ -1575,6 +1807,7 @@ def test_asfreq(self):
self.assertEquals(pi6.asfreq('D', 'S'), pi4)
self.assertEquals(pi6.asfreq('H', 'S'), pi5)
self.assertEquals(pi6.asfreq('S', 'S'), pi7)
+ self.assertEquals(pi6.asfreq('U', 'S'), pi8)
self.assertEquals(pi7.asfreq('A', 'S'), pi1)
self.assertEquals(pi7.asfreq('Q', 'S'), pi2)
@@ -1582,6 +1815,15 @@ def test_asfreq(self):
self.assertEquals(pi7.asfreq('D', 'S'), pi4)
self.assertEquals(pi7.asfreq('H', 'S'), pi5)
self.assertEquals(pi7.asfreq('Min', 'S'), pi6)
+ self.assertEquals(pi7.asfreq('U', 'S'), pi8)
+
+ self.assertEquals(pi8.asfreq('A', 'S'), pi1)
+ self.assertEquals(pi8.asfreq('Q', 'S'), pi2)
+ self.assertEquals(pi8.asfreq('M', 'S'), pi3)
+ self.assertEquals(pi8.asfreq('D', 'S'), pi4)
+ self.assertEquals(pi8.asfreq('H', 'S'), pi5)
+ self.assertEquals(pi8.asfreq('Min', 'S'), pi6)
+ self.assertEquals(pi8.asfreq('S', 'S'), pi7)
self.assertRaises(ValueError, pi7.asfreq, 'T', 'foo')
self.assertRaises(ValueError, pi1.asfreq, '5t')
@@ -1621,11 +1863,17 @@ def test_badinput(self):
def test_negative_ordinals(self):
p = Period(ordinal=-1000, freq='A')
+ self.assertLess(p.ordinal, 0)
+ self.assertEqual(p.ordinal, -1000)
p = Period(ordinal=0, freq='A')
+ self.assertEqual(p.ordinal, 0)
idx = PeriodIndex(ordinal=[-1, 0, 1], freq='A')
+ assert_equal(idx.values, np.array([-1, 0, 1]))
+
idx = PeriodIndex(ordinal=np.array([-1, 0, 1]), freq='A')
+ assert_equal(idx.values, np.array([-1, 0, 1]))
def test_dti_to_period(self):
dti = DatetimeIndex(start='1/1/2005', end='12/1/2005', freq='M')
@@ -1747,7 +1995,7 @@ def test_joins(self):
joined = index.join(index[:-5], how=kind)
self.assert_(isinstance(joined, PeriodIndex))
- self.assert_(joined.freq == index.freq)
+ self.assertEqual(joined.freq, index.freq)
def test_align_series(self):
rng = period_range('1/1/2000', '1/1/2010', freq='A')
@@ -1840,17 +2088,21 @@ def test_fields(self):
self._check_all_fields(pi)
pi = PeriodIndex(freq='S', start='12/31/2001 00:00:00',
- end='12/31/2001 00:05:00')
+ end='12/31/2001 00:00:02')
+ self._check_all_fields(pi)
+
+ pi = PeriodIndex(freq='U', start='12/31/2001 00:00:00.000000',
+ end='12/31/2001 00:00:00.000002')
self._check_all_fields(pi)
end_intv = Period('2006-12-31', 'W')
- i1 = PeriodIndex(end=end_intv, periods=10)
+ pi = PeriodIndex(end=end_intv, periods=10)
self._check_all_fields(pi)
def _check_all_fields(self, periodindex):
fields = ['year', 'month', 'day', 'hour', 'minute',
'second', 'weekofyear', 'week', 'dayofweek',
- 'weekday', 'dayofyear', 'quarter', 'qyear']
+ 'weekday', 'dayofyear', 'quarter', 'qyear', 'microsecond']
periods = list(periodindex)
@@ -1977,13 +2229,16 @@ def test_minutely(self):
def test_secondly(self):
self._check_freq('S', '1970-01-01')
+ def test_microsecondly(self):
+ self._check_freq('U', '1970-01-01')
+
def _check_freq(self, freq, base_date):
rng = PeriodIndex(start=base_date, periods=10, freq=freq)
exp = np.arange(10, dtype=np.int64)
self.assert_(np.array_equal(rng.values, exp))
def test_negone_ordinals(self):
- freqs = ['A', 'M', 'Q', 'D', 'H', 'T', 'S']
+ freqs = ['A', 'M', 'Q', 'D', 'H', 'T', 'S', 'U']
period = Period(ordinal=-1, freq='D')
for freq in freqs:
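The ordinal tests above rely on ordinals counting whole periods from an epoch, so negative values are legal. A stdlib sketch for the daily case (the 1970-01-01 epoch matches the base date used in `_check_freq`; the helper name is illustrative):

```python
from datetime import date, timedelta

def daily_ordinal_to_date(ordinal, epoch=date(1970, 1, 1)):
    # Ordinal 0 is the epoch period itself; negative ordinals count
    # periods before it, so ordinal=-1 is the preceding day.
    return epoch + timedelta(days=ordinal)

assert daily_ordinal_to_date(0) == date(1970, 1, 1)
assert daily_ordinal_to_date(-1) == date(1969, 12, 31)
```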
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index f970ecc406c28..9bc58381346fe 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -1,15 +1,17 @@
# cython: profile=False
cimport numpy as np
-from numpy cimport (int32_t, int64_t, import_array, ndarray,
+from numpy cimport (int32_t as i4, import_array, ndarray,
NPY_INT64, NPY_DATETIME)
import numpy as np
+ctypedef int64_t i8
+
from cpython cimport *
# Cython < 0.17 doesn't have this in cpython
-cdef extern from "Python.h":
- cdef PyTypeObject *Py_TYPE(object)
+# cdef extern from "Python.h":
+ # cdef int PyObject_TypeCheck(PyObject*, PyTypeObject*)
from libc.stdlib cimport free
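The `i8`/`i4` aliases introduced here are shorthand for fixed-width signed integers; the stdlib `struct` codes illustrate the widths involved (a sketch of the sizes only, unrelated to the Cython typedefs themselves):

```python
import struct

# 'q' is an 8-byte signed integer and 'i' a 4-byte one -- the widths the
# diff names i8 (int64_t) and i4 (int32_t). '=' forces standard sizes.
assert struct.calcsize('=q') == 8
assert struct.calcsize('=i') == 4
```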
@@ -37,7 +39,7 @@ PyDateTime_IMPORT
# in numpy 1.7, will prob need the following:
# numpy_pydatetime_import
-cdef int64_t NPY_NAT = util.get_nat()
+cdef i8 NPY_NAT = util.get_nat()
try:
@@ -45,7 +47,8 @@ try:
except NameError: # py3
basestring = str
-def ints_to_pydatetime(ndarray[int64_t] arr, tz=None):
+
+cpdef ndarray[object] ints_to_pydatetime(ndarray[i8] arr, tz=None):
cdef:
Py_ssize_t i, n = len(arr)
pandas_datetimestruct dts
@@ -84,10 +87,15 @@ def ints_to_pydatetime(ndarray[int64_t] arr, tz=None):
return result
+
from dateutil.tz import tzlocal
-def _is_tzlocal(tz):
- return isinstance(tz, tzlocal)
+
+cpdef bint _is_tzlocal(object tz):
+ cdef bint isa = PyObject_TypeCheck(tz, <PyTypeObject*> tzlocal)
+ return isa
+
+
# Python front end to C extension type _Timestamp
# This serves as the box for datetime64
@@ -295,9 +303,10 @@ class NaTType(_NaT):
def toordinal(self):
return -1
-fields = ['year', 'quarter', 'month', 'day', 'hour',
- 'minute', 'second', 'microsecond', 'nanosecond',
- 'week', 'dayofyear']
+cdef tuple fields = ('year', 'quarter', 'month', 'day', 'hour',
+ 'minute', 'second', 'microsecond', 'nanosecond',
+ 'week', 'dayofyear')
+
for field in fields:
prop = property(fget=lambda self: -1)
setattr(NaTType, field, prop)
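The field loop attaches a constant property to every datetime accessor on NaTType. The same pattern in plain Python (the class name is hypothetical):

```python
class NaTLike(object):
    """Stand-in illustrating the property loop above."""

# Each datetime field becomes a read-only property returning -1, so code
# that blindly reads .year, .month, etc. gets a sentinel instead of raising.
for field in ('year', 'quarter', 'month', 'day', 'hour'):
    setattr(NaTLike, field, property(lambda self: -1))

assert NaTLike().year == -1
assert NaTLike().hour == -1
```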
@@ -307,19 +316,24 @@ NaT = NaTType()
iNaT = util.get_nat()
-cdef _tz_format(object obj, object zone):
+
+cdef object _tz_format(object obj, object zone):
try:
return obj.strftime(' %%Z, tz=%s' % zone)
except:
return ', tz=%s' % zone
-def is_timestamp_array(ndarray[object] values):
+
+cpdef bint is_timestamp_array(ndarray[object] values):
cdef int i, n = len(values)
- if n == 0:
+
+ if not n:
return False
+
for i in range(n):
if not is_timestamp(values[i]):
return False
+
return True
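The early `if not n: return False` plus the per-element loop amounts to: empty arrays fail, and every element must pass the type check. A generic sketch (the function name is illustrative):

```python
def is_all_instances(values, typ):
    # Empty input is False, mirroring is_timestamp_array; otherwise the
    # whole sequence passes only if every element does.
    if not len(values):
        return False
    return all(isinstance(v, typ) for v in values)

assert not is_all_instances([], int)
assert is_all_instances([1, 2, 3], int)
assert not is_all_instances([1, 'a'], int)
```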
@@ -327,10 +341,13 @@ cpdef object get_value_box(ndarray arr, object loc):
cdef:
Py_ssize_t i, sz
void* data_ptr
+
if util.is_float_object(loc):
casted = int(loc)
+
if casted == loc:
loc = casted
+
i = <Py_ssize_t> loc
sz = np.PyArray_SIZE(arr)
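`get_value_box` quietly accepts a float location like `3.0`, but only when it is integral. The coercion step in isolation (helper name illustrative):

```python
def normalize_loc(loc):
    # A float that round-trips through int unchanged is treated as an
    # integer index; anything else is returned as-is for later checks.
    if isinstance(loc, float):
        casted = int(loc)
        if casted == loc:
            return casted
    return loc

assert normalize_loc(3.0) == 3 and isinstance(normalize_loc(3.0), int)
assert normalize_loc(3.5) == 3.5
```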
@@ -348,23 +365,27 @@ cpdef object get_value_box(ndarray arr, object loc):
#----------------------------------------------------------------------
# Frequency inference
-def unique_deltas(ndarray[int64_t] arr):
+cpdef ndarray[i8] unique_deltas(ndarray[i8] arr):
cdef:
Py_ssize_t i, n = len(arr)
- int64_t val
+ i8 val
khiter_t k
kh_int64_t *table
int ret = 0
list uniques = []
+ ndarray[i8] result
table = kh_init_int64()
kh_resize_int64(table, 10)
+
for i in range(n - 1):
val = arr[i + 1] - arr[i]
k = kh_get_int64(table, val)
+
if k == table.n_buckets:
kh_put_int64(table, val, &ret)
uniques.append(val)
+
kh_destroy_int64(table)
result = np.array(uniques, dtype=np.int64)
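`unique_deltas` collects the distinct first differences in order of first appearance, using a khash table for speed. A pure-Python equivalent of the logic (a set stands in for the hash table):

```python
def unique_deltas(arr):
    # Distinct consecutive differences, preserving first-seen order --
    # the same values the khash-based Cython loop accumulates.
    seen, out = set(), []
    for prev, cur in zip(arr, arr[1:]):
        d = cur - prev
        if d not in seen:
            seen.add(d)
            out.append(d)
    return out

assert unique_deltas([0, 1, 2, 4, 6, 7]) == [1, 2]
```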
@@ -372,14 +393,14 @@ def unique_deltas(ndarray[int64_t] arr):
return result
-cdef inline bint _is_multiple(int64_t us, int64_t mult):
+cdef inline bint _is_multiple(i8 us, i8 mult):
return us % mult == 0
def apply_offset(ndarray[object] values, object offset):
cdef:
Py_ssize_t i, n = len(values)
- ndarray[int64_t] new_values
+ ndarray[i8] new_values
object boxed
result = np.empty(n, dtype='M8[ns]')
@@ -393,7 +414,7 @@ def apply_offset(ndarray[object] values, object offset):
# shadows the python class, where we do any heavy lifting.
cdef class _Timestamp(datetime):
cdef readonly:
- int64_t value, nanosecond
+ i8 value, nanosecond
object offset # frequency reference
def __hash__(self):
@@ -523,7 +544,8 @@ cdef PyTypeObject* ts_type = <PyTypeObject*> Timestamp
cdef inline bint is_timestamp(object o):
- return Py_TYPE(o) == ts_type # isinstance(o, Timestamp)
+ cdef bint isa = PyObject_TypeCheck(o, ts_type)
+ return isa
cdef class _NaT(_Timestamp):
@@ -546,9 +568,7 @@ cdef class _NaT(_Timestamp):
return False
-
-
-def _delta_to_nanoseconds(delta):
+cpdef i8 _delta_to_nanoseconds(delta):
try:
delta = delta.delta
except:
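`_delta_to_nanoseconds` unwraps an offset's `.delta` when present, then scales a timedelta to integer nanoseconds. The arithmetic for the plain-timedelta case (a sketch; the `.delta` unwrapping is omitted here):

```python
from datetime import timedelta

def delta_to_nanoseconds(delta):
    # days -> microseconds, plus seconds and microseconds, then x1000
    # to go from microsecond to nanosecond resolution.
    return (delta.days * 24 * 60 * 60 * 10**6
            + delta.seconds * 10**6
            + delta.microseconds) * 1000

assert delta_to_nanoseconds(timedelta(seconds=1)) == 10**9
assert delta_to_nanoseconds(timedelta(days=1)) == 86400 * 10**9
```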
@@ -562,19 +582,21 @@ def _delta_to_nanoseconds(delta):
cdef class _TSObject:
cdef:
pandas_datetimestruct dts # pandas_datetimestruct
- int64_t value # numpy dt64
+ i8 value # numpy dt64
object tzinfo
property value:
def __get__(self):
return self.value
-cpdef _get_utcoffset(tzinfo, obj):
+
+cpdef object _get_utcoffset(object tzinfo, object obj):
try:
return tzinfo._utcoffset
except AttributeError:
return tzinfo.utcoffset(obj)
+
# helper to extract datetime and int64 from several different possibilities
cdef convert_to_tsobject(object ts, object tz):
"""
@@ -660,6 +682,7 @@ cdef convert_to_tsobject(object ts, object tz):
return obj
+
cdef inline void _localize_tso(_TSObject obj, object tz):
if _is_utc(tz):
obj.tzinfo = tz
@@ -689,28 +712,27 @@ cdef inline void _localize_tso(_TSObject obj, object tz):
obj.tzinfo = tz._tzinfos[inf]
-def get_timezone(tz):
+cpdef object get_timezone(object tz):
return _get_zone(tz)
+
cdef inline bint _is_utc(object tz):
return tz is UTC or isinstance(tz, _du_utc)
+
cdef inline object _get_zone(object tz):
if _is_utc(tz):
return 'UTC'
else:
try:
- zone = tz.zone
- if zone is None:
- return tz
- return zone
+ return tz.zone
except AttributeError:
return tz
-# cdef int64_t _NS_LOWER_BOUND = -9223285636854775809LL
-# cdef int64_t _NS_UPPER_BOUND = -9223372036854775807LL
+# cdef i8 _NS_LOWER_BOUND = -9223285636854775809LL
+# cdef i8 _NS_UPPER_BOUND = -9223372036854775807LL
-cdef inline _check_dts_bounds(int64_t value, pandas_datetimestruct *dts):
+cdef inline _check_dts_bounds(i8 value, pandas_datetimestruct *dts):
cdef pandas_datetimestruct dts2
if dts.year <= 1677 or dts.year >= 2262:
pandas_datetime_to_datetimestruct(value, PANDAS_FR_ns, &dts2)
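The year cutoffs in `_check_dts_bounds` come straight from int64 nanosecond arithmetic: a signed 64-bit count of nanoseconds spans roughly ±292 years around the 1970 epoch, i.e. about 1677 to 2262. A back-of-the-envelope check:

```python
# Half the signed 64-bit range, expressed in (Julian) years of nanoseconds.
half_span_years = (2**63 - 1) / (365.25 * 24 * 3600 * 10**9)
assert 292 < half_span_years < 293  # 1970 - 292 ~ 1678, 1970 + 292 ~ 2262
```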
@@ -721,27 +743,19 @@ cdef inline _check_dts_bounds(int64_t value, pandas_datetimestruct *dts):
raise ValueError('Out of bounds nanosecond timestamp: %s' % fmt)
-# elif isinstance(ts, _Timestamp):
-# tmp = ts
-# obj.value = (<_Timestamp> ts).value
-# obj.dtval =
-# elif isinstance(ts, object):
-# # If all else fails
-# obj.value = _dtlike_to_datetime64(ts, &obj.dts)
-# obj.dtval = _dts_to_pydatetime(&obj.dts)
-def datetime_to_datetime64(ndarray[object] values):
+cpdef tuple datetime_to_datetime64(ndarray[object] values):
cdef:
Py_ssize_t i, n = len(values)
object val, inferred_tz = None
- ndarray[int64_t] iresult
pandas_datetimestruct dts
_TSObject _ts
+ ndarray result = np.empty(n, dtype='M8[ns]')
+ ndarray[i8] iresult = result.view('i8')
- result = np.empty(n, dtype='M8[ns]')
- iresult = result.view('i8')
for i in range(n):
val = values[i]
+
if util._checknull(val):
iresult[i] = iNaT
elif PyDateTime_Check(val):
@@ -766,12 +780,12 @@ def datetime_to_datetime64(ndarray[object] values):
return result, inferred_tz
-def array_to_datetime(ndarray[object] values, raise_=False, dayfirst=False,
- format=None, utc=None):
+cpdef ndarray array_to_datetime(ndarray[object] values, raise_=False,
+ dayfirst=False, format=None, utc=None):
cdef:
Py_ssize_t i, n = len(values)
object val
- ndarray[int64_t] iresult
+ ndarray[i8] iresult
ndarray[object] oresult
pandas_datetimestruct dts
bint utc_convert = bool(utc)
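`datetime_to_datetime64` above maps nulls to the iNaT sentinel and datetimes to nanoseconds since the UTC epoch. A stdlib sketch of that per-element rule (naive datetimes assumed UTC; the names, and the int64-minimum sentinel value, are assumptions mirroring `util.get_nat()`):

```python
from datetime import datetime, timezone

INAT = -2**63  # assumed NaT sentinel: the int64 minimum
EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def to_i8_array(values):
    out = []
    for v in values:
        if v is None:                # null -> sentinel
            out.append(INAT)
            continue
        if v.tzinfo is None:         # treat naive input as UTC
            v = v.replace(tzinfo=timezone.utc)
        d = v - EPOCH                # exact integer nanoseconds, no float step
        out.append(((d.days * 86400 + d.seconds) * 10**6 + d.microseconds) * 1000)
    return out

assert to_i8_array([None, datetime(1970, 1, 1, 0, 0, 1)]) == [INAT, 10**9]
```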
@@ -849,6 +863,7 @@ def array_to_datetime(ndarray[object] values, raise_=False, dayfirst=False,
return oresult
+
cdef inline _get_datetime64_nanos(object val):
cdef:
pandas_datetimestruct dts
@@ -868,17 +883,16 @@ cdef inline _get_datetime64_nanos(object val):
return ival
-def cast_to_nanoseconds(ndarray arr):
+cpdef ndarray[i8] cast_to_nanoseconds(ndarray arr):
cdef:
Py_ssize_t i, n = arr.size
- ndarray[int64_t] ivalues, iresult
+ ndarray[i8] ivalues, iresult
+ ndarray result
PANDAS_DATETIMEUNIT unit
pandas_datetimestruct dts
-
- shape = (<object> arr).shape
+ tuple shape = (<object> arr).shape
ivalues = arr.view(np.int64).ravel()
-
result = np.empty(shape, dtype='M8[ns]')
iresult = result.ravel().view(np.int64)
@@ -896,23 +910,24 @@ def cast_to_nanoseconds(ndarray arr):
# Conversion routines
-def pydt_to_i8(object pydt):
+cpdef i8 pydt_to_i8(object pydt):
'''
Convert to int64 representation compatible with numpy datetime64; converts
to UTC
'''
cdef:
- _TSObject ts
+ _TSObject ts = convert_to_tsobject(pydt, None)
+ i8 value = ts.value
+
+ return value
- ts = convert_to_tsobject(pydt, None)
- return ts.value
-def i8_to_pydt(int64_t i8, object tzinfo = None):
+def i8_to_pydt(i8 i, object tzinfo = None):
'''
Inverse of pydt_to_i8
'''
- return Timestamp(i8)
+ return Timestamp(i)
#----------------------------------------------------------------------
# time zone conversion helpers
@@ -925,11 +940,12 @@ try:
except:
have_pytz = False
-def tz_convert(ndarray[int64_t] vals, object tz1, object tz2):
+
+cpdef ndarray[i8] tz_convert(ndarray[i8] vals, object tz1, object tz2):
cdef:
- ndarray[int64_t] utc_dates, result, trans, deltas
+ ndarray[i8] utc_dates, result, trans, deltas
Py_ssize_t i, pos, n = len(vals)
- int64_t v, offset
+ i8 v, offset
pandas_datetimestruct dts
if not have_pytz:
@@ -1000,11 +1016,12 @@ def tz_convert(ndarray[int64_t] vals, object tz1, object tz2):
return result
-def tz_convert_single(int64_t val, object tz1, object tz2):
+
+cpdef i8 tz_convert_single(i8 val, object tz1, object tz2):
cdef:
- ndarray[int64_t] trans, deltas
+ ndarray[i8] trans, deltas
Py_ssize_t pos
- int64_t v, offset, utc_date
+ i8 v, offset, utc_date
pandas_datetimestruct dts
if not have_pytz:
@@ -1050,6 +1067,7 @@ def tz_convert_single(int64_t val, object tz1, object tz2):
trans_cache = {}
utc_offset_cache = {}
+
def _get_transitions(tz):
"""
Get UTC times of DST transitions
@@ -1074,6 +1092,7 @@ def _get_transitions(tz):
trans_cache[tz] = arr
return trans_cache[tz]
+
def _get_deltas(tz):
"""
Get UTC offsets in microseconds corresponding to DST transitions
@@ -1095,30 +1114,31 @@ def _get_deltas(tz):
return utc_offset_cache[tz]
-cdef double total_seconds(object td): # Python 2.6 compat
+
+cdef i8 total_seconds(object td): # Python 2.6 compat
return ((td.microseconds + (td.seconds + td.days * 24 * 3600) * 10**6) //
10**6)
-def tot_seconds(td):
+
+cpdef i8 tot_seconds(object td):
return total_seconds(td)
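Note the return-type change on `total_seconds`: declared as `i8` with `//`, the compat shim now floors to whole seconds, unlike the float `timedelta.total_seconds()` added in Python 2.7. The formula in plain Python:

```python
from datetime import timedelta

def total_seconds(td):
    # Python 2.6 compat formula from the diff; '//' floors, so any
    # microsecond remainder is discarded.
    return (td.microseconds + (td.seconds + td.days * 24 * 3600) * 10**6) // 10**6

assert total_seconds(timedelta(days=1)) == 86400
assert total_seconds(timedelta(seconds=1, microseconds=999999)) == 1  # floored
```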
-cpdef ndarray _unbox_utcoffsets(object transinfo):
- cdef:
- Py_ssize_t i, sz
- ndarray[int64_t] arr
- sz = len(transinfo)
- arr = np.empty(sz, dtype='i8')
+cpdef ndarray[i8] _unbox_utcoffsets(object transinfo):
+ cdef:
+ Py_ssize_t i, sz = len(transinfo)
+ ndarray[i8] arr = np.empty(sz, dtype='i8')
+ i8 billion = 1000000000
for i in range(sz):
- arr[i] = int(total_seconds(transinfo[i][0])) * 1000000000
+ arr[i] = total_seconds(transinfo[i][0]) * billion
return arr
@cython.boundscheck(False)
@cython.wraparound(False)
-def tz_localize_to_utc(ndarray[int64_t] vals, object tz):
+cpdef ndarray[i8] tz_localize_to_utc(ndarray[i8] vals, object tz):
"""
Localize tzinfo-naive DateRange to given time zone (using pytz). If
there are ambiguities in the values, raise AmbiguousTimeError.
@@ -1128,11 +1148,11 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz):
localized : DatetimeIndex
"""
cdef:
- ndarray[int64_t] trans, deltas, idx_shifted
+ ndarray[i8] trans, deltas, idx_shifted
Py_ssize_t i, idx, pos, ntrans, n = len(vals)
- int64_t *tdata
- int64_t v, left, right
- ndarray[int64_t] result, result_a, result_b
+ i8 *tdata
+ i8 v, left, right
+ ndarray[i8] result, result_a, result_b
pandas_datetimestruct dts
# Vectorized version of DstTzInfo.localize
@@ -1158,7 +1178,7 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz):
trans = _get_transitions(tz) # transition dates
deltas = _get_deltas(tz) # utc offsets
- tdata = <int64_t*> trans.data
+ tdata = <i8*> trans.data
ntrans = len(trans)
result_a = np.empty(n, dtype=np.int64)
@@ -1209,7 +1229,8 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz):
return result
-cdef _ensure_int64(object arr):
+
+cdef ndarray[i8] _ensure_int64(object arr):
if util.is_array(arr):
if (<ndarray> arr).descr.type_num == NPY_INT64:
return arr
@@ -1219,7 +1240,7 @@ cdef _ensure_int64(object arr):
return np.array(arr, dtype=np.int64)
-cdef inline bisect_right_i8(int64_t *data, int64_t val, Py_ssize_t n):
+cdef inline Py_ssize_t bisect_right_i8(i8 *data, i8 val, Py_ssize_t n):
cdef Py_ssize_t pivot, left = 0, right = n
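`bisect_right_i8` is a hand-rolled right bisection over the raw int64 transition array; the stdlib `bisect` module shows the intended semantics (insertion point to the right of equal values):

```python
import bisect

trans = [10, 20, 30, 40]  # sorted transition instants, as i8 values

assert bisect.bisect_right(trans, 20) == 2  # equal value: go right
assert bisect.bisect_right(trans, 25) == 2  # falls between 20 and 30
assert bisect.bisect_right(trans, 5) == 0   # before the first transition
```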
# edge cases
@@ -1242,8 +1263,7 @@ cdef inline bisect_right_i8(int64_t *data, int64_t val, Py_ssize_t n):
# Accessors
#----------------------------------------------------------------------
-
-def build_field_sarray(ndarray[int64_t] dtindex):
+def build_field_sarray(ndarray[i8] dtindex):
'''
Datetime as int64 representation to a structured array of fields
'''
@@ -1251,7 +1271,7 @@ def build_field_sarray(ndarray[int64_t] dtindex):
Py_ssize_t i, count = 0
int isleap
pandas_datetimestruct dts
- ndarray[int32_t] years, months, days, hours, minutes, seconds, mus
+ ndarray[i4] years, months, days, hours, minutes, seconds, mus
count = len(dtindex)
@@ -1285,26 +1305,29 @@ def build_field_sarray(ndarray[int64_t] dtindex):
return out
-def get_time_micros(ndarray[int64_t] dtindex):
+
+cpdef ndarray[i8] get_time_micros(ndarray[i8] dtindex):
'''
Datetime as int64 representation to a structured array of fields
'''
cdef:
Py_ssize_t i, n = len(dtindex)
pandas_datetimestruct dts
- ndarray[int64_t] micros
-
- micros = np.empty(n, dtype=np.int64)
+ ndarray[i8] micros = np.empty(n, dtype=np.int64)
+ i8 secs_per_min = 60
+ i8 secs_per_hour = secs_per_min * secs_per_min
+ i8 us_per_sec = 1000000
for i in range(n):
pandas_datetime_to_datetimestruct(dtindex[i], PANDAS_FR_ns, &dts)
- micros[i] = 1000000LL * (dts.hour * 60 * 60 +
- 60 * dts.min + dts.sec) + dts.us
+ micros[i] = us_per_sec * (dts.hour * secs_per_hour + secs_per_min *
+ dts.min + dts.sec) + dts.us
return micros
+
@cython.wraparound(False)
-def get_date_field(ndarray[int64_t] dtindex, object field):
+def get_date_field(ndarray[i8] dtindex, object field):
'''
Given a int64-based datetime index, extract the year, month, etc.,
field and return an array of these values.
@@ -1312,8 +1335,8 @@ def get_date_field(ndarray[int64_t] dtindex, object field):
cdef:
_TSObject ts
Py_ssize_t i, count = 0
- ndarray[int32_t] out
- ndarray[int32_t, ndim=2] _month_offset
+ ndarray[i8] out
+ ndarray[i4, ndim=2] _month_offset
int isleap
pandas_datetimestruct dts
@@ -1323,11 +1346,13 @@ def get_date_field(ndarray[int64_t] dtindex, object field):
dtype=np.int32 )
count = len(dtindex)
- out = np.empty(count, dtype='i4')
+ out = np.empty(count, dtype='i8')
if field == 'Y':
for i in range(count):
- if dtindex[i] == NPY_NAT: out[i] = -1; continue
+ if dtindex[i] == NPY_NAT:
+ out[i] = -1
+ continue
pandas_datetime_to_datetimestruct(dtindex[i], PANDAS_FR_ns, &dts)
out[i] = dts.year
@@ -1426,19 +1451,20 @@ def get_date_field(ndarray[int64_t] dtindex, object field):
raise ValueError("Field %s not supported" % field)
-cdef inline int m8_weekday(int64_t val):
+cdef inline int m8_weekday(i8 val):
ts = convert_to_tsobject(val, None)
return ts_dayofweek(ts)
-cdef int64_t DAY_NS = 86400000000000LL
+cdef i8 DAY_NS = 86400000000000LL
-def date_normalize(ndarray[int64_t] stamps, tz=None):
+
+cpdef ndarray[i8] date_normalize(ndarray[i8] stamps, tz=None):
cdef:
Py_ssize_t i, n = len(stamps)
pandas_datetimestruct dts
_TSObject tso
- ndarray[int64_t] result = np.empty(n, dtype=np.int64)
+ ndarray[i8] result = np.empty(n, dtype=np.int64)
if tz is not None:
tso = _TSObject()
@@ -1455,11 +1481,12 @@ def date_normalize(ndarray[int64_t] stamps, tz=None):
return result
-cdef _normalize_local(ndarray[int64_t] stamps, object tz):
+
+cdef ndarray[i8] _normalize_local(ndarray[i8] stamps, object tz):
cdef:
Py_ssize_t n = len(stamps)
- ndarray[int64_t] result = np.empty(n, dtype=np.int64)
- ndarray[int64_t] trans, deltas, pos
+ ndarray[i8] result = np.empty(n, dtype=np.int64)
+ ndarray[i8] trans, deltas, pos
pandas_datetimestruct dts
if _is_utc(tz):
@@ -1511,7 +1538,8 @@ cdef _normalize_local(ndarray[int64_t] stamps, object tz):
return result
-cdef inline int64_t _normalized_stamp(pandas_datetimestruct *dts):
+
+cdef inline i8 _normalized_stamp(pandas_datetimestruct *dts):
dts.hour = 0
dts.min = 0
dts.sec = 0
@@ -1519,7 +1547,7 @@ cdef inline int64_t _normalized_stamp(pandas_datetimestruct *dts):
return pandas_datetimestruct_to_datetime(PANDAS_FR_ns, dts)
-cdef inline void m8_populate_tsobject(int64_t stamp, _TSObject tso, object tz):
+cdef inline void m8_populate_tsobject(i8 stamp, _TSObject tso, object tz):
tso.value = stamp
pandas_datetime_to_datetimestruct(tso.value, PANDAS_FR_ns, &tso.dts)
@@ -1527,7 +1555,7 @@ cdef inline void m8_populate_tsobject(int64_t stamp, _TSObject tso, object tz):
_localize_tso(tso, tz)
-def dates_normalized(ndarray[int64_t] stamps, tz=None):
+cpdef bint dates_normalized(ndarray[i8] stamps, tz=None):
cdef:
Py_ssize_t i, n = len(stamps)
pandas_datetimestruct dts
@@ -1565,26 +1593,29 @@ def dates_normalized(ndarray[int64_t] stamps, tz=None):
# Some general helper functions
#----------------------------------------------------------------------
-def isleapyear(int64_t year):
+cpdef bint isleapyear(i8 year):
return is_leapyear(year)
-def monthrange(int64_t year, int64_t month):
+
+cpdef tuple monthrange(i8 year, i8 month):
cdef:
- int64_t days
- int64_t day_of_week
+ i8 days
+ i8 day_of_week
if month < 1 or month > 12:
raise ValueError("bad month number 0; must be 1-12")
- days = days_per_month_table[is_leapyear(year)][month-1]
+ days = days_per_month_table[is_leapyear(year)][month - 1]
+
+ return dayofweek(year, month, 1), days
- return (dayofweek(year, month, 1), days)
-cdef inline int64_t ts_dayofweek(_TSObject ts):
- return dayofweek(ts.dts.year, ts.dts.month, ts.dts.day)
+cdef inline i8 ts_dayofweek(_TSObject ts):
+ cdef i8 result = dayofweek(ts.dts.year, ts.dts.month, ts.dts.day)
+ return result
-cpdef normalize_date(object dt):
+cpdef object normalize_date(object dt):
'''
Normalize datetime.datetime value to midnight. Returns datetime.date as a
datetime.datetime at midnight
@@ -1600,12 +1631,13 @@ cpdef normalize_date(object dt):
else:
raise TypeError('Unrecognized type: %s' % type(dt))
-cdef ndarray[int64_t] localize_dt64arr_to_period(ndarray[int64_t] stamps,
- int freq, object tz):
+
+cdef ndarray[i8] localize_dt64arr_to_period(ndarray[i8] stamps, int freq,
+ object tz):
cdef:
Py_ssize_t n = len(stamps)
- ndarray[int64_t] result = np.empty(n, dtype=np.int64)
- ndarray[int64_t] trans, deltas, pos
+ ndarray[i8] result = np.empty(n, dtype=np.int64)
+ ndarray[i8] trans, deltas, pos
pandas_datetimestruct dts
if not have_pytz:
@@ -1618,7 +1650,8 @@ cdef ndarray[int64_t] localize_dt64arr_to_period(ndarray[int64_t] stamps,
continue
pandas_datetime_to_datetimestruct(stamps[i], PANDAS_FR_ns, &dts)
result[i] = get_period_ordinal(dts.year, dts.month, dts.day,
- dts.hour, dts.min, dts.sec, freq)
+ dts.hour, dts.min, dts.sec, dts.us,
+ freq)
elif _is_tzlocal(tz):
for i in range(n):
@@ -1633,7 +1666,8 @@ cdef ndarray[int64_t] localize_dt64arr_to_period(ndarray[int64_t] stamps,
pandas_datetime_to_datetimestruct(stamps[i] + delta,
PANDAS_FR_ns, &dts)
result[i] = get_period_ordinal(dts.year, dts.month, dts.day,
- dts.hour, dts.min, dts.sec, freq)
+ dts.hour, dts.min, dts.sec, dts.us,
+ freq)
else:
# Adjust datetime64 timestamp, recompute datetimestruct
trans = _get_transitions(tz)
@@ -1652,7 +1686,9 @@ cdef ndarray[int64_t] localize_dt64arr_to_period(ndarray[int64_t] stamps,
pandas_datetime_to_datetimestruct(stamps[i] + deltas[0],
PANDAS_FR_ns, &dts)
result[i] = get_period_ordinal(dts.year, dts.month, dts.day,
- dts.hour, dts.min, dts.sec, freq)
+ dts.hour, dts.min, dts.sec,
+ dts.us,
+ freq)
else:
for i in range(n):
if stamps[i] == NPY_NAT:
@@ -1661,70 +1697,76 @@ cdef ndarray[int64_t] localize_dt64arr_to_period(ndarray[int64_t] stamps,
pandas_datetime_to_datetimestruct(stamps[i] + deltas[pos[i]],
PANDAS_FR_ns, &dts)
result[i] = get_period_ordinal(dts.year, dts.month, dts.day,
- dts.hour, dts.min, dts.sec, freq)
+ dts.hour, dts.min, dts.sec,
+ dts.us, freq)
return result
cdef extern from "period.h":
ctypedef struct date_info:
- int64_t absdate
- double abstime
- double second
- int minute
- int hour
- int day
- int month
- int quarter
- int year
- int day_of_week
- int day_of_year
- int calendar
+ i8 absdate
+ i8 abstime
+
+ i8 attosecond
+ i8 femtosecond
+ i8 picosecond
+ i8 nanosecond
+ i8 microsecond
+ i8 second
+ i8 minute
+ i8 hour
+ i8 day
+ i8 month
+ i8 quarter
+ i8 year
+ i8 day_of_week
+ i8 day_of_year
+ i8 calendar
ctypedef struct asfreq_info:
- int from_week_end
- int to_week_end
-
- int from_a_year_end
- int to_a_year_end
-
- int from_q_year_end
- int to_q_year_end
-
- ctypedef int64_t (*freq_conv_func)(int64_t, char, asfreq_info*)
-
- int64_t asfreq(int64_t dtordinal, int freq1, int freq2, char relation) except INT32_MIN
- freq_conv_func get_asfreq_func(int fromFreq, int toFreq)
- void get_asfreq_info(int fromFreq, int toFreq, asfreq_info *af_info)
-
- int64_t get_period_ordinal(int year, int month, int day,
- int hour, int minute, int second,
- int freq) except INT32_MIN
-
- int64_t get_python_ordinal(int64_t period_ordinal, int freq) except INT32_MIN
-
- int get_date_info(int64_t ordinal, int freq, date_info *dinfo) except INT32_MIN
- double getAbsTime(int, int64_t, int64_t)
-
- int pyear(int64_t ordinal, int freq) except INT32_MIN
- int pqyear(int64_t ordinal, int freq) except INT32_MIN
- int pquarter(int64_t ordinal, int freq) except INT32_MIN
- int pmonth(int64_t ordinal, int freq) except INT32_MIN
- int pday(int64_t ordinal, int freq) except INT32_MIN
- int pweekday(int64_t ordinal, int freq) except INT32_MIN
- int pday_of_week(int64_t ordinal, int freq) except INT32_MIN
- int pday_of_year(int64_t ordinal, int freq) except INT32_MIN
- int pweek(int64_t ordinal, int freq) except INT32_MIN
- int phour(int64_t ordinal, int freq) except INT32_MIN
- int pminute(int64_t ordinal, int freq) except INT32_MIN
- int psecond(int64_t ordinal, int freq) except INT32_MIN
+ i8 from_week_end
+ i8 to_week_end
+
+ i8 from_a_year_end
+ i8 to_a_year_end
+
+ i8 from_q_year_end
+ i8 to_q_year_end
+
+ ctypedef i8 (*freq_conv_func)(i8, char*, asfreq_info*)
+
+ i8 asfreq(i8 ordinal, i8 freq1, i8 freq2, char* relation) except INT64_MIN
+ freq_conv_func get_asfreq_func(i8 fromFreq, i8 toFreq)
+ void get_asfreq_info(i8 fromFreq, i8 toFreq, asfreq_info *af_info)
+
+ i8 get_period_ordinal(i8 year, i8 month, i8 day, i8 hour, i8 minute,
+ i8 second, i8 microsecond, i8 freq) except INT64_MIN
+
+ i8 get_python_ordinal(i8 period_ordinal, i8 freq) except INT64_MIN
+
+ i8 get_date_info(i8 ordinal, i8 freq, date_info *dinfo) except INT64_MIN
+
+ i8 pyear(i8 ordinal, i8 freq) except INT64_MIN
+ i8 pqyear(i8 ordinal, i8 freq) except INT64_MIN
+ i8 pquarter(i8 ordinal, i8 freq) except INT64_MIN
+ i8 pmonth(i8 ordinal, i8 freq) except INT64_MIN
+ i8 pday(i8 ordinal, i8 freq) except INT64_MIN
+ i8 pweekday(i8 ordinal, i8 freq) except INT64_MIN
+ i8 pday_of_week(i8 ordinal, i8 freq) except INT64_MIN
+ i8 pday_of_year(i8 ordinal, i8 freq) except INT64_MIN
+ i8 pweek(i8 ordinal, i8 freq) except INT64_MIN
+ i8 phour(i8 ordinal, i8 freq) except INT64_MIN
+ i8 pminute(i8 ordinal, i8 freq) except INT64_MIN
+ i8 psecond(i8 ordinal, i8 freq) except INT64_MIN
+ i8 pmicrosecond(i8 ordinal, i8 freq) except INT64_MIN
char *c_strftime(date_info *dinfo, char *fmt)
- int get_yq(int64_t ordinal, int freq, int *quarter, int *year)
+ i8 get_yq(i8 ordinal, i8 freq, i8 *quarter, i8 *year)
# Period logic
#----------------------------------------------------------------------
-cdef inline int64_t apply_mult(int64_t period_ord, int64_t mult):
+cdef inline i8 apply_mult(i8 period_ord, i8 mult):
"""
Get freq+multiple ordinal value from corresponding freq-only ordinal value.
For example, 5min ordinal will be 1/5th the 1min ordinal (rounding down to
@@ -1735,7 +1777,8 @@ cdef inline int64_t apply_mult(int64_t period_ord, int64_t mult):
return (period_ord - 1) // mult
-cdef inline int64_t remove_mult(int64_t period_ord_w_mult, int64_t mult):
+
+cdef inline i8 remove_mult(i8 period_ord_w_mult, i8 mult):
"""
Get freq-only ordinal value from corresponding freq+multiple ordinal.
"""
@@ -1744,13 +1787,15 @@ cdef inline int64_t remove_mult(int64_t period_ord_w_mult, int64_t mult):
return period_ord_w_mult * mult + 1;
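In pure Python the two helpers above boil down to a pair of inverse maps (a sketch of the formulas shown, not the pandas API):

```python
def apply_mult(period_ord, mult):
    # freq-only ordinal -> freq+multiple ordinal, per the docstring above
    return (period_ord - 1) // mult

def remove_mult(period_ord_w_mult, mult):
    # freq+multiple ordinal -> freq-only ordinal (inverse of apply_mult)
    return period_ord_w_mult * mult + 1
```

The round trip `apply_mult(remove_mult(k, mult), mult) == k` holds for any integer `k`, which is what makes the pair safe for relabeling ordinals between a base frequency and its multiples.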
-def dt64arr_to_periodarr(ndarray[int64_t] dtarr, int freq, tz=None):
+
+cpdef ndarray[i8] dt64arr_to_periodarr(ndarray[i8] dtarr, int freq,
+ object tz=None):
"""
Convert array of datetime64 values (passed in as 'i8' dtype) to a set of
periods corresponding to desired frequency, per period convention.
"""
cdef:
- ndarray[int64_t] out
+ ndarray[i8] out
Py_ssize_t i, l
pandas_datetimestruct dts
@@ -1762,63 +1807,54 @@ def dt64arr_to_periodarr(ndarray[int64_t] dtarr, int freq, tz=None):
for i in range(l):
pandas_datetime_to_datetimestruct(dtarr[i], PANDAS_FR_ns, &dts)
out[i] = get_period_ordinal(dts.year, dts.month, dts.day,
- dts.hour, dts.min, dts.sec, freq)
+ dts.hour, dts.min, dts.sec, dts.us,
+ freq)
else:
out = localize_dt64arr_to_period(dtarr, freq, tz)
return out
-def periodarr_to_dt64arr(ndarray[int64_t] periodarr, int freq):
+
+cpdef ndarray[i8] periodarr_to_dt64arr(ndarray[i8] periodarr, int freq):
"""
Convert array to datetime64 values from a set of ordinals corresponding to
periods per period convention.
"""
cdef:
- ndarray[int64_t] out
- Py_ssize_t i, l
-
- l = len(periodarr)
-
- out = np.empty(l, dtype='i8')
+ Py_ssize_t i, l = len(periodarr)
+ ndarray[i8] out = np.empty(l, dtype='i8')
for i in range(l):
out[i] = period_ordinal_to_dt64(periodarr[i], freq)
return out
-cdef char START = 'S'
-cdef char END = 'E'
-cpdef int64_t period_asfreq(int64_t period_ordinal, int freq1, int freq2,
- bint end):
+cpdef i8 period_asfreq(i8 period_ordinal, int freq1, int freq2,
+ char* relation):
"""
Convert period ordinal from one frequency to another, and if upsampling,
choose to use start ('S') or end ('E') of period.
"""
- cdef:
- int64_t retval
+ cdef i8 retval = asfreq(period_ordinal, freq1, freq2, relation)
- if end:
- retval = asfreq(period_ordinal, freq1, freq2, END)
- else:
- retval = asfreq(period_ordinal, freq1, freq2, START)
-
- if retval == INT32_MIN:
+ if retval == INT64_MIN:
raise ValueError('Frequency conversion failed')
return retval
-def period_asfreq_arr(ndarray[int64_t] arr, int freq1, int freq2, bint end):
+
+cpdef ndarray[i8] period_asfreq_arr(ndarray[i8] arr, int freq1, int freq2,
+ char* relation):
"""
Convert int64-array of period ordinals from one frequency to another, and
if upsampling, choose to use start ('S') or end ('E') of period.
"""
cdef:
- ndarray[int64_t] result
+ ndarray[i8] result
Py_ssize_t i, n
freq_conv_func func
asfreq_info finfo
- int64_t val, ordinal
- char relation
+ i8 val, ordinal
n = len(arr)
result = np.empty(n, dtype=np.int64)
@@ -1826,27 +1862,21 @@ def period_asfreq_arr(ndarray[int64_t] arr, int freq1, int freq2, bint end):
func = get_asfreq_func(freq1, freq2)
get_asfreq_info(freq1, freq2, &finfo)
- if end:
- relation = END
- else:
- relation = START
-
for i in range(n):
val = func(arr[i], relation, &finfo)
- if val == INT32_MIN:
+ if val == INT64_MIN:
raise ValueError("Unable to convert to desired frequency.")
result[i] = val
return result
-def period_ordinal(int y, int m, int d, int h, int min, int s, int freq):
- cdef:
- int64_t ordinal
- return get_period_ordinal(y, m, d, h, min, s, freq)
+cpdef i8 period_ordinal(i8 y, i8 m, i8 d, i8 h, i8 min, i8 s, i8 us, i8 freq):
+ cdef i8 ordinal = get_period_ordinal(y, m, d, h, min, s, us, freq)
+ return ordinal
-cpdef int64_t period_ordinal_to_dt64(int64_t ordinal, int freq):
+cpdef i8 period_ordinal_to_dt64(i8 ordinal, int freq):
cdef:
pandas_datetimestruct dts
date_info dinfo
@@ -1858,12 +1888,14 @@ cpdef int64_t period_ordinal_to_dt64(int64_t ordinal, int freq):
dts.day = dinfo.day
dts.hour = dinfo.hour
dts.min = dinfo.minute
- dts.sec = int(dinfo.second)
- dts.us = dts.ps = 0
+ dts.sec = dinfo.second
+ dts.us = dinfo.microsecond
+ dts.ps = 0
return pandas_datetimestruct_to_datetime(PANDAS_FR_ns, &dts)
-def period_format(int64_t value, int freq, object fmt=None):
+
+cpdef object period_format(i8 value, int freq, object fmt=None):
cdef:
int freq_group
@@ -1876,8 +1908,8 @@ def period_format(int64_t value, int freq, object fmt=None):
elif freq_group == 3000: # FR_MTH
fmt = b'%Y-%m'
elif freq_group == 4000: # WK
- left = period_asfreq(value, freq, 6000, 0)
- right = period_asfreq(value, freq, 6000, 1)
+ left = period_asfreq(value, freq, 6000, 'S')
+ right = period_asfreq(value, freq, 6000, 'E')
return '%s/%s' % (period_format(left, 6000),
period_format(right, 6000))
elif (freq_group == 5000 # BUS
@@ -1889,26 +1921,31 @@ def period_format(int64_t value, int freq, object fmt=None):
fmt = b'%Y-%m-%d %H:%M'
elif freq_group == 9000: # SEC
fmt = b'%Y-%m-%d %H:%M:%S'
+ elif freq_group == 10000: # USEC
+ fmt = b'%Y-%m-%d %H:%M:%S.%u'
else:
raise ValueError('Unknown freq: %d' % freq)
return _period_strftime(value, freq, fmt)
-cdef list extra_fmts = [(b"%q", b"^`AB`^"),
- (b"%f", b"^`CD`^"),
- (b"%F", b"^`EF`^")]
+cdef tuple extra_fmts = ((b"%q", b"^`AB`^"),
+ (b"%f", b"^`CD`^"),
+ (b"%F", b"^`EF`^"),
+ (b"%u", b"^`GH`^"))
+
-cdef list str_extra_fmts = ["^`AB`^", "^`CD`^", "^`EF`^"]
+cdef tuple str_extra_fmts = ("^`AB`^", "^`CD`^", "^`EF`^", "^`GH`^")
-cdef _period_strftime(int64_t value, int freq, object fmt):
+
+cdef object _period_strftime(i8 value, int freq, object fmt):
cdef:
Py_ssize_t i
date_info dinfo
char *formatted
object pat, repl, result
list found_pat = [False] * len(extra_fmts)
- int year, quarter
+ i8 year, quarter
if PyUnicode_Check(fmt):
fmt = fmt.encode('utf-8')
@@ -1937,6 +1974,8 @@ cdef _period_strftime(int64_t value, int freq, object fmt):
repl = '%.2d' % (year % 100)
elif i == 2:
repl = '%d' % year
+ elif i == 3:
+ repl = '%.6u' % dinfo.microsecond
result = result.replace(str_extra_fmts[i], repl)
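The sentinel-token dance is easier to follow in a plain-Python sketch (hypothetical helper; only `%q` and the new `%u` directive are shown): custom directives are first swapped for opaque tokens that `strftime` passes through untouched, then filled in afterwards.

```python
from datetime import datetime

# custom directive -> sentinel that strftime leaves alone
EXTRA_FMTS = {"%q": "^`AB`^", "%u": "^`GH`^"}

def period_strftime(dt, fmt, quarter):
    for pat, tok in EXTRA_FMTS.items():
        fmt = fmt.replace(pat, tok)
    out = dt.strftime(fmt)          # sentinels survive strftime verbatim
    out = out.replace(EXTRA_FMTS["%q"], "%d" % quarter)
    out = out.replace(EXTRA_FMTS["%u"], "%.6d" % dt.microsecond)
    return out
```

So `period_strftime(datetime(2013, 1, 9, 12, 0, 0, 123), "%Y-Q%q %u", 1)` yields `'2013-Q1 000123'`.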
@@ -1948,22 +1987,19 @@ cdef _period_strftime(int64_t value, int freq, object fmt):
# period accessors
-ctypedef int (*accessor)(int64_t ordinal, int freq) except INT32_MIN
+ctypedef i8 (*accessor)(i8 ordinal, i8 freq) except INT64_MIN
+
-def get_period_field(int code, int64_t value, int freq):
+cpdef i8 get_period_field(int code, i8 value, int freq):
cdef accessor f = _get_accessor_func(code)
return f(value, freq)
-def get_period_field_arr(int code, ndarray[int64_t] arr, int freq):
- cdef:
- Py_ssize_t i, sz
- ndarray[int64_t] out
- accessor f
-
- f = _get_accessor_func(code)
- sz = len(arr)
- out = np.empty(sz, dtype=np.int64)
+cpdef ndarray[i8] get_period_field_arr(int code, ndarray[i8] arr, int freq):
+ cdef:
+ Py_ssize_t i, sz = len(arr)
+ ndarray[i8] out = np.empty(sz, dtype=np.int64)
+ accessor f = _get_accessor_func(code)
for i in range(sz):
out[i] = f(arr[i], freq)
@@ -1971,7 +2007,6 @@ def get_period_field_arr(int code, ndarray[int64_t] arr, int freq):
return out
-
cdef accessor _get_accessor_func(int code):
if code == 0:
return &pyear
@@ -1995,14 +2030,16 @@ cdef accessor _get_accessor_func(int code):
return &pday_of_year
elif code == 10:
return &pweekday
+ elif code == 11:
+ return &pmicrosecond
else:
raise ValueError('Unrecognized code: %s' % code)
-def extract_ordinals(ndarray[object] values, freq):
+cpdef ndarray[i8] extract_ordinals(ndarray[object] values, freq):
cdef:
Py_ssize_t i, n = len(values)
- ndarray[int64_t] ordinals = np.empty(n, dtype=np.int64)
+ ndarray[i8] ordinals = np.empty(n, dtype=np.int64)
object p
for i in range(n):
@@ -2013,7 +2050,8 @@ def extract_ordinals(ndarray[object] values, freq):
return ordinals
-cpdef resolution(ndarray[int64_t] stamps, tz=None):
+
+cpdef int resolution(ndarray[i8] stamps, object tz=None):
cdef:
Py_ssize_t i, n = len(stamps)
pandas_datetimestruct dts
@@ -2022,23 +2060,29 @@ cpdef resolution(ndarray[int64_t] stamps, tz=None):
if tz is not None:
if isinstance(tz, basestring):
tz = pytz.timezone(tz)
+
return _reso_local(stamps, tz)
else:
for i in range(n):
if stamps[i] == NPY_NAT:
continue
+
pandas_datetime_to_datetimestruct(stamps[i], PANDAS_FR_ns, &dts)
curr_reso = _reso_stamp(&dts)
+
if curr_reso < reso:
reso = curr_reso
+
return reso
+
US_RESO = 0
S_RESO = 1
T_RESO = 2
H_RESO = 3
D_RESO = 4
+
cdef inline int _reso_stamp(pandas_datetimestruct *dts):
if dts.us != 0:
return US_RESO
@@ -2050,30 +2094,35 @@ cdef inline int _reso_stamp(pandas_datetimestruct *dts):
return H_RESO
return D_RESO
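In Python terms the resolution machinery looks roughly like this (the middle of `_reso_stamp` is elided by the hunk, so the seconds and minutes checks below are an assumption inferred from the `S_RESO`/`T_RESO` constants; lower codes mean finer resolution):

```python
US_RESO, S_RESO, T_RESO, H_RESO, D_RESO = range(5)

def reso_stamp(hour, minute, sec, us):
    # the finest nonzero component determines the stamp's resolution
    if us:
        return US_RESO
    if sec:
        return S_RESO
    if minute:
        return T_RESO
    if hour:
        return H_RESO
    return D_RESO

def resolution(stamps):
    # the array resolution is the finest resolution seen in any stamp
    return min((reso_stamp(*s) for s in stamps), default=D_RESO)
```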
-cdef _reso_local(ndarray[int64_t] stamps, object tz):
+
+cdef int _reso_local(ndarray[i8] stamps, object tz):
cdef:
Py_ssize_t n = len(stamps)
int reso = D_RESO, curr_reso
- ndarray[int64_t] trans, deltas, pos
+ ndarray[i8] trans, deltas, pos
pandas_datetimestruct dts
+ i8 billion = 1000000000
if _is_utc(tz):
for i in range(n):
if stamps[i] == NPY_NAT:
continue
+
pandas_datetime_to_datetimestruct(stamps[i], PANDAS_FR_ns, &dts)
curr_reso = _reso_stamp(&dts)
+
if curr_reso < reso:
reso = curr_reso
elif _is_tzlocal(tz):
for i in range(n):
if stamps[i] == NPY_NAT:
continue
+
pandas_datetime_to_datetimestruct(stamps[i], PANDAS_FR_ns,
&dts)
dt = datetime(dts.year, dts.month, dts.day, dts.hour,
dts.min, dts.sec, dts.us, tz)
- delta = int(total_seconds(_get_utcoffset(tz, dt))) * 1000000000
+ delta = total_seconds(_get_utcoffset(tz, dt)) * billion
pandas_datetime_to_datetimestruct(stamps[i] + delta,
PANDAS_FR_ns, &dts)
curr_reso = _reso_stamp(&dts)
@@ -2084,8 +2133,10 @@ cdef _reso_local(ndarray[int64_t] stamps, object tz):
trans = _get_transitions(tz)
deltas = _get_deltas(tz)
_pos = trans.searchsorted(stamps, side='right') - 1
+
if _pos.dtype != np.int64:
_pos = _pos.astype(np.int64)
+
pos = _pos
# statictzinfo
@@ -2102,9 +2153,11 @@ cdef _reso_local(ndarray[int64_t] stamps, object tz):
for i in range(n):
if stamps[i] == NPY_NAT:
continue
+
pandas_datetime_to_datetimestruct(stamps[i] + deltas[pos[i]],
PANDAS_FR_ns, &dts)
curr_reso = _reso_stamp(&dts)
+
if curr_reso < reso:
reso = curr_reso
| (Continues #2145)
Alrighty. This should be fine. There may be a few formatting changes that got rolled back in the git diff / apply-patch step, but all non-skipped tests pass so I figured it wasn't a big deal. Here's what I did to fix the problem, which was caused by continually merging with upstream:
``` shell
function get_my_commits {
local curdir="$(pwd)"
cd "$1"
# %n adds a newline in addition to the file name
# sed removes blank lines here
# sort -u sorts and removes duplicate lines
git log --name-only --pretty=%n --author=Phillip 0eeff..master | sort -u | sed '/^$/d'
cd "$curdir"
}
old_pandas=/path/to/evil/pandas/merge/of/death
new_pandas=/shiny/new/checkout/of/pandas
get_my_commits $old_pandas | while read file; do
# some boolean tests to filter out setup.py and missing files between versions
diff -pu "$old_pandas/$file" "$new_pandas/$file" | patch -p0 "$new_pandas/$file"
done
```
Needless to say I won't be doing any more merging of upstream into my branch. Thanks to those who helped me out here.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2670 | 2013-01-09T21:38:40Z | 2013-04-01T02:04:24Z | null | 2014-08-10T13:31:25Z |
add support for microsecond periods | diff --git a/pandas/src/datetime.pxd b/pandas/src/datetime.pxd
index 1a977aab48514..9ae116b06c3b2 100644
--- a/pandas/src/datetime.pxd
+++ b/pandas/src/datetime.pxd
@@ -1,7 +1,5 @@
-from numpy cimport int64_t, int32_t, npy_int64, npy_int32, ndarray
-from cpython cimport PyObject
-
-from cpython cimport PyUnicode_Check, PyUnicode_AsASCIIString
+from numpy cimport int64_t as i8, int32_t as i4
+from cpython cimport PyObject, PyUnicode_Check, PyUnicode_AsASCIIString
cdef extern from "headers/stdint.h":
@@ -9,7 +7,6 @@ cdef extern from "headers/stdint.h":
enum: INT32_MIN
-
cdef extern from "datetime.h":
ctypedef class datetime.date [object PyDateTime_Date]:
@@ -45,8 +42,8 @@ cdef extern from "datetime_helper.h":
cdef extern from "numpy/ndarrayobject.h":
- ctypedef int64_t npy_timedelta
- ctypedef int64_t npy_datetime
+ ctypedef i8 npy_timedelta
+ ctypedef i8 npy_datetime
ctypedef enum NPY_CASTING:
NPY_NO_CASTING
@@ -82,8 +79,7 @@ cdef extern from "datetime/np_datetime.h":
PANDAS_FR_as
ctypedef struct pandas_datetimestruct:
- npy_int64 year
- npy_int32 month, day, hour, min, sec, us, ps, as
+ i8 year, month, day, hour, min, sec, us, ps, as
int convert_pydatetime_to_datetimestruct(PyObject *obj,
pandas_datetimestruct *out,
@@ -98,7 +94,7 @@ cdef extern from "datetime/np_datetime.h":
int days_per_month_table[2][12]
int dayofweek(int y, int m, int d)
- int is_leapyear(int64_t year)
+ int is_leapyear(i8 year)
PANDAS_DATETIMEUNIT get_datetime64_unit(object o)
cdef extern from "datetime/np_datetime_strings.h":
@@ -145,7 +141,7 @@ cdef inline int _cstring_to_dts(char *val, int length,
return result
-cdef inline object _datetime64_to_datetime(int64_t val):
+cdef inline object _datetime64_to_datetime(i8 val):
cdef pandas_datetimestruct dts
pandas_datetime_to_datetimestruct(val, PANDAS_FR_ns, &dts)
return _dts_to_pydatetime(&dts)
@@ -155,7 +151,7 @@ cdef inline object _dts_to_pydatetime(pandas_datetimestruct *dts):
dts.day, dts.hour,
dts.min, dts.sec, dts.us)
-cdef inline int64_t _pydatetime_to_dts(object val, pandas_datetimestruct *dts):
+cdef inline i8 _pydatetime_to_dts(object val, pandas_datetimestruct *dts):
dts.year = PyDateTime_GET_YEAR(val)
dts.month = PyDateTime_GET_MONTH(val)
dts.day = PyDateTime_GET_DAY(val)
@@ -166,8 +162,7 @@ cdef inline int64_t _pydatetime_to_dts(object val, pandas_datetimestruct *dts):
dts.ps = dts.as = 0
return pandas_datetimestruct_to_datetime(PANDAS_FR_ns, dts)
-cdef inline int64_t _dtlike_to_datetime64(object val,
- pandas_datetimestruct *dts):
+cdef inline i8 _dtlike_to_datetime64(object val, pandas_datetimestruct *dts):
dts.year = val.year
dts.month = val.month
dts.day = val.day
@@ -178,12 +173,10 @@ cdef inline int64_t _dtlike_to_datetime64(object val,
dts.ps = dts.as = 0
return pandas_datetimestruct_to_datetime(PANDAS_FR_ns, dts)
-cdef inline int64_t _date_to_datetime64(object val,
- pandas_datetimestruct *dts):
+cdef inline i8 _date_to_datetime64(object val, pandas_datetimestruct *dts):
dts.year = PyDateTime_GET_YEAR(val)
dts.month = PyDateTime_GET_MONTH(val)
dts.day = PyDateTime_GET_DAY(val)
dts.hour = dts.min = dts.sec = dts.us = 0
dts.ps = dts.as = 0
return pandas_datetimestruct_to_datetime(PANDAS_FR_ns, dts)
-
diff --git a/pandas/src/period.c b/pandas/src/period.c
index 4e7ab44c7b150..72819a54d34da 100644
--- a/pandas/src/period.c
+++ b/pandas/src/period.c
@@ -13,62 +13,68 @@
* Code derived from scikits.timeseries
* ------------------------------------------------------------------*/
+#include <stdbool.h>
-static int mod_compat(int x, int m) {
- int result = x % m;
- if (result < 0) return result + m;
- return result;
+static i8
+mod_compat(i8 x, i8 m)
+{
+ i8 result = x % m;
+
+ if (result < 0)
+ result += m;
+
+ return result;
}
-static int floordiv(int x, int divisor) {
- if (x < 0) {
- if (mod_compat(x, divisor)) {
- return x / divisor - 1;
- }
- else return x / divisor;
- } else {
- return x / divisor;
- }
+static i8
+floordiv(i8 x, i8 divisor)
+{
+ i8 x_div_d = x / divisor;
+
+ if (x < 0 && mod_compat(x, divisor))
+ --x_div_d;
+
+ return x_div_d;
}
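For readers used to Python's floored `//` and `%`: the point of these two helpers is that C's native operators truncate toward zero, so floored behavior has to be built by hand. A Python sketch that emulates the C versions and checks them against the built-ins (which are already floored):

```python
def c_trunc_div(x, d):
    # C integer division truncates toward zero
    q = abs(x) // abs(d)
    return -q if (x < 0) != (d < 0) else q

def mod_compat(x, m):
    # C-style remainder (takes the sign of x), shifted into [0, m)
    result = x - c_trunc_div(x, m) * m
    return result + m if result < 0 else result

def floordiv(x, divisor):
    # mirrors the C helper above; divisor is assumed positive
    q = c_trunc_div(x, divisor)
    if x < 0 and mod_compat(x, divisor):
        q -= 1
    return q
```

With `divisor > 0`, `mod_compat` and `floordiv` agree with Python's `%` and `//` everywhere, including negative `x`.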
static asfreq_info NULL_AF_INFO;
/* Table with day offsets for each month (0-based, without and with leap) */
-static int month_offset[2][13] = {
+static i8 month_offset[2][13] = {
{ 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365 },
{ 0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366 }
};
/* Table of number of days in a month (0-based, without and with leap) */
-static int days_in_month[2][12] = {
+static i8 days_in_month[2][12] = {
{ 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 },
{ 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 }
};
/* Return 1/0 iff year points to a leap year in calendar. */
-static int dInfoCalc_Leapyear(npy_int64 year, int calendar)
+static bool
+dInfoCalc_Leapyear(i8 year, enum CalendarType calendar)
{
- if (calendar == GREGORIAN_CALENDAR) {
- return (year % 4 == 0) && ((year % 100 != 0) || (year % 400 == 0));
- } else {
- return (year % 4 == 0);
- }
+ bool ymod4_is0 = year % 4 == 0;
+
+ if (calendar == GREGORIAN)
+ return ymod4_is0 && (year % 100 != 0 || year % 400 == 0);
+
+ return ymod4_is0;
}
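The same rule reads more plainly in Python (a sketch of the logic above, not a pandas function):

```python
GREGORIAN, JULIAN = "gregorian", "julian"

def is_leapyear(year, calendar=GREGORIAN):
    div4 = year % 4 == 0
    if calendar == GREGORIAN:
        # century years are leap only when divisible by 400
        return div4 and (year % 100 != 0 or year % 400 == 0)
    return div4
```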
/* Return the day of the week for the given absolute date. */
-static int dInfoCalc_DayOfWeek(npy_int64 absdate)
+static i8
+dInfoCalc_DayOfWeek(i8 absdate)
{
- int day_of_week;
-
- if (absdate >= 1) {
- day_of_week = (absdate - 1) % 7;
- } else {
- day_of_week = 6 - ((-absdate) % 7);
- }
- return day_of_week;
+ return absdate >= 1 ? (absdate - 1) % 7 : 6 - (-absdate % 7);
}
-static int monthToQuarter(int month) { return ((month-1)/3)+1; }
+static i8
+monthToQuarter(i8 month)
+{
+ return (month - 1) / 3 + 1;
+}
/* Return the year offset, that is the absolute date of the day
31.12.(year-1) in the given calendar.
@@ -78,66 +84,74 @@ static int monthToQuarter(int month) { return ((month-1)/3)+1; }
using the Gregorian Epoch) value by two days because the Epoch
(0001-01-01) in the Julian calendar lies 2 days before the Epoch in
the Gregorian calendar. */
-static int dInfoCalc_YearOffset(npy_int64 year, int calendar)
+static i8
+dInfoCalc_YearOffset(i8 yr, enum CalendarType calendar)
{
- year--;
- if (calendar == GREGORIAN_CALENDAR) {
- if (year >= 0 || -1/4 == -1)
- return year*365 + year/4 - year/100 + year/400;
- else
- return year*365 + (year-3)/4 - (year-99)/100 + (year-399)/400;
- }
- else if (calendar == JULIAN_CALENDAR) {
- if (year >= 0 || -1/4 == -1)
- return year*365 + year/4 - 2;
+ i8 year = yr;
+ --year;
+
+ if (calendar == GREGORIAN)
+ if (year >= 0 || -1 / 4 == -1)
+ return year * 365 + year / 4 - year / 100 + year / 400;
+ else
+ return year * 365 + (year - 3) / 4 - (year - 99) / 100 +
+ (year - 399) / 400;
+ else if (calendar == JULIAN)
+ if (year >= 0 || -1 / 4 == -1)
+ return year * 365 + year / 4 - 2;
+ else
+ return year * 365 + (year - 3) / 4 - 2;
else
- return year*365 + (year-3)/4 - 2;
- }
- Py_Error(PyExc_ValueError, "unknown calendar");
- onError:
+ Py_Error(PyExc_ValueError, "unknown calendar");
+onError:
return INT_ERR_CODE;
}
/* Set the instance's value using the given date and time. calendar may be set
- * to the flags: GREGORIAN_CALENDAR, JULIAN_CALENDAR to indicate the calendar
+ * to the flags: GREGORIAN, JULIAN to indicate the calendar
* to be used. */
-static int dInfoCalc_SetFromDateAndTime(struct date_info *dinfo,
- int year, int month, int day, int hour, int minute, double second,
- int calendar)
+static i8
+dInfoCalc_SetFromDateAndTime(date_info *dinfo, i8 year, i8 month, i8 day,
+ i8 hour, i8 minute, i8 second, i8 microsecond,
+ enum CalendarType calendar)
{
/* Calculate the absolute date */
{
- int leap;
- npy_int64 absdate;
- int yearoffset;
+ i8 leap, absdate, yearoffset;
/* Range check */
Py_AssertWithArg(year > -(INT_MAX / 366) && year < (INT_MAX / 366),
- PyExc_ValueError,
- "year out of range: %i",
- year);
+ PyExc_ValueError,
+ "year out of range: %li",
+ year);
/* Is it a leap year ? */
leap = dInfoCalc_Leapyear(year, calendar);
/* Negative month values indicate months relative to the years end */
- if (month < 0) month += 13;
+ if (month < 0)
+ month += 13;
+
Py_AssertWithArg(month >= 1 && month <= 12,
- PyExc_ValueError,
- "month out of range (1-12): %i",
- month);
+ PyExc_ValueError,
+ "month out of range (1-12): %li",
+ month);
/* Negative values indicate days relative to the months end */
- if (day < 0) day += days_in_month[leap][month - 1] + 1;
+ if (day < 0)
+ day += days_in_month[leap][month - 1] + 1;
+
Py_AssertWithArg(day >= 1 && day <= days_in_month[leap][month - 1],
- PyExc_ValueError,
- "day out of range: %i",
- day);
+ PyExc_ValueError,
+ "day out of range: %li",
+ day);
yearoffset = dInfoCalc_YearOffset(year, calendar);
- if (PyErr_Occurred()) goto onError;
+
+ if (PyErr_Occurred())
+ goto onError;
absdate = day + month_offset[leap][month - 1] + yearoffset;
@@ -145,11 +159,11 @@ static int dInfoCalc_SetFromDateAndTime(struct date_info *dinfo,
dinfo->year = year;
dinfo->month = month;
- dinfo->quarter = ((month-1)/3)+1;
+ dinfo->quarter = (month - 1) / 3 + 1;
dinfo->day = day;
dinfo->day_of_week = dInfoCalc_DayOfWeek(absdate);
- dinfo->day_of_year = (short)(absdate - yearoffset);
+ dinfo->day_of_year = absdate - yearoffset;
dinfo->calendar = calendar;
}
@@ -157,60 +171,74 @@ static int dInfoCalc_SetFromDateAndTime(struct date_info *dinfo,
/* Calculate the absolute time */
{
Py_AssertWithArg(hour >= 0 && hour <= 23,
- PyExc_ValueError,
- "hour out of range (0-23): %i",
- hour);
+ PyExc_ValueError,
+ "hour out of range (0-23): %li",
+ hour);
Py_AssertWithArg(minute >= 0 && minute <= 59,
- PyExc_ValueError,
- "minute out of range (0-59): %i",
- minute);
- Py_AssertWithArg(second >= (double)0.0 &&
- (second < (double)60.0 ||
- (hour == 23 && minute == 59 &&
- second < (double)61.0)),
- PyExc_ValueError,
- "second out of range (0.0 - <60.0; <61.0 for 23:59): %f",
- second);
-
- dinfo->abstime = (double)(hour*3600 + minute*60) + second;
+ PyExc_ValueError,
+ "minute out of range (0-59): %li",
+ minute);
+ Py_AssertWithArg(second >= 0 &&
+ (second < 60L || (hour == 23 && minute == 59 &&
+ second < 61)),
+ PyExc_ValueError,
+ "second out of range (0 - <60; <61 for 23:59): %li",
+ second);
+ Py_AssertWithArg(microsecond >= 0 && (microsecond < 1000000 ||
+ (hour == 23 && minute == 59 &&
+ second < 61 &&
+ microsecond < 1000001)),
+ PyExc_ValueError,
+ "microsecond out of range (0 - <1000000; <1000001 for 23:59:59): %li",
+ microsecond);
+
+ dinfo->abstime = hour * US_PER_HOUR + minute * US_PER_MINUTE +
+ second * US_PER_SECOND + microsecond;
dinfo->hour = hour;
dinfo->minute = minute;
dinfo->second = second;
+ dinfo->microsecond = microsecond;
}
+
return 0;
- onError:
+onError:
return INT_ERR_CODE;
}
/* Sets the date part of the date_info struct using the indicated
calendar.
- XXX This could also be done using some integer arithmetics rather
- than with this iterative approach... */
-static
-int dInfoCalc_SetFromAbsDate(register struct date_info *dinfo,
- npy_int64 absdate, int calendar)
+ XXX This could also be done using some integer arithmetics rather
+ than with this iterative approach... */
+static i8
+dInfoCalc_SetFromAbsDate(register date_info *dinfo, i8 absdate,
+ enum CalendarType calendar)
{
- register npy_int64 year;
- npy_int64 yearoffset;
- int leap,dayoffset;
- int *monthoffset;
+ register i8 year;
+ i8 yearoffset, leap, dayoffset, *monthoffset = NULL;
+
/* Approximate year */
- if (calendar == GREGORIAN_CALENDAR) {
- year = (npy_int64)(((double)absdate) / 365.2425);
- } else if (calendar == JULIAN_CALENDAR) {
- year = (npy_int64)(((double)absdate) / 365.25);
- } else {
+ switch (calendar) {
+ case GREGORIAN:
+ year = absdate / 365.2425;
+ break;
+ case JULIAN:
+ year = absdate / 365.25;
+ break;
+ default:
Py_Error(PyExc_ValueError, "unknown calendar");
+ break;
}
- if (absdate > 0) year++;
+ if (absdate > 0)
+ ++year;
/* Apply corrections to reach the correct year */
while (1) {
+
/* Calculate the year offset */
yearoffset = dInfoCalc_YearOffset(year, calendar);
if (PyErr_Occurred())
@@ -219,7 +247,7 @@ int dInfoCalc_SetFromAbsDate(register struct date_info *dinfo,
/* Backward correction: absdate must be greater than the
yearoffset */
if (yearoffset >= absdate) {
- year--;
+ --year;
continue;
}
@@ -228,9 +256,10 @@ int dInfoCalc_SetFromAbsDate(register struct date_info *dinfo,
/* Forward correction: non leap years only have 365 days */
if (dayoffset > 365 && !leap) {
- year++;
+ ++year;
continue;
}
+
break;
}
@@ -240,12 +269,11 @@ int dInfoCalc_SetFromAbsDate(register struct date_info *dinfo,
/* Now iterate to find the month */
monthoffset = month_offset[leap];
{
- register int month;
+ register i8 month;
- for (month = 1; month < 13; month++) {
+ for (month = 1; month < 13; ++month)
if (monthoffset[month] >= dayoffset)
- break;
- }
+ break;
dinfo->month = month;
dinfo->quarter = monthToQuarter(month);
@@ -259,71 +287,88 @@ int dInfoCalc_SetFromAbsDate(register struct date_info *dinfo,
return 0;
- onError:
+onError:
return INT_ERR_CODE;
}
///////////////////////////////////////////////
// frequency specific conversion routines
-// each function must take an integer fromDate and
+// each function must take an integer fromDate and
-// a char relation ('S' or 'E' for 'START' or 'END')
+// a string relation ("S" or "E" for 'START' or 'END')
///////////////////////////////////////////////////////////////////////
// helpers for frequency conversion routines //
-static npy_int64 DtoB_weekday(npy_int64 absdate) {
- return (((absdate) / 7) * 5) + (absdate) % 7 - BDAY_OFFSET;
+static i8
+DtoB_weekday(i8 absdate)
+{
+ return (absdate / 7) * 5 + absdate % 7 - BDAY_OFFSET;
}
-static npy_int64 DtoB_WeekendToMonday(npy_int64 absdate, int day_of_week) {
- if (day_of_week > 4) {
- //change to Monday after weekend
- absdate += (7 - day_of_week);
- }
+static i8
+DtoB_WeekendToMonday(i8 absdate, i8 day_of_week)
+{
+ if (day_of_week > 4)
+ // change to Monday after weekend
+ absdate += 7 - day_of_week;
+
return DtoB_weekday(absdate);
}
-static npy_int64 DtoB_WeekendToFriday(npy_int64 absdate, int day_of_week) {
- if (day_of_week > 4) {
- //change to friday before weekend
- absdate -= (day_of_week - 4);
- }
+static i8
+DtoB_WeekendToFriday(i8 absdate, i8 day_of_week)
+{
+ if (day_of_week > 4)
+ // change to friday before weekend
+ absdate -= day_of_week - 4;
+
return DtoB_weekday(absdate);
}
-static npy_int64 absdate_from_ymd(int y, int m, int d) {
- struct date_info tempDate;
- if (dInfoCalc_SetFromDateAndTime(&tempDate, y, m, d, 0, 0, 0, GREGORIAN_CALENDAR)) {
+static i8
+absdate_from_ymd(i8 y, i8 m, i8 d)
+{
+ date_info td;
+
+ if (dInfoCalc_SetFromDateAndTime(&td, y, m, d, 0, 0, 0, 0, GREGORIAN))
return INT_ERR_CODE;
- }
- return tempDate.absdate;
+
+ return td.absdate;
}
//************ FROM DAILY ***************
-static npy_int64 asfreq_DtoA(npy_int64 ordinal, char relation, asfreq_info *af_info) {
+static i8
+asfreq_DtoA(i8 ordinal, const char *relation, asfreq_info *af_info)
+{
+ date_info dinfo;
- struct date_info dinfo;
- if (dInfoCalc_SetFromAbsDate(&dinfo, ordinal + ORD_OFFSET,
- GREGORIAN_CALENDAR)) return INT_ERR_CODE;
- if (dinfo.month > af_info->to_a_year_end) {
- return (npy_int64)(dinfo.year + 1 - BASE_YEAR);
- }
- else {
- return (npy_int64)(dinfo.year - BASE_YEAR);
- }
+ if (dInfoCalc_SetFromAbsDate(&dinfo, ordinal + ORD_OFFSET, GREGORIAN))
+ return INT_ERR_CODE;
+
+ if (dinfo.month > af_info->to_a_year_end)
+ return dinfo.year + 1 - BASE_YEAR;
+ else
+ return dinfo.year - BASE_YEAR;
}
-static npy_int64 DtoQ_yq(npy_int64 ordinal, asfreq_info *af_info,
- int *year, int *quarter) {
- struct date_info dinfo;
- if (dInfoCalc_SetFromAbsDate(&dinfo, ordinal + ORD_OFFSET,
- GREGORIAN_CALENDAR)) return INT_ERR_CODE;
+static i8
+DtoQ_yq(i8 ordinal, asfreq_info *af_info, i8 *year, i8 *quarter)
+{
+ date_info dinfo;
+
+ if (dInfoCalc_SetFromAbsDate(&dinfo, ordinal + ORD_OFFSET, GREGORIAN))
+ return INT_ERR_CODE;
+
if (af_info->to_q_year_end != 12) {
dinfo.month -= af_info->to_q_year_end;
- if (dinfo.month <= 0) { dinfo.month += 12; }
- else { dinfo.year += 1; }
+
+ if (dinfo.month <= 0)
+ dinfo.month += 12;
+ else
+ ++dinfo.year;
+
dinfo.quarter = monthToQuarter(dinfo.month);
}
@@ -334,636 +379,1122 @@ static npy_int64 DtoQ_yq(npy_int64 ordinal, asfreq_info *af_info,
}
-static npy_int64 asfreq_DtoQ(npy_int64 ordinal, char relation, asfreq_info *af_info) {
-
- int year, quarter;
+static i8
+asfreq_DtoQ(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ i8 year, quarter;
- if (DtoQ_yq(ordinal, af_info, &year, &quarter) == INT_ERR_CODE) {
+ if (DtoQ_yq(ordinal, af_info, &year, &quarter) == INT_ERR_CODE)
return INT_ERR_CODE;
- }
- return (npy_int64)((year - BASE_YEAR) * 4 + quarter - 1);
+ return (year - BASE_YEAR) * 4 + quarter - 1;
}
-static npy_int64 asfreq_DtoM(npy_int64 ordinal, char relation, asfreq_info *af_info) {
+static i8
+asfreq_DtoM(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ date_info dinfo;
- struct date_info dinfo;
- if (dInfoCalc_SetFromAbsDate(&dinfo, ordinal + ORD_OFFSET, GREGORIAN_CALENDAR))
+ if (dInfoCalc_SetFromAbsDate(&dinfo, ordinal + ORD_OFFSET, GREGORIAN))
return INT_ERR_CODE;
- return (npy_int64)((dinfo.year - BASE_YEAR) * 12 + dinfo.month - 1);
+
+ return (dinfo.year - BASE_YEAR) * 12 + dinfo.month - 1;
}
-static npy_int64 asfreq_DtoW(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- return (ordinal + ORD_OFFSET - (1 + af_info->to_week_end))/7 + 1 - WEEK_OFFSET;
+static i8
+asfreq_DtoW(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return (ordinal + ORD_OFFSET - (1 + af_info->to_week_end)) / 7
+ + 1 - WEEK_OFFSET;
}
-static npy_int64 asfreq_DtoB(npy_int64 ordinal, char relation, asfreq_info *af_info) {
+static i8
+asfreq_DtoB(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ date_info dinfo;
- struct date_info dinfo;
- if (dInfoCalc_SetFromAbsDate(&dinfo, ordinal + ORD_OFFSET,
- GREGORIAN_CALENDAR)) return INT_ERR_CODE;
+ if (dInfoCalc_SetFromAbsDate(&dinfo, ordinal + ORD_OFFSET, GREGORIAN))
+ return INT_ERR_CODE;
- if (relation == 'S') {
+ if (!strcmp(relation, "S"))
return DtoB_WeekendToFriday(dinfo.absdate, dinfo.day_of_week);
- } else {
+ else
return DtoB_WeekendToMonday(dinfo.absdate, dinfo.day_of_week);
- }
}
// needed for getDateInfo function
-static npy_int64 asfreq_DtoD(npy_int64 ordinal, char relation, asfreq_info *af_info) { return ordinal; }
+static i8
+asfreq_DtoD(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return ordinal;
+}
-static npy_int64 asfreq_DtoHIGHFREQ(npy_int64 ordinal, char relation, npy_int64 per_day) {
- if (relation == 'S') {
- return ordinal * per_day;
- }
- else {
- return (ordinal+ 1) * per_day - 1;
- }
+static i8
+asfreq_DtoHIGHFREQ(i8 ordinal, const char* relation, i8 per_day)
+{
+ if (!strcmp(relation, "S"))
+ return ordinal * per_day;
+ else
+ return (ordinal + 1) * per_day - 1;
+}
+
+static i8
+asfreq_StoHIGHFREQ(i8 ordinal, const char* relation, i8 per_sec)
+{
+ if (!strcmp(relation, "S"))
+ return ordinal * per_sec;
+ else
+ return (ordinal + 1) * per_sec - 1;
+}
+
+static i8
+asfreq_DtoH(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoHIGHFREQ(ordinal, relation, 24L);
+}
+
+static i8
+asfreq_DtoT(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoHIGHFREQ(ordinal, relation, 24 * 60L);
+}
+
+static i8
+asfreq_DtoS(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoHIGHFREQ(ordinal, relation, 24 * 60L * 60L);
+}
+
+static i8
+asfreq_DtoU(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoHIGHFREQ(ordinal, relation, US_PER_DAY);
}
-static npy_int64 asfreq_DtoH(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoHIGHFREQ(ordinal, relation, 24); }
-static npy_int64 asfreq_DtoT(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoHIGHFREQ(ordinal, relation, 24*60); }
-static npy_int64 asfreq_DtoS(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoHIGHFREQ(ordinal, relation, 24*60*60); }
//************ FROM SECONDLY ***************
-static npy_int64 asfreq_StoD(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return (ordinal)/(60*60*24); }
+static i8
+asfreq_StoD(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return ordinal / (60L * 60L * 24L);
+}
-static npy_int64 asfreq_StoA(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoA(asfreq_StoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
+static i8
+asfreq_StoA(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoA(asfreq_StoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
-static npy_int64 asfreq_StoQ(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoQ(asfreq_StoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
+static i8
+asfreq_StoQ(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoQ(asfreq_StoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
-static npy_int64 asfreq_StoM(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoM(asfreq_StoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
+static i8
+asfreq_StoM(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoM(asfreq_StoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
-static npy_int64 asfreq_StoW(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoW(asfreq_StoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
+static i8
+asfreq_StoW(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoW(asfreq_StoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
-static npy_int64 asfreq_StoB(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoB(asfreq_StoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
+static i8
+asfreq_StoB(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoB(asfreq_StoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
-static npy_int64 asfreq_StoT(npy_int64 ordinal, char relation, asfreq_info *af_info) {
+static i8
+asfreq_StoT(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
return ordinal / 60;
}
-static npy_int64 asfreq_StoH(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- return ordinal / (60*60);
+static i8
+asfreq_StoH(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return ordinal / 3600;
}
//************ FROM MINUTELY ***************
-static npy_int64 asfreq_TtoD(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return (ordinal)/(60*24); }
-
-static npy_int64 asfreq_TtoA(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoA(asfreq_TtoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
-static npy_int64 asfreq_TtoQ(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoQ(asfreq_TtoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
-static npy_int64 asfreq_TtoM(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoM(asfreq_TtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
-static npy_int64 asfreq_TtoW(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoW(asfreq_TtoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
-static npy_int64 asfreq_TtoB(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoB(asfreq_TtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
-
-static npy_int64 asfreq_TtoH(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- return ordinal / 60;
+static i8
+asfreq_TtoD(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return ordinal / (60 * 24);
}
-static npy_int64 asfreq_TtoS(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- if (relation == 'S') {
- return ordinal*60; }
- else {
- return ordinal*60 + 59;
- }
+static i8
+asfreq_TtoA(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoA(asfreq_TtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_TtoQ(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoQ(asfreq_TtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_TtoM(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoM(asfreq_TtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_TtoW(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoW(asfreq_TtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_TtoB(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoB(asfreq_TtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_TtoH(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return ordinal / 60;
+}
+
+static i8
+asfreq_TtoS(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ i8 out = ordinal * 60;
+
+ if (strcmp(relation, "S"))
+ out += 59;
+
+ return out;
}
//************ FROM HOURLY ***************
-static npy_int64 asfreq_HtoD(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return ordinal / 24; }
-static npy_int64 asfreq_HtoA(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoA(asfreq_HtoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
-static npy_int64 asfreq_HtoQ(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoQ(asfreq_HtoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
-static npy_int64 asfreq_HtoM(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoM(asfreq_HtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
-static npy_int64 asfreq_HtoW(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoW(asfreq_HtoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
-static npy_int64 asfreq_HtoB(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoB(asfreq_HtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
+static i8
+asfreq_HtoD(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return ordinal / 24;
+}
+
+static i8
+asfreq_HtoA(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoA(asfreq_HtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_HtoQ(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoQ(asfreq_HtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_HtoM(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoM(asfreq_HtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_HtoW(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoW(asfreq_HtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_HtoB(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoB(asfreq_HtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
// calculation works out the same as TtoS, so we just call that function for HtoT
-static npy_int64 asfreq_HtoT(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_TtoS(ordinal, relation, &NULL_AF_INFO); }
+static i8
+asfreq_HtoT(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_TtoS(ordinal, relation, &NULL_AF_INFO);
+}
-static npy_int64 asfreq_HtoS(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- if (relation == 'S') {
- return ordinal*60*60;
- }
- else {
- return (ordinal + 1)*60*60 - 1;
- }
+static i8
+asfreq_HtoS(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ i8 ret, secs_per_hour = 3600;
+
+ if (!strcmp(relation, "S"))
+ ret = ordinal * secs_per_hour;
+ else
+ ret = (ordinal + 1) * secs_per_hour - 1;
+
+ return ret;
}
//************ FROM BUSINESS ***************
-static npy_int64 asfreq_BtoD(npy_int64 ordinal, char relation, asfreq_info *af_info)
- {
- ordinal += BDAY_OFFSET;
- return (((ordinal - 1) / 5) * 7 +
- mod_compat(ordinal - 1, 5) + 1 - ORD_OFFSET);
- }
+static i8
+asfreq_BtoD(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ i8 ord = ordinal;
+ ord += BDAY_OFFSET;
+ return (ord - 1) / 5 * 7 + mod_compat(ord - 1, 5) + 1 - ORD_OFFSET;
+}
-static npy_int64 asfreq_BtoA(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoA(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
+static i8
+asfreq_BtoA(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoA(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
-static npy_int64 asfreq_BtoQ(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoQ(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
+static i8
+asfreq_BtoQ(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoQ(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
-static npy_int64 asfreq_BtoM(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoM(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
+static i8
+asfreq_BtoM(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoM(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
-static npy_int64 asfreq_BtoW(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoW(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
+static i8
+asfreq_BtoW(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoW(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
-static npy_int64 asfreq_BtoH(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoH(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
+static i8
+asfreq_BtoH(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoH(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
-static npy_int64 asfreq_BtoT(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoT(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
+static i8
+asfreq_BtoT(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoT(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
-static npy_int64 asfreq_BtoS(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoS(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
+static i8
+asfreq_BtoS(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoS(asfreq_BtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
//************ FROM WEEKLY ***************
-static npy_int64 asfreq_WtoD(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- ordinal += WEEK_OFFSET;
- if (relation == 'S') {
- return ordinal * 7 - 6 + af_info->from_week_end - ORD_OFFSET;
- }
- else {
- return ordinal * 7 + af_info->from_week_end - ORD_OFFSET;
- }
+static i8
+asfreq_WtoD(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ i8 k = 0, ord = ordinal;
+ ord += WEEK_OFFSET;
+ ord *= 7;
+
+ if (!strcmp(relation, "S"))
+ k = -6;
+
+ return ord + k + af_info->from_week_end - ORD_OFFSET;
+}
+
+static i8
+asfreq_WtoA(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoA(asfreq_WtoD(ordinal, "E", af_info), relation, af_info);
}
-static npy_int64 asfreq_WtoA(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- return asfreq_DtoA(asfreq_WtoD(ordinal, 'E', af_info), relation, af_info); }
-static npy_int64 asfreq_WtoQ(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- return asfreq_DtoQ(asfreq_WtoD(ordinal, 'E', af_info), relation, af_info); }
-static npy_int64 asfreq_WtoM(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- return asfreq_DtoM(asfreq_WtoD(ordinal, 'E', af_info), relation, &NULL_AF_INFO); }
+static i8
+asfreq_WtoQ(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoQ(asfreq_WtoD(ordinal, "E", af_info), relation, af_info);
+}
-static npy_int64 asfreq_WtoW(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoW(asfreq_WtoD(ordinal, relation, af_info), relation, af_info); }
+static i8
+asfreq_WtoM(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoM(asfreq_WtoD(ordinal, "E", af_info), relation,
+ &NULL_AF_INFO);
+}
-static npy_int64 asfreq_WtoB(npy_int64 ordinal, char relation, asfreq_info *af_info) {
+static i8
+asfreq_WtoW(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoW(asfreq_WtoD(ordinal, relation, af_info), relation,
+ af_info);
+}
- struct date_info dinfo;
- if (dInfoCalc_SetFromAbsDate(&dinfo,
- asfreq_WtoD(ordinal, relation, af_info) + ORD_OFFSET,
- GREGORIAN_CALENDAR)) return INT_ERR_CODE;
+static i8
+asfreq_WtoB(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ i8 wtod = asfreq_WtoD(ordinal, relation, af_info) + ORD_OFFSET;
- if (relation == 'S') {
- return DtoB_WeekendToMonday(dinfo.absdate, dinfo.day_of_week);
- }
- else {
- return DtoB_WeekendToFriday(dinfo.absdate, dinfo.day_of_week);
- }
+ date_info dinfo;
+ if (dInfoCalc_SetFromAbsDate(&dinfo, wtod, GREGORIAN))
+ return INT_ERR_CODE;
+
+ i8 (*f)(i8 absdate, i8 day_of_week) = NULL;
+ f = !strcmp(relation, "S") ? DtoB_WeekendToMonday : DtoB_WeekendToFriday;
+ return f(dinfo.absdate, dinfo.day_of_week);
+}
+
+static i8
+asfreq_WtoH(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoH(asfreq_WtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
}
-static npy_int64 asfreq_WtoH(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoH(asfreq_WtoD(ordinal, relation, af_info), relation, &NULL_AF_INFO); }
-static npy_int64 asfreq_WtoT(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoT(asfreq_WtoD(ordinal, relation, af_info), relation, &NULL_AF_INFO); }
-static npy_int64 asfreq_WtoS(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoS(asfreq_WtoD(ordinal, relation, af_info), relation, &NULL_AF_INFO); }
+static i8
+asfreq_WtoT(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoT(asfreq_WtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_WtoS(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoS(asfreq_WtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
//************ FROM MONTHLY ***************
-static void MtoD_ym(npy_int64 ordinal, int *y, int *m) {
+static void
+MtoD_ym(i8 ordinal, i8 *y, i8 *m)
+{
*y = floordiv(ordinal, 12) + BASE_YEAR;
*m = mod_compat(ordinal, 12) + 1;
}
-static npy_int64 asfreq_MtoD(npy_int64 ordinal, char relation, asfreq_info *af_info) {
+static i8
+asfreq_MtoD(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
- npy_int64 absdate;
- int y, m;
+ i8 absdate, y, m;
- if (relation == 'S') {
+ if (!strcmp(relation, "S")) {
MtoD_ym(ordinal, &y, &m);
- if ((absdate = absdate_from_ymd(y, m, 1)) == INT_ERR_CODE) return INT_ERR_CODE;
+
+ if ((absdate = absdate_from_ymd(y, m, 1)) == INT_ERR_CODE)
+ return INT_ERR_CODE;
+
return absdate - ORD_OFFSET;
- } else {
+ }
+ else {
MtoD_ym(ordinal + 1, &y, &m);
- if ((absdate = absdate_from_ymd(y, m, 1)) == INT_ERR_CODE) return INT_ERR_CODE;
+
+ if ((absdate = absdate_from_ymd(y, m, 1)) == INT_ERR_CODE)
+ return INT_ERR_CODE;
+
return absdate - 1 - ORD_OFFSET;
}
}
-static npy_int64 asfreq_MtoA(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- return asfreq_DtoA(asfreq_MtoD(ordinal, 'E', &NULL_AF_INFO), relation, af_info); }
+static i8
+asfreq_MtoA(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoA(asfreq_MtoD(ordinal, "E", &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_MtoQ(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoQ(asfreq_MtoD(ordinal, "E", &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_MtoW(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoW(asfreq_MtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_MtoB(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+
+ date_info dinfo;
+ i8 mtod = asfreq_MtoD(ordinal, relation, &NULL_AF_INFO) + ORD_OFFSET;
-static npy_int64 asfreq_MtoQ(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- return asfreq_DtoQ(asfreq_MtoD(ordinal, 'E', &NULL_AF_INFO), relation, af_info); }
+ if (dInfoCalc_SetFromAbsDate(&dinfo, mtod, GREGORIAN))
+ return INT_ERR_CODE;
-static npy_int64 asfreq_MtoW(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoW(asfreq_MtoD(ordinal, relation, &NULL_AF_INFO), relation, af_info); }
+ i8 (*f)(i8 absdate, i8 day_of_week);
+ f = !strcmp(relation, "S") ? DtoB_WeekendToMonday : DtoB_WeekendToFriday;
-static npy_int64 asfreq_MtoB(npy_int64 ordinal, char relation, asfreq_info *af_info) {
+ return f(dinfo.absdate, dinfo.day_of_week);
+}
- struct date_info dinfo;
- if (dInfoCalc_SetFromAbsDate(&dinfo,
- asfreq_MtoD(ordinal, relation, &NULL_AF_INFO) + ORD_OFFSET,
- GREGORIAN_CALENDAR)) return INT_ERR_CODE;
+static i8
+asfreq_MtoH(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoH(asfreq_MtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
- if (relation == 'S') { return DtoB_WeekendToMonday(dinfo.absdate, dinfo.day_of_week); }
- else { return DtoB_WeekendToFriday(dinfo.absdate, dinfo.day_of_week); }
+static i8
+asfreq_MtoT(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoT(asfreq_MtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
}
-static npy_int64 asfreq_MtoH(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoH(asfreq_MtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
-static npy_int64 asfreq_MtoT(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoT(asfreq_MtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
-static npy_int64 asfreq_MtoS(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoS(asfreq_MtoD(ordinal, relation, &NULL_AF_INFO), relation, &NULL_AF_INFO); }
+static i8
+asfreq_MtoS(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoS(asfreq_MtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
//************ FROM QUARTERLY ***************
-static void QtoD_ym(npy_int64 ordinal, int *y, int *m, asfreq_info *af_info) {
+static void
+QtoD_ym(i8 ordinal, i8 *y, i8 *m, asfreq_info *af_info)
+{
*y = floordiv(ordinal, 4) + BASE_YEAR;
*m = mod_compat(ordinal, 4) * 3 + 1;
if (af_info->from_q_year_end != 12) {
*m += af_info->from_q_year_end;
- if (*m > 12) { *m -= 12; }
- else { *y -= 1; }
+
+ if (*m > 12)
+ *m -= 12;
+ else
+ --*y;
}
}
-static npy_int64 asfreq_QtoD(npy_int64 ordinal, char relation, asfreq_info *af_info) {
-
- npy_int64 absdate;
- int y, m;
+static i8
+asfreq_QtoD(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ i8 absdate, y, m;
- if (relation == 'S') {
+ if (!strcmp(relation, "S")) {
QtoD_ym(ordinal, &y, &m, af_info);
- // printf("ordinal: %d, year: %d, month: %d\n", (int) ordinal, y, m);
- if ((absdate = absdate_from_ymd(y, m, 1)) == INT_ERR_CODE) return INT_ERR_CODE;
+
+ if ((absdate = absdate_from_ymd(y, m, 1)) == INT_ERR_CODE)
+ return INT_ERR_CODE;
+
return absdate - ORD_OFFSET;
} else {
- QtoD_ym(ordinal+1, &y, &m, af_info);
- /* printf("ordinal: %d, year: %d, month: %d\n", (int) ordinal, y, m); */
- if ((absdate = absdate_from_ymd(y, m, 1)) == INT_ERR_CODE) return INT_ERR_CODE;
+ QtoD_ym(ordinal + 1, &y, &m, af_info);
+
+ if ((absdate = absdate_from_ymd(y, m, 1)) == INT_ERR_CODE)
+ return INT_ERR_CODE;
+
return absdate - 1 - ORD_OFFSET;
}
}
-static npy_int64 asfreq_QtoQ(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoQ(asfreq_QtoD(ordinal, relation, af_info), relation, af_info); }
+static i8
+asfreq_QtoQ(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoQ(asfreq_QtoD(ordinal, relation, af_info), relation,
+ af_info);
+}
+
+static i8
+asfreq_QtoA(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoA(asfreq_QtoD(ordinal, relation, af_info), relation,
+ af_info);
+}
-static npy_int64 asfreq_QtoA(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- return asfreq_DtoA(asfreq_QtoD(ordinal, relation, af_info), relation, af_info); }
+static i8
+asfreq_QtoM(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoM(asfreq_QtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
-static npy_int64 asfreq_QtoM(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- return asfreq_DtoM(asfreq_QtoD(ordinal, relation, af_info), relation, &NULL_AF_INFO); }
+static i8
+asfreq_QtoW(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoW(asfreq_QtoD(ordinal, relation, af_info), relation,
+ af_info);
+}
-static npy_int64 asfreq_QtoW(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoW(asfreq_QtoD(ordinal, relation, af_info), relation, af_info); }
+static i8
+asfreq_QtoB(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ date_info dinfo;
+ i8 qtod = asfreq_QtoD(ordinal, relation, af_info) + ORD_OFFSET;
-static npy_int64 asfreq_QtoB(npy_int64 ordinal, char relation, asfreq_info *af_info) {
+ if (dInfoCalc_SetFromAbsDate(&dinfo, qtod, GREGORIAN))
+ return INT_ERR_CODE;
- struct date_info dinfo;
- if (dInfoCalc_SetFromAbsDate(&dinfo,
- asfreq_QtoD(ordinal, relation, af_info) + ORD_OFFSET,
- GREGORIAN_CALENDAR)) return INT_ERR_CODE;
+ i8 (*f)(i8 abdate, i8 day_of_week) = NULL;
+ f = !strcmp(relation, "S") ? DtoB_WeekendToMonday : DtoB_WeekendToFriday;
- if (relation == 'S') { return DtoB_WeekendToMonday(dinfo.absdate, dinfo.day_of_week); }
- else { return DtoB_WeekendToFriday(dinfo.absdate, dinfo.day_of_week); }
+ return f(dinfo.absdate, dinfo.day_of_week);
}
-static npy_int64 asfreq_QtoH(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoH(asfreq_QtoD(ordinal, relation, af_info), relation, &NULL_AF_INFO); }
-static npy_int64 asfreq_QtoT(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoT(asfreq_QtoD(ordinal, relation, af_info), relation, &NULL_AF_INFO); }
-static npy_int64 asfreq_QtoS(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoS(asfreq_QtoD(ordinal, relation, af_info), relation, &NULL_AF_INFO); }
+static i8
+asfreq_QtoH(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoH(asfreq_QtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_QtoT(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoT(asfreq_QtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_QtoS(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoS(asfreq_QtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
//************ FROM ANNUAL ***************
-static npy_int64 asfreq_AtoD(npy_int64 ordinal, char relation, asfreq_info *af_info) {
- npy_int64 absdate, final_adj;
- int year;
- int month = (af_info->from_a_year_end) % 12;
+static i8
+asfreq_AtoD(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ i8 absdate, final_adj, year;
+ i8 month = af_info->from_a_year_end % 12, ord = ordinal;
// start from 1970
- ordinal += BASE_YEAR;
+ ord += BASE_YEAR;
- if (month == 0) { month = 1; }
- else { month += 1; }
+ if (month == 0)
+ month = 1;
+ else
+ ++month;
- if (relation == 'S') {
- if (af_info->from_a_year_end == 12) {year = ordinal;}
- else {year = ordinal - 1;}
+ if (!strcmp(relation, "S")) {
+ year = af_info->from_a_year_end == 12 ? ord : ord - 1;
final_adj = 0;
} else {
- if (af_info->from_a_year_end == 12) {year = ordinal+1;}
- else {year = ordinal;}
+ if (af_info->from_a_year_end == 12)
+ year = ord + 1;
+ else
+ year = ord;
+
final_adj = -1;
}
+
absdate = absdate_from_ymd(year, month, 1);
- if (absdate == INT_ERR_CODE) {
- return INT_ERR_CODE;
- }
+
+ if (absdate == INT_ERR_CODE)
+ return INT_ERR_CODE;
+
return absdate + final_adj - ORD_OFFSET;
}
-static npy_int64 asfreq_AtoA(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoA(asfreq_AtoD(ordinal, relation, af_info), relation, af_info); }
+static i8
+asfreq_AtoA(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoA(asfreq_AtoD(ordinal, relation, af_info), relation,
+ af_info);
+}
+
+static i8
+asfreq_AtoQ(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoQ(asfreq_AtoD(ordinal, relation, af_info), relation,
+ af_info);
+}
+
+static i8
+asfreq_AtoM(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoM(asfreq_AtoD(ordinal, relation, af_info), relation,
+ af_info);
+}
-static npy_int64 asfreq_AtoQ(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoQ(asfreq_AtoD(ordinal, relation, af_info), relation, af_info); }
+static i8
+asfreq_AtoW(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoW(asfreq_AtoD(ordinal, relation, af_info), relation,
+ af_info);
+}
-static npy_int64 asfreq_AtoM(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoM(asfreq_AtoD(ordinal, relation, af_info), relation, af_info); }
+static i8
+asfreq_AtoB(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ date_info dinfo;
-static npy_int64 asfreq_AtoW(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoW(asfreq_AtoD(ordinal, relation, af_info), relation, af_info); }
+ i8 atob = asfreq_AtoD(ordinal, relation, af_info) + ORD_OFFSET;
-static npy_int64 asfreq_AtoB(npy_int64 ordinal, char relation, asfreq_info *af_info) {
+ if (dInfoCalc_SetFromAbsDate(&dinfo, atob, GREGORIAN))
+ return INT_ERR_CODE;
- struct date_info dinfo;
- if (dInfoCalc_SetFromAbsDate(&dinfo,
- asfreq_AtoD(ordinal, relation, af_info) + ORD_OFFSET,
- GREGORIAN_CALENDAR)) return INT_ERR_CODE;
+ i8 (*f)(i8 date, i8 day_of_week) = NULL;
+ f = !strcmp(relation, "S") ? DtoB_WeekendToMonday : DtoB_WeekendToFriday;
- if (relation == 'S') { return DtoB_WeekendToMonday(dinfo.absdate, dinfo.day_of_week); }
- else { return DtoB_WeekendToFriday(dinfo.absdate, dinfo.day_of_week); }
+ return f(dinfo.absdate, dinfo.day_of_week);
}
-static npy_int64 asfreq_AtoH(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoH(asfreq_AtoD(ordinal, relation, af_info), relation, &NULL_AF_INFO); }
-static npy_int64 asfreq_AtoT(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoT(asfreq_AtoD(ordinal, relation, af_info), relation, &NULL_AF_INFO); }
-static npy_int64 asfreq_AtoS(npy_int64 ordinal, char relation, asfreq_info *af_info)
- { return asfreq_DtoS(asfreq_AtoD(ordinal, relation, af_info), relation, &NULL_AF_INFO); }
+static i8
+asfreq_AtoH(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoH(asfreq_AtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
-static npy_int64 nofunc(npy_int64 ordinal, char relation, asfreq_info *af_info) { return INT_ERR_CODE; }
-static npy_int64 no_op(npy_int64 ordinal, char relation, asfreq_info *af_info) { return ordinal; }
+static i8
+asfreq_AtoT(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoT(asfreq_AtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_AtoS(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoS(asfreq_AtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
+
+
+static i8
+nofunc(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return INT_ERR_CODE;
+}
+
+static i8
+no_op(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return ordinal;
+}
// end of frequency specific conversion routines
-static int get_freq_group(int freq) { return (freq/1000)*1000; }
+static i8
+get_freq_group(i8 freq)
+{
+ return (freq / 1000) * 1000;
+}
-static int calc_a_year_end(int freq, int group) {
- int result = (freq - group) % 12;
- if (result == 0) {return 12;}
- else {return result;}
+static i8
+calc_a_year_end(i8 freq, i8 group)
+{
+ i8 result = (freq - group) % 12;
+ return !result ? 12 : result;
}
-static int calc_week_end(int freq, int group) {
+static i8
+calc_week_end(i8 freq, i8 group)
+{
return freq - group;
}
-void get_asfreq_info(int fromFreq, int toFreq, asfreq_info *af_info) {
- int fromGroup = get_freq_group(fromFreq);
- int toGroup = get_freq_group(toFreq);
+void
+get_asfreq_info(i8 fromFreq, i8 toFreq, asfreq_info *af_info)
+{
+ i8 fromGroup = get_freq_group(fromFreq);
+ i8 toGroup = get_freq_group(toFreq);
switch(fromGroup)
{
- case FR_WK: {
- af_info->from_week_end = calc_week_end(fromFreq, fromGroup);
- } break;
- case FR_ANN: {
- af_info->from_a_year_end = calc_a_year_end(fromFreq, fromGroup);
- } break;
- case FR_QTR: {
- af_info->from_q_year_end = calc_a_year_end(fromFreq, fromGroup);
- } break;
+ case FR_WK:
+ af_info->from_week_end = calc_week_end(fromFreq, fromGroup);
+ break;
+ case FR_ANN:
+ af_info->from_a_year_end = calc_a_year_end(fromFreq, fromGroup);
+ break;
+ case FR_QTR:
+ af_info->from_q_year_end = calc_a_year_end(fromFreq, fromGroup);
+ break;
}
switch(toGroup)
{
- case FR_WK: {
- af_info->to_week_end = calc_week_end(toFreq, toGroup);
- } break;
- case FR_ANN: {
- af_info->to_a_year_end = calc_a_year_end(toFreq, toGroup);
- } break;
- case FR_QTR: {
- af_info->to_q_year_end = calc_a_year_end(toFreq, toGroup);
- } break;
+ case FR_WK:
+ af_info->to_week_end = calc_week_end(toFreq, toGroup);
+ break;
+ case FR_ANN:
+ af_info->to_a_year_end = calc_a_year_end(toFreq, toGroup);
+ break;
+ case FR_QTR:
+ af_info->to_q_year_end = calc_a_year_end(toFreq, toGroup);
+ break;
}
}
-freq_conv_func get_asfreq_func(int fromFreq, int toFreq)
+static i8
+asfreq_UtoS(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return ordinal / US_PER_SECOND;
+}
+
+static i8
+asfreq_UtoD(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_StoD(asfreq_UtoS(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_UtoA(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoA(asfreq_UtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_UtoQ(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoQ(asfreq_UtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_UtoM(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoM(asfreq_UtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_UtoW(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoW(asfreq_UtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_UtoB(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoB(asfreq_UtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ af_info);
+}
+
+static i8
+asfreq_UtoH(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoH(asfreq_UtoD(ordinal, relation, &NULL_AF_INFO),
+ relation, &NULL_AF_INFO);
+}
+
+static i8
+asfreq_UtoT(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoT(asfreq_UtoD(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
+
+
+static i8
+asfreq_StoU(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_StoHIGHFREQ(ordinal, relation, US_PER_SECOND);
+}
+
+static i8
+asfreq_AtoU(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoU(asfreq_AtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_QtoU(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoU(asfreq_QtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_MtoU(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoU(asfreq_MtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_WtoU(i8 ordinal, const char* relation, asfreq_info *af_info)
{
- int fromGroup = get_freq_group(fromFreq);
- int toGroup = get_freq_group(toFreq);
+ return asfreq_DtoU(asfreq_WtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_BtoU(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_DtoU(asfreq_BtoD(ordinal, relation, af_info), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_TtoU(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_StoU(asfreq_TtoS(ordinal, relation, &NULL_AF_INFO), relation,
+ &NULL_AF_INFO);
+}
+
+static i8
+asfreq_HtoU(i8 ordinal, const char* relation, asfreq_info *af_info)
+{
+ return asfreq_TtoU(asfreq_HtoT(ordinal, relation, af_info), relation,
+ af_info);
+}
+
+freq_conv_func
+get_asfreq_func(i8 fromFreq, i8 toFreq)
+{
+ i8 fromGroup = get_freq_group(fromFreq);
+ i8 toGroup = get_freq_group(toFreq);
if (fromGroup == FR_UND) { fromGroup = FR_DAY; }
switch(fromGroup)
{
- case FR_ANN:
- switch(toGroup)
- {
- case FR_ANN: return &asfreq_AtoA;
- case FR_QTR: return &asfreq_AtoQ;
- case FR_MTH: return &asfreq_AtoM;
- case FR_WK: return &asfreq_AtoW;
- case FR_BUS: return &asfreq_AtoB;
- case FR_DAY: return &asfreq_AtoD;
- case FR_HR: return &asfreq_AtoH;
- case FR_MIN: return &asfreq_AtoT;
- case FR_SEC: return &asfreq_AtoS;
- default: return &nofunc;
- }
-
- case FR_QTR:
- switch(toGroup)
- {
- case FR_ANN: return &asfreq_QtoA;
- case FR_QTR: return &asfreq_QtoQ;
- case FR_MTH: return &asfreq_QtoM;
- case FR_WK: return &asfreq_QtoW;
- case FR_BUS: return &asfreq_QtoB;
- case FR_DAY: return &asfreq_QtoD;
- case FR_HR: return &asfreq_QtoH;
- case FR_MIN: return &asfreq_QtoT;
- case FR_SEC: return &asfreq_QtoS;
- default: return &nofunc;
- }
-
- case FR_MTH:
- switch(toGroup)
- {
- case FR_ANN: return &asfreq_MtoA;
- case FR_QTR: return &asfreq_MtoQ;
- case FR_MTH: return &no_op;
- case FR_WK: return &asfreq_MtoW;
- case FR_BUS: return &asfreq_MtoB;
- case FR_DAY: return &asfreq_MtoD;
- case FR_HR: return &asfreq_MtoH;
- case FR_MIN: return &asfreq_MtoT;
- case FR_SEC: return &asfreq_MtoS;
- default: return &nofunc;
- }
-
- case FR_WK:
- switch(toGroup)
- {
- case FR_ANN: return &asfreq_WtoA;
- case FR_QTR: return &asfreq_WtoQ;
- case FR_MTH: return &asfreq_WtoM;
- case FR_WK: return &asfreq_WtoW;
- case FR_BUS: return &asfreq_WtoB;
- case FR_DAY: return &asfreq_WtoD;
- case FR_HR: return &asfreq_WtoH;
- case FR_MIN: return &asfreq_WtoT;
- case FR_SEC: return &asfreq_WtoS;
- default: return &nofunc;
- }
-
- case FR_BUS:
- switch(toGroup)
- {
- case FR_ANN: return &asfreq_BtoA;
- case FR_QTR: return &asfreq_BtoQ;
- case FR_MTH: return &asfreq_BtoM;
- case FR_WK: return &asfreq_BtoW;
- case FR_DAY: return &asfreq_BtoD;
- case FR_BUS: return &no_op;
- case FR_HR: return &asfreq_BtoH;
- case FR_MIN: return &asfreq_BtoT;
- case FR_SEC: return &asfreq_BtoS;
- default: return &nofunc;
- }
-
- case FR_DAY:
- switch(toGroup)
- {
- case FR_ANN: return &asfreq_DtoA;
- case FR_QTR: return &asfreq_DtoQ;
- case FR_MTH: return &asfreq_DtoM;
- case FR_WK: return &asfreq_DtoW;
- case FR_BUS: return &asfreq_DtoB;
- case FR_DAY: return &asfreq_DtoD;
- case FR_HR: return &asfreq_DtoH;
- case FR_MIN: return &asfreq_DtoT;
- case FR_SEC: return &asfreq_DtoS;
- default: return &nofunc;
- }
-
- case FR_HR:
- switch(toGroup)
- {
- case FR_ANN: return &asfreq_HtoA;
- case FR_QTR: return &asfreq_HtoQ;
- case FR_MTH: return &asfreq_HtoM;
- case FR_WK: return &asfreq_HtoW;
- case FR_BUS: return &asfreq_HtoB;
- case FR_DAY: return &asfreq_HtoD;
- case FR_HR: return &no_op;
- case FR_MIN: return &asfreq_HtoT;
- case FR_SEC: return &asfreq_HtoS;
- default: return &nofunc;
- }
-
- case FR_MIN:
- switch(toGroup)
- {
- case FR_ANN: return &asfreq_TtoA;
- case FR_QTR: return &asfreq_TtoQ;
- case FR_MTH: return &asfreq_TtoM;
- case FR_WK: return &asfreq_TtoW;
- case FR_BUS: return &asfreq_TtoB;
- case FR_DAY: return &asfreq_TtoD;
- case FR_HR: return &asfreq_TtoH;
- case FR_MIN: return &no_op;
- case FR_SEC: return &asfreq_TtoS;
- default: return &nofunc;
- }
-
- case FR_SEC:
- switch(toGroup)
- {
- case FR_ANN: return &asfreq_StoA;
- case FR_QTR: return &asfreq_StoQ;
- case FR_MTH: return &asfreq_StoM;
- case FR_WK: return &asfreq_StoW;
- case FR_BUS: return &asfreq_StoB;
- case FR_DAY: return &asfreq_StoD;
- case FR_HR: return &asfreq_StoH;
- case FR_MIN: return &asfreq_StoT;
- case FR_SEC: return &no_op;
- default: return &nofunc;
- }
+ case FR_ANN:
+ switch(toGroup)
+ {
+ case FR_ANN: return &asfreq_AtoA;
+ case FR_QTR: return &asfreq_AtoQ;
+ case FR_MTH: return &asfreq_AtoM;
+ case FR_WK: return &asfreq_AtoW;
+ case FR_BUS: return &asfreq_AtoB;
+ case FR_DAY: return &asfreq_AtoD;
+ case FR_HR: return &asfreq_AtoH;
+ case FR_MIN: return &asfreq_AtoT;
+ case FR_SEC: return &asfreq_AtoS;
+ case FR_USEC: return &asfreq_AtoU;
+ default: return &nofunc;
+ }
+
+ case FR_QTR:
+ switch(toGroup)
+ {
+ case FR_ANN: return &asfreq_QtoA;
+ case FR_QTR: return &asfreq_QtoQ;
+ case FR_MTH: return &asfreq_QtoM;
+ case FR_WK: return &asfreq_QtoW;
+ case FR_BUS: return &asfreq_QtoB;
+ case FR_DAY: return &asfreq_QtoD;
+ case FR_HR: return &asfreq_QtoH;
+ case FR_MIN: return &asfreq_QtoT;
+ case FR_SEC: return &asfreq_QtoS;
+ case FR_USEC: return &asfreq_QtoU;
default: return &nofunc;
+ }
+
+ case FR_MTH:
+ switch(toGroup)
+ {
+ case FR_ANN: return &asfreq_MtoA;
+ case FR_QTR: return &asfreq_MtoQ;
+ case FR_MTH: return &no_op;
+ case FR_WK: return &asfreq_MtoW;
+ case FR_BUS: return &asfreq_MtoB;
+ case FR_DAY: return &asfreq_MtoD;
+ case FR_HR: return &asfreq_MtoH;
+ case FR_MIN: return &asfreq_MtoT;
+ case FR_SEC: return &asfreq_MtoS;
+ case FR_USEC: return &asfreq_MtoU;
+ default: return &nofunc;
+ }
+
+ case FR_WK:
+ switch(toGroup)
+ {
+ case FR_ANN: return &asfreq_WtoA;
+ case FR_QTR: return &asfreq_WtoQ;
+ case FR_MTH: return &asfreq_WtoM;
+ case FR_WK: return &asfreq_WtoW;
+ case FR_BUS: return &asfreq_WtoB;
+ case FR_DAY: return &asfreq_WtoD;
+ case FR_HR: return &asfreq_WtoH;
+ case FR_MIN: return &asfreq_WtoT;
+ case FR_SEC: return &asfreq_WtoS;
+ case FR_USEC: return &asfreq_WtoU;
+ default: return &nofunc;
+ }
+
+ case FR_BUS:
+ switch(toGroup)
+ {
+ case FR_ANN: return &asfreq_BtoA;
+ case FR_QTR: return &asfreq_BtoQ;
+ case FR_MTH: return &asfreq_BtoM;
+ case FR_WK: return &asfreq_BtoW;
+ case FR_DAY: return &asfreq_BtoD;
+ case FR_BUS: return &no_op;
+ case FR_HR: return &asfreq_BtoH;
+ case FR_MIN: return &asfreq_BtoT;
+ case FR_SEC: return &asfreq_BtoS;
+ case FR_USEC: return &asfreq_BtoU;
+ default: return &nofunc;
+ }
+
+ case FR_DAY:
+ switch(toGroup)
+ {
+ case FR_ANN: return &asfreq_DtoA;
+ case FR_QTR: return &asfreq_DtoQ;
+ case FR_MTH: return &asfreq_DtoM;
+ case FR_WK: return &asfreq_DtoW;
+ case FR_BUS: return &asfreq_DtoB;
+ case FR_DAY: return &asfreq_DtoD;
+ case FR_HR: return &asfreq_DtoH;
+ case FR_MIN: return &asfreq_DtoT;
+ case FR_SEC: return &asfreq_DtoS;
+ case FR_USEC: return &asfreq_DtoU;
+ default: return &nofunc;
+ }
+
+ case FR_HR:
+ switch(toGroup)
+ {
+ case FR_ANN: return &asfreq_HtoA;
+ case FR_QTR: return &asfreq_HtoQ;
+ case FR_MTH: return &asfreq_HtoM;
+ case FR_WK: return &asfreq_HtoW;
+ case FR_BUS: return &asfreq_HtoB;
+ case FR_DAY: return &asfreq_HtoD;
+ case FR_HR: return &no_op;
+ case FR_MIN: return &asfreq_HtoT;
+ case FR_SEC: return &asfreq_HtoS;
+ case FR_USEC: return &asfreq_HtoU;
+ default: return &nofunc;
+ }
+
+ case FR_MIN:
+ switch(toGroup)
+ {
+ case FR_ANN: return &asfreq_TtoA;
+ case FR_QTR: return &asfreq_TtoQ;
+ case FR_MTH: return &asfreq_TtoM;
+ case FR_WK: return &asfreq_TtoW;
+ case FR_BUS: return &asfreq_TtoB;
+ case FR_DAY: return &asfreq_TtoD;
+ case FR_HR: return &asfreq_TtoH;
+ case FR_MIN: return &no_op;
+ case FR_SEC: return &asfreq_TtoS;
+ case FR_USEC: return &asfreq_TtoU;
+ default: return &nofunc;
+ }
+
+ case FR_SEC:
+ switch(toGroup)
+ {
+ case FR_ANN: return &asfreq_StoA;
+ case FR_QTR: return &asfreq_StoQ;
+ case FR_MTH: return &asfreq_StoM;
+ case FR_WK: return &asfreq_StoW;
+ case FR_BUS: return &asfreq_StoB;
+ case FR_DAY: return &asfreq_StoD;
+ case FR_HR: return &asfreq_StoH;
+ case FR_MIN: return &asfreq_StoT;
+ case FR_SEC: return &no_op;
+ case FR_USEC: return &asfreq_StoU;
+ default: return &nofunc;
+ }
+
+ case FR_USEC:
+ switch(toGroup)
+ {
+ case FR_ANN: return &asfreq_UtoA;
+ case FR_QTR: return &asfreq_UtoQ;
+ case FR_MTH: return &asfreq_UtoM;
+ case FR_WK: return &asfreq_UtoW;
+ case FR_BUS: return &asfreq_UtoB;
+ case FR_DAY: return &asfreq_UtoD;
+ case FR_HR: return &asfreq_UtoH;
+ case FR_MIN: return &asfreq_UtoT;
+ case FR_SEC: return &asfreq_UtoS;
+ case FR_USEC: return &no_op;
+ default: return &nofunc;
+ }
+
+ default: return &nofunc;
}
}
-double get_abs_time(int freq, npy_int64 daily_ord, npy_int64 ordinal) {
+static i8
+get_abs_time(i8 freq, i8 daily_ord, i8 ordinal)
+{
+
+ i8 start_ord, per_day, unit;
- npy_int64 start_ord, per_day, unit;
- switch(freq)
+ switch (freq)
{
- case FR_HR:
- per_day = 24;
- unit = 60 * 60;
- break;
- case FR_MIN:
- per_day = 24*60;
- unit = 60;
- break;
- case FR_SEC:
- per_day = 24*60*60;
- unit = 1;
- break;
- default:
- return 0; // 24*60*60 - 1;
+ case FR_HR:
+ per_day = 24L;
+ unit = US_PER_HOUR;
+ break;
+
+ case FR_MIN:
+ per_day = 24L * 60L;
+ unit = US_PER_MINUTE;
+ break;
+
+ case FR_SEC:
+ per_day = 24L * 60L * 60L;
+ unit = US_PER_SECOND;
+ break;
+
+ case FR_USEC:
+ per_day = US_PER_DAY;
+ unit = 1L;
+ break;
+
+ default:
+ return 0L;
}
- start_ord = asfreq_DtoHIGHFREQ(daily_ord, 'S', per_day);
- /* printf("start_ord: %d\n", start_ord); */
- return (double) ( unit * (ordinal - start_ord));
- /* if (ordinal >= 0) { */
- /* } */
- /* else { */
- /* return (double) (unit * mod_compat(ordinal - start_ord, per_day)); */
- /* } */
+ start_ord = asfreq_DtoHIGHFREQ(daily_ord, "S", per_day);
+ return unit * (ordinal - start_ord);
}
/* Sets the time part of the DateTime object. */
-static
-int dInfoCalc_SetFromAbsTime(struct date_info *dinfo,
- double abstime)
+static i8
+dInfoCalc_SetFromAbsTime(date_info *dinfo, i8 abstime)
{
- int inttime;
- int hour,minute;
- double second;
-
- inttime = (int)abstime;
- hour = inttime / 3600;
- minute = (inttime % 3600) / 60;
- second = abstime - (double)(hour*3600 + minute*60);
+ i8 hour = abstime / US_PER_HOUR;
+ i8 minute = (abstime % US_PER_HOUR) / US_PER_MINUTE;
+ i8 second = (abstime - (hour * US_PER_HOUR + minute * US_PER_MINUTE)) /
+ US_PER_SECOND;
+ i8 microsecond = abstime - (hour * US_PER_HOUR + minute * US_PER_MINUTE +
+ second * US_PER_SECOND);
dinfo->hour = hour;
dinfo->minute = minute;
dinfo->second = second;
+ dinfo->microsecond = microsecond;
dinfo->abstime = abstime;
@@ -971,29 +1502,30 @@ int dInfoCalc_SetFromAbsTime(struct date_info *dinfo,
}
/* Set the instance's value using the given date and time. calendar
- may be set to the flags: GREGORIAN_CALENDAR, JULIAN_CALENDAR to
+ may be set to the flags: GREGORIAN, JULIAN to
indicate the calendar to be used. */
-static
-int dInfoCalc_SetFromAbsDateTime(struct date_info *dinfo,
- npy_int64 absdate,
- double abstime,
- int calendar)
+static i8
+dInfoCalc_SetFromAbsDateTime(date_info *dinfo, i8 absdate, i8 abstime,
+ enum CalendarType calendar)
{
/* Bounds check */
- Py_AssertWithArg(abstime >= 0.0 && abstime <= SECONDS_PER_DAY,
- PyExc_ValueError,
- "abstime out of range (0.0 - 86400.0): %f",
- abstime);
+    Py_AssertWithArg(abstime >= 0 && abstime <= US_PER_DAY,
+                     PyExc_ValueError,
+                     "abstime out of range (0, 86400000000): %lld",
+                     (long long) abstime);
/* Calculate the date */
- if (dInfoCalc_SetFromAbsDate(dinfo, absdate, calendar)) goto onError;
+ if (dInfoCalc_SetFromAbsDate(dinfo, absdate, calendar))
+ goto onError;
/* Calculate the time */
- if (dInfoCalc_SetFromAbsTime(dinfo, abstime)) goto onError;
+ if (dInfoCalc_SetFromAbsTime(dinfo, abstime))
+ goto onError;
return 0;
- onError:
+
+onError:
return INT_ERR_CODE;
}
@@ -1001,48 +1533,59 @@ int dInfoCalc_SetFromAbsDateTime(struct date_info *dinfo,
* New pandas API-helper code, to expose to cython
* ------------------------------------------------------------------*/
-npy_int64 asfreq(npy_int64 period_ordinal, int freq1, int freq2, char relation)
+i8
+asfreq(i8 period_ordinal, i8 freq1, i8 freq2, const char* relation)
{
- npy_int64 val;
- freq_conv_func func;
- asfreq_info finfo;
+ freq_conv_func func = get_asfreq_func(freq1, freq2);
- func = get_asfreq_func(freq1, freq2);
+ asfreq_info finfo;
get_asfreq_info(freq1, freq2, &finfo);
- val = (*func)(period_ordinal, relation, &finfo);
+ i8 val = func(period_ordinal, relation, &finfo);
if (val == INT_ERR_CODE) {
- // Py_Error(PyExc_ValueError, "Unable to convert to desired frequency.");
goto onError;
}
+
return val;
+
onError:
return INT_ERR_CODE;
}
/* generate an ordinal in period space */
-npy_int64 get_period_ordinal(int year, int month, int day,
- int hour, int minute, int second,
- int freq)
+i8
+get_period_ordinal(i8 year, i8 month, i8 day, i8 hour, i8 minute, i8 second,
+ i8 microsecond, i8 freq)
{
- npy_int64 absdays, delta;
- npy_int64 weeks, days;
- npy_int64 ordinal, day_adj;
- int freq_group, fmonth, mdiff;
- freq_group = get_freq_group(freq);
+ i8 absdays, delta, weeks, days, ordinal, day_adj, fmonth, mdiff;
+ i8 freq_group = get_freq_group(freq);
+
+ if (freq == FR_USEC) {
+ absdays = absdate_from_ymd(year, month, day);
+ delta = absdays - ORD_OFFSET;
+ return delta * US_PER_DAY + hour * US_PER_HOUR +
+ minute * US_PER_MINUTE + second * US_PER_SECOND + microsecond;
+ }
+
if (freq == FR_SEC) {
absdays = absdate_from_ymd(year, month, day);
delta = (absdays - ORD_OFFSET);
- return (npy_int64)(delta*86400 + hour*3600 + minute*60 + second);
+ return delta*86400 + hour*3600 + minute*60 + second;
}
if (freq == FR_MIN) {
absdays = absdate_from_ymd(year, month, day);
delta = (absdays - ORD_OFFSET);
- return (npy_int64)(delta*1440 + hour*60 + minute);
+ return delta*1440 + hour*60 + minute;
}
if (freq == FR_HR) {
@@ -1051,32 +1594,32 @@ npy_int64 get_period_ordinal(int year, int month, int day,
goto onError;
}
delta = (absdays - ORD_OFFSET);
- return (npy_int64)(delta*24 + hour);
+ return delta*24 + hour;
}
if (freq == FR_DAY)
{
- return (npy_int64) (absdate_from_ymd(year, month, day) - ORD_OFFSET);
+ return absdate_from_ymd(year, month, day) - ORD_OFFSET;
}
if (freq == FR_UND)
{
- return (npy_int64) (absdate_from_ymd(year, month, day) - ORD_OFFSET);
+ return absdate_from_ymd(year, month, day) - ORD_OFFSET;
}
if (freq == FR_BUS)
{
- if((days = absdate_from_ymd(year, month, day)) == INT_ERR_CODE)
+ if ((days = absdate_from_ymd(year, month, day)) == INT_ERR_CODE)
{
goto onError;
}
weeks = days / 7;
- return (npy_int64)(days - weeks * 2) - BDAY_OFFSET;
+ return (days - weeks * 2) - BDAY_OFFSET;
}
if (freq_group == FR_WK)
{
- if((ordinal = (npy_int64)absdate_from_ymd(year, month, day)) == INT_ERR_CODE)
+ if ((ordinal = absdate_from_ymd(year, month, day)) == INT_ERR_CODE)
{
goto onError;
}
@@ -1091,26 +1634,26 @@ npy_int64 get_period_ordinal(int year, int month, int day,
if (freq_group == FR_QTR)
{
- fmonth = freq - FR_QTR;
- if (fmonth == 0) fmonth = 12;
+ fmonth = freq - FR_QTR;
+ if (fmonth == 0) fmonth = 12;
- mdiff = month - fmonth;
- if (mdiff < 0) mdiff += 12;
- if (month >= fmonth) mdiff += 12;
+ mdiff = month - fmonth;
+ if (mdiff < 0) mdiff += 12;
+ if (month >= fmonth) mdiff += 12;
- return (year - BASE_YEAR) * 4 + (mdiff - 1) / 3;
+ return (year - BASE_YEAR) * 4 + (mdiff - 1) / 3;
}
if (freq_group == FR_ANN)
{
- fmonth = freq - FR_ANN;
- if (fmonth == 0) fmonth = 12;
- if (month <= fmonth) {
- return year - BASE_YEAR;
- }
- else {
- return year - BASE_YEAR + 1;
- }
+ fmonth = freq - FR_ANN;
+ if (fmonth == 0) fmonth = 12;
+ if (month <= fmonth) {
+ return year - BASE_YEAR;
+ }
+ else {
+ return year - BASE_YEAR + 1;
+ }
}
Py_Error(PyExc_RuntimeError, "Unable to generate frequency ordinal");
@@ -1120,40 +1663,46 @@ npy_int64 get_period_ordinal(int year, int month, int day,
}
/*
- Returns the proleptic Gregorian ordinal of the date, as an integer.
- This corresponds to the number of days since Jan., 1st, 1AD.
- When the instance has a frequency less than daily, the proleptic date
- is calculated for the last day of the period.
+  Returns the proleptic Gregorian ordinal of the date, as an integer.
+  This corresponds to the number of days since Jan. 1st, 1 AD.
+  When the instance has a frequency less than daily, the proleptic date
+  is calculated for the last day of the period.
*/
-npy_int64 get_python_ordinal(npy_int64 period_ordinal, int freq)
+i8
+get_python_ordinal(i8 period_ordinal, i8 freq)
{
- asfreq_info af_info;
- npy_int64 (*toDaily)(npy_int64, char, asfreq_info*);
-
if (freq == FR_DAY)
return period_ordinal + ORD_OFFSET;
- toDaily = get_asfreq_func(freq, FR_DAY);
+ i8 (*toDaily)(i8, const char*, asfreq_info*) = get_asfreq_func(freq,
+ FR_DAY);
+ asfreq_info af_info;
+
get_asfreq_info(freq, FR_DAY, &af_info);
- return toDaily(period_ordinal, 'E', &af_info) + ORD_OFFSET;
+
+ return toDaily(period_ordinal, "E", &af_info) + ORD_OFFSET;
}
-char *str_replace(const char *s, const char *old, const char *new) {
- char *ret;
- int i, count = 0;
+char*
+str_replace(const char *s, const char *old, const char *new)
+{
+ char *ret = NULL;
+ i8 i, count = 0;
size_t newlen = strlen(new);
size_t oldlen = strlen(old);
for (i = 0; s[i] != '\0'; i++) {
if (strstr(&s[i], old) == &s[i]) {
- count++;
- i += oldlen - 1;
+ ++count;
+ i += oldlen - 1;
}
}
ret = PyArray_malloc(i + 1 + count * (newlen - oldlen));
- if (ret == NULL) {return (char *)PyErr_NoMemory();}
+ if (ret == NULL)
+ return (char*) PyErr_NoMemory();
+
i = 0;
while (*s) {
@@ -1165,6 +1714,7 @@ char *str_replace(const char *s, const char *old, const char *new) {
ret[i++] = *s++;
}
}
+
ret[i] = '\0';
return ret;
@@ -1173,84 +1723,86 @@ char *str_replace(const char *s, const char *old, const char *new) {
// function to generate a nice string representation of the period
// object, originally from DateObject_strftime
-char* c_strftime(struct date_info *tmp, char *fmt) {
+char*
+c_strftime(date_info *tmp, char *fmt)
+{
struct tm c_date;
- char* result;
- struct date_info dinfo = *tmp;
- int result_len = strlen(fmt) + 50;
+ date_info dinfo = *tmp;
- c_date.tm_sec = (int)dinfo.second;
+ size_t result_len = strlen(fmt) + 50L;
+
+ c_date.tm_sec = dinfo.second;
c_date.tm_min = dinfo.minute;
c_date.tm_hour = dinfo.hour;
c_date.tm_mday = dinfo.day;
- c_date.tm_mon = dinfo.month - 1;
- c_date.tm_year = dinfo.year - 1900;
- c_date.tm_wday = (dinfo.day_of_week + 1) % 7;
- c_date.tm_yday = dinfo.day_of_year - 1;
- c_date.tm_isdst = -1;
-
- result = malloc(result_len * sizeof(char));
-
+ c_date.tm_mon = dinfo.month - 1L;
+ c_date.tm_year = dinfo.year - 1900L;
+ c_date.tm_wday = (dinfo.day_of_week + 1L) % 7L;
+ c_date.tm_yday = dinfo.day_of_year - 1L;
+ c_date.tm_isdst = -1L;
+
+ // this must be freed by the caller (right now this is in cython)
+ char* result = malloc(result_len * sizeof(char));
strftime(result, result_len, fmt, &c_date);
-
return result;
}
-int get_yq(npy_int64 ordinal, int freq, int *quarter, int *year) {
+
+i8
+get_yq(i8 ordinal, i8 freq, i8 *quarter, i8 *year)
+{
asfreq_info af_info;
- int qtr_freq;
- npy_int64 daily_ord;
- npy_int64 (*toDaily)(npy_int64, char, asfreq_info*) = NULL;
+ i8 qtr_freq, daily_ord;
+ i8 (*toDaily)(i8, const char*, asfreq_info*) = get_asfreq_func(freq, FR_DAY);
- toDaily = get_asfreq_func(freq, FR_DAY);
get_asfreq_info(freq, FR_DAY, &af_info);
- daily_ord = toDaily(ordinal, 'E', &af_info);
+ daily_ord = toDaily(ordinal, "E", &af_info);
+
+ qtr_freq = get_freq_group(freq) == FR_QTR ? freq : FR_QTR;
- if (get_freq_group(freq) == FR_QTR) {
- qtr_freq = freq;
- } else { qtr_freq = FR_QTR; }
get_asfreq_info(FR_DAY, qtr_freq, &af_info);
- if(DtoQ_yq(daily_ord, &af_info, year, quarter) == INT_ERR_CODE)
+ if (DtoQ_yq(daily_ord, &af_info, year, quarter) == INT_ERR_CODE)
return -1;
return 0;
}
-
-
-
-static int _quarter_year(npy_int64 ordinal, int freq, int *year, int *quarter) {
+static i8
+_quarter_year(i8 ordinal, i8 freq, i8 *year, i8 *quarter)
+{
asfreq_info af_info;
- int qtr_freq;
-
- ordinal = get_python_ordinal(ordinal, freq) - ORD_OFFSET;
- if (get_freq_group(freq) == FR_QTR)
- qtr_freq = freq;
- else
- qtr_freq = FR_QTR;
+ i8 ord = get_python_ordinal(ordinal, freq) - ORD_OFFSET;
+ i8 qtr_freq = get_freq_group(freq) == FR_QTR ? freq : FR_QTR;
get_asfreq_info(FR_DAY, qtr_freq, &af_info);
- if (DtoQ_yq(ordinal, &af_info, year, quarter) == INT_ERR_CODE)
+ if (DtoQ_yq(ord, &af_info, year, quarter) == INT_ERR_CODE)
return INT_ERR_CODE;
- if ((qtr_freq % 1000) > 12)
+ if (qtr_freq % 1000 > 12)
*year -= 1;
return 0;
}
-static int _ISOWeek(struct date_info *dinfo)
+
+static i8
+_ISOWeek(date_info *dinfo)
{
- int week;
+ i8 week;
/* Estimate */
- week = (dinfo->day_of_year-1) - dinfo->day_of_week + 3;
- if (week >= 0) week = week / 7 + 1;
+ week = (dinfo->day_of_year - 1) - dinfo->day_of_week + 3;
+
+ if (week >= 0) {
+ week /= 7;
+ ++week;
+ }
+
/* Verify */
if (week < 0) {
@@ -1261,8 +1813,8 @@ static int _ISOWeek(struct date_info *dinfo)
else
week = 52;
} else if (week == 53) {
- /* Check if the week belongs to year or year+1 */
- if (31-dinfo->day + dinfo->day_of_week < 3) {
+ /* Check if the week belongs to year or year+1 */
+ if (31 - dinfo->day + dinfo->day_of_week < 3) {
week = 1;
}
}
@@ -1270,102 +1822,176 @@ static int _ISOWeek(struct date_info *dinfo)
return week;
}
-int get_date_info(npy_int64 ordinal, int freq, struct date_info *dinfo)
+
+i8
+get_date_info(i8 ordinal, i8 freq, date_info *dinfo)
{
- npy_int64 absdate = get_python_ordinal(ordinal, freq);
- /* printf("freq: %d, absdate: %d\n", freq, (int) absdate); */
- double abstime = get_abs_time(freq, absdate - ORD_OFFSET, ordinal);
- if (abstime < 0) {
- abstime += 86400;
- absdate -= 1;
- }
+ i8 absdate = get_python_ordinal(ordinal, freq);
+ i8 abstime = get_abs_time(freq, absdate - ORD_OFFSET, ordinal);
+
+ if (abstime < 0) {
+ // add a day's worth of us to time
+ abstime += US_PER_DAY;
+
+ // subtract a day from the date
+ --absdate;
+ }
- if(dInfoCalc_SetFromAbsDateTime(dinfo, absdate,
- abstime, GREGORIAN_CALENDAR))
+ if (dInfoCalc_SetFromAbsDateTime(dinfo, absdate, abstime, GREGORIAN))
return INT_ERR_CODE;
return 0;
}
-int pyear(npy_int64 ordinal, int freq) {
- struct date_info dinfo;
+i8
+pyear(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
get_date_info(ordinal, freq, &dinfo);
return dinfo.year;
}
-int pqyear(npy_int64 ordinal, int freq) {
- int year, quarter;
- if( _quarter_year(ordinal, freq, &year, &quarter) == INT_ERR_CODE)
+i8
+pqyear(i8 ordinal, i8 freq)
+{
+ i8 year, quarter;
+ if (_quarter_year(ordinal, freq, &year, &quarter) == INT_ERR_CODE)
return INT_ERR_CODE;
return year;
}
-int pquarter(npy_int64 ordinal, int freq) {
- int year, quarter;
- if(_quarter_year(ordinal, freq, &year, &quarter) == INT_ERR_CODE)
+i8
+pquarter(i8 ordinal, i8 freq)
+{
+ i8 year, quarter;
+ if (_quarter_year(ordinal, freq, &year, &quarter) == INT_ERR_CODE)
return INT_ERR_CODE;
return quarter;
}
-int pmonth(npy_int64 ordinal, int freq) {
- struct date_info dinfo;
- if(get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+i8
+pmonth(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
return INT_ERR_CODE;
return dinfo.month;
}
-int pday(npy_int64 ordinal, int freq) {
- struct date_info dinfo;
- if(get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+i8
+pday(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
return INT_ERR_CODE;
return dinfo.day;
}
-int pweekday(npy_int64 ordinal, int freq) {
- struct date_info dinfo;
- if(get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+i8
+pweekday(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
return INT_ERR_CODE;
return dinfo.day_of_week;
}
-int pday_of_week(npy_int64 ordinal, int freq) {
- struct date_info dinfo;
- if(get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+i8
+pday_of_week(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
return INT_ERR_CODE;
return dinfo.day_of_week;
}
-int pday_of_year(npy_int64 ordinal, int freq) {
- struct date_info dinfo;
- if(get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+i8
+pday_of_year(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
return INT_ERR_CODE;
return dinfo.day_of_year;
}
-int pweek(npy_int64 ordinal, int freq) {
- struct date_info dinfo;
- if(get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+i8
+pweek(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
return INT_ERR_CODE;
return _ISOWeek(&dinfo);
}
-int phour(npy_int64 ordinal, int freq) {
- struct date_info dinfo;
- if(get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+i8
+phour(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
return INT_ERR_CODE;
return dinfo.hour;
}
-int pminute(npy_int64 ordinal, int freq) {
- struct date_info dinfo;
- if(get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+i8
+pminute(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
return INT_ERR_CODE;
return dinfo.minute;
}
-int psecond(npy_int64 ordinal, int freq) {
- struct date_info dinfo;
- if(get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+i8
+psecond(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+ return INT_ERR_CODE;
+ return dinfo.second;
+}
+
+i8
+pmicrosecond(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+ return INT_ERR_CODE;
+ return dinfo.microsecond;
+}
+
+i8
+pnanosecond(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+ return INT_ERR_CODE;
+ return dinfo.nanosecond;
+}
+
+
+i8
+ppicosecond(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+ return INT_ERR_CODE;
+ return dinfo.picosecond;
+}
+
+i8
+pfemtosecond(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
+ return INT_ERR_CODE;
+ return dinfo.femtosecond;
+}
+
+i8
+pattosecond(i8 ordinal, i8 freq)
+{
+ date_info dinfo;
+ if (get_date_info(ordinal, freq, &dinfo) == INT_ERR_CODE)
return INT_ERR_CODE;
- return (int)dinfo.second;
+ return dinfo.attosecond;
}
diff --git a/pandas/src/period.h b/pandas/src/period.h
index 4ba92bf8fde41..867daf597dbca 100644
--- a/pandas/src/period.h
+++ b/pandas/src/period.h
@@ -12,148 +12,215 @@
#include "headers/stdint.h"
#include "limits.h"
+
/*
* declarations from period here
*/
-#define GREGORIAN_CALENDAR 0
-#define JULIAN_CALENDAR 1
+typedef int64_t i8;
+
+enum CalendarType
+{
+ GREGORIAN,
+ JULIAN
+};
+
+static const i8 SECONDS_PER_DAY = 86400;
-#define SECONDS_PER_DAY ((double) 86400.0)
+#define Py_AssertWithArg(x, errortype, errorstr, a1) \
+ { \
+ if (!((x))) { \
+ PyErr_Format((errortype), (errorstr), (a1)); \
+ goto onError; \
+ } \
+ }
+
+#define Py_Error(errortype, errorstr) \
+ { \
+ PyErr_SetString((errortype), (errorstr)); \
+ goto onError; \
+ }
-#define Py_AssertWithArg(x,errortype,errorstr,a1) {if (!(x)) {PyErr_Format(errortype,errorstr,a1);goto onError;}}
-#define Py_Error(errortype,errorstr) {PyErr_SetString(errortype,errorstr);goto onError;}
/*** FREQUENCY CONSTANTS ***/
// HIGHFREQ_ORIG is the datetime ordinal from which to begin the second
// frequency ordinal sequence
-// typedef int64_t npy_int64;
+// typedef int64_t i8;
// begins second ordinal at 1/1/1970 unix epoch
// #define HIGHFREQ_ORIG 62135683200LL
-#define BASE_YEAR 1970
-#define ORD_OFFSET 719163LL // days until 1970-01-01
-#define BDAY_OFFSET 513689LL // days until 1970-01-01
-#define WEEK_OFFSET 102737LL
-#define HIGHFREQ_ORIG 0 // ORD_OFFSET * 86400LL // days until 1970-01-01
-
-#define FR_ANN 1000 /* Annual */
-#define FR_ANNDEC FR_ANN /* Annual - December year end*/
-#define FR_ANNJAN 1001 /* Annual - January year end*/
-#define FR_ANNFEB 1002 /* Annual - February year end*/
-#define FR_ANNMAR 1003 /* Annual - March year end*/
-#define FR_ANNAPR 1004 /* Annual - April year end*/
-#define FR_ANNMAY 1005 /* Annual - May year end*/
-#define FR_ANNJUN 1006 /* Annual - June year end*/
-#define FR_ANNJUL 1007 /* Annual - July year end*/
-#define FR_ANNAUG 1008 /* Annual - August year end*/
-#define FR_ANNSEP 1009 /* Annual - September year end*/
-#define FR_ANNOCT 1010 /* Annual - October year end*/
-#define FR_ANNNOV 1011 /* Annual - November year end*/
+static const i8 BASE_YEAR = 1970;
+static const i8 ORD_OFFSET = 719163; // days until 1970-01-01
+static const i8 BDAY_OFFSET = 513689; // business days until 1970-01-01
+static const i8 WEEK_OFFSET = 102737;
+static const i8 HIGHFREQ_ORIG = 0; // was ORD_OFFSET * 86400LL (days until 1970-01-01)
+
+enum Annual
+{
+ FR_ANN = 1000, /* Annual */
+ FR_ANNDEC = FR_ANN, /* Annual - December year end*/
+ FR_ANNJAN, /* Annual - January year end*/
+ FR_ANNFEB, /* Annual - February year end*/
+ FR_ANNMAR, /* Annual - March year end*/
+ FR_ANNAPR, /* Annual - April year end*/
+ FR_ANNMAY, /* Annual - May year end*/
+ FR_ANNJUN, /* Annual - June year end*/
+ FR_ANNJUL, /* Annual - July year end*/
+ FR_ANNAUG, /* Annual - August year end*/
+ FR_ANNSEP, /* Annual - September year end*/
+ FR_ANNOCT, /* Annual - October year end*/
+ FR_ANNNOV /* Annual - November year end*/
+};
+
/* The standard quarterly frequencies with various fiscal year ends
eg, Q42005 for Q@OCT runs Aug 1, 2005 to Oct 31, 2005 */
-#define FR_QTR 2000 /* Quarterly - December year end (default quarterly) */
-#define FR_QTRDEC FR_QTR /* Quarterly - December year end */
-#define FR_QTRJAN 2001 /* Quarterly - January year end */
-#define FR_QTRFEB 2002 /* Quarterly - February year end */
-#define FR_QTRMAR 2003 /* Quarterly - March year end */
-#define FR_QTRAPR 2004 /* Quarterly - April year end */
-#define FR_QTRMAY 2005 /* Quarterly - May year end */
-#define FR_QTRJUN 2006 /* Quarterly - June year end */
-#define FR_QTRJUL 2007 /* Quarterly - July year end */
-#define FR_QTRAUG 2008 /* Quarterly - August year end */
-#define FR_QTRSEP 2009 /* Quarterly - September year end */
-#define FR_QTROCT 2010 /* Quarterly - October year end */
-#define FR_QTRNOV 2011 /* Quarterly - November year end */
-
-#define FR_MTH 3000 /* Monthly */
-
-#define FR_WK 4000 /* Weekly */
-#define FR_WKSUN FR_WK /* Weekly - Sunday end of week */
-#define FR_WKMON 4001 /* Weekly - Monday end of week */
-#define FR_WKTUE 4002 /* Weekly - Tuesday end of week */
-#define FR_WKWED 4003 /* Weekly - Wednesday end of week */
-#define FR_WKTHU 4004 /* Weekly - Thursday end of week */
-#define FR_WKFRI 4005 /* Weekly - Friday end of week */
-#define FR_WKSAT 4006 /* Weekly - Saturday end of week */
-
-#define FR_BUS 5000 /* Business days */
-#define FR_DAY 6000 /* Daily */
-#define FR_HR 7000 /* Hourly */
-#define FR_MIN 8000 /* Minutely */
-#define FR_SEC 9000 /* Secondly */
-
-#define FR_UND -10000 /* Undefined */
-
-#define INT_ERR_CODE INT32_MIN
-
-#define MEM_CHECK(item) if (item == NULL) { return PyErr_NoMemory(); }
-#define ERR_CHECK(item) if (item == NULL) { return NULL; }
-
-typedef struct asfreq_info {
- int from_week_end; // day the week ends on in the "from" frequency
- int to_week_end; // day the week ends on in the "to" frequency
-
- int from_a_year_end; // month the year ends on in the "from" frequency
- int to_a_year_end; // month the year ends on in the "to" frequency
-
- int from_q_year_end; // month the year ends on in the "from" frequency
- int to_q_year_end; // month the year ends on in the "to" frequency
+enum Quarterly
+{
+ FR_QTR = 2000, /* Quarterly - December year end (default quarterly) */
+ FR_QTRDEC = FR_QTR, /* Quarterly - December year end */
+ FR_QTRJAN, /* Quarterly - January year end */
+ FR_QTRFEB, /* Quarterly - February year end */
+ FR_QTRMAR, /* Quarterly - March year end */
+ FR_QTRAPR, /* Quarterly - April year end */
+ FR_QTRMAY, /* Quarterly - May year end */
+ FR_QTRJUN, /* Quarterly - June year end */
+ FR_QTRJUL, /* Quarterly - July year end */
+ FR_QTRAUG, /* Quarterly - August year end */
+ FR_QTRSEP, /* Quarterly - September year end */
+ FR_QTROCT, /* Quarterly - October year end */
+ FR_QTRNOV /* Quarterly - November year end */
+};
+
+/* #define FR_MTH 3000 /\* Monthly *\/ */
+
+enum Monthly
+{
+ FR_MTH = 3000
+};
+
+
+enum Weekly
+{
+ FR_WK = 4000, /* Weekly */
+ FR_WKSUN = FR_WK, /* Weekly - Sunday end of week */
+ FR_WKMON, /* Weekly - Monday end of week */
+ FR_WKTUE, /* Weekly - Tuesday end of week */
+ FR_WKWED, /* Weekly - Wednesday end of week */
+ FR_WKTHU, /* Weekly - Thursday end of week */
+ FR_WKFRI, /* Weekly - Friday end of week */
+ FR_WKSAT /* Weekly - Saturday end of week */
+};
+
+enum BusinessDaily { FR_BUS = 5000 };
+enum Daily { FR_DAY = 6000 };
+enum Hourly { FR_HR = 7000 };
+enum Minutely { FR_MIN = 8000 };
+enum Secondly { FR_SEC = 9000 };
+enum Microsecondly { FR_USEC = 10000 };
+
+enum Undefined { FR_UND = -10000 };
+
+// use LL suffixes: with a plain L, e.g. 60 * 1000000000L overflows on
+// platforms where long is 32 bits (such as 64-bit Windows)
+static const i8 US_PER_SECOND = 1000000LL;
+static const i8 US_PER_MINUTE = 60 * 1000000LL;
+static const i8 US_PER_HOUR = 60 * 60 * 1000000LL;
+static const i8 US_PER_DAY = 24 * 60 * 60 * 1000000LL;
+static const i8 US_PER_WEEK = 7 * 24 * 60 * 60 * 1000000LL;
+
+static const i8 NS_PER_SECOND = 1000000000LL;
+static const i8 NS_PER_MINUTE = 60 * 1000000000LL;
+static const i8 NS_PER_HOUR = 60 * 60 * 1000000000LL;
+static const i8 NS_PER_DAY = 24 * 60 * 60 * 1000000000LL;
+static const i8 NS_PER_WEEK = 7 * 24 * 60 * 60 * 1000000000LL;
+
+// make sure INT64_MIN is a macro!
+static const i8 INT_ERR_CODE = INT64_MIN;
+
+#define MEM_CHECK(item) \
+ { \
+ if (item == NULL) \
+ return PyErr_NoMemory(); \
+ }
+
+#define ERR_CHECK(item) \
+ { \
+ if (item == NULL) \
+ return NULL; \
+ }
+
+typedef struct asfreq_info
+{
+ i8 from_week_end; // day the week ends on in the "from" frequency
+ i8 to_week_end; // day the week ends on in the "to" frequency
+
+ i8 from_a_year_end; // month the year ends on in the "from" frequency
+ i8 to_a_year_end; // month the year ends on in the "to" frequency
+
+ i8 from_q_year_end; // month the year ends on in the "from" frequency
+ i8 to_q_year_end; // month the year ends on in the "to" frequency
} asfreq_info;
-typedef struct date_info {
- npy_int64 absdate;
- double abstime;
-
- double second;
- int minute;
- int hour;
- int day;
- int month;
- int quarter;
- int year;
- int day_of_week;
- int day_of_year;
- int calendar;
+typedef struct date_info
+{
+ i8 absdate;
+ i8 abstime;
+
+ i8 attosecond;
+ i8 femtosecond;
+ i8 picosecond;
+ i8 nanosecond;
+ i8 microsecond;
+ i8 second;
+ i8 minute;
+ i8 hour;
+ i8 day;
+ i8 month;
+ i8 quarter;
+ i8 year;
+ i8 day_of_week;
+ i8 day_of_year;
+ i8 calendar;
} date_info;
-typedef npy_int64 (*freq_conv_func)(npy_int64, char, asfreq_info*);
+typedef i8 (*freq_conv_func)(i8, const char*, asfreq_info*);
/*
* new pandas API helper functions here
*/
-npy_int64 asfreq(npy_int64 period_ordinal, int freq1, int freq2, char relation);
-
-npy_int64 get_period_ordinal(int year, int month, int day,
- int hour, int minute, int second,
- int freq);
-
-npy_int64 get_python_ordinal(npy_int64 period_ordinal, int freq);
-
-int get_date_info(npy_int64 ordinal, int freq, struct date_info *dinfo);
-freq_conv_func get_asfreq_func(int fromFreq, int toFreq);
-void get_asfreq_info(int fromFreq, int toFreq, asfreq_info *af_info);
-
-int pyear(npy_int64 ordinal, int freq);
-int pqyear(npy_int64 ordinal, int freq);
-int pquarter(npy_int64 ordinal, int freq);
-int pmonth(npy_int64 ordinal, int freq);
-int pday(npy_int64 ordinal, int freq);
-int pweekday(npy_int64 ordinal, int freq);
-int pday_of_week(npy_int64 ordinal, int freq);
-int pday_of_year(npy_int64 ordinal, int freq);
-int pweek(npy_int64 ordinal, int freq);
-int phour(npy_int64 ordinal, int freq);
-int pminute(npy_int64 ordinal, int freq);
-int psecond(npy_int64 ordinal, int freq);
+i8 asfreq(i8 period_ordinal, i8 freq1, i8 freq2, const char* relation);
+
+i8 get_period_ordinal(i8 year, i8 month, i8 day, i8 hour, i8 minute, i8 second,
+ i8 microsecond, i8 freq);
+
+i8 get_python_ordinal(i8 period_ordinal, i8 freq);
+
+i8 get_date_info(i8 ordinal, i8 freq, struct date_info *dinfo);
+freq_conv_func get_asfreq_func(i8 fromFreq, i8 toFreq);
+void get_asfreq_info(i8 fromFreq, i8 toFreq, asfreq_info *af_info);
+
+i8 pyear(i8 ordinal, i8 freq);
+i8 pqyear(i8 ordinal, i8 freq);
+i8 pquarter(i8 ordinal, i8 freq);
+i8 pmonth(i8 ordinal, i8 freq);
+i8 pday(i8 ordinal, i8 freq);
+i8 pweekday(i8 ordinal, i8 freq);
+i8 pday_of_week(i8 ordinal, i8 freq);
+i8 pday_of_year(i8 ordinal, i8 freq);
+i8 pweek(i8 ordinal, i8 freq);
+i8 phour(i8 ordinal, i8 freq);
+i8 pminute(i8 ordinal, i8 freq);
+i8 psecond(i8 ordinal, i8 freq);
+i8 pmicrosecond(i8 ordinal, i8 freq);
+i8 pnanosecond(i8 ordinal, i8 freq);
+i8 ppicosecond(i8 ordinal, i8 freq);
+i8 pfemtosecond(i8 ordinal, i8 freq);
+i8 pattosecond(i8 ordinal, i8 freq);
-double getAbsTime(int freq, npy_int64 dailyDate, npy_int64 originalDate);
char *c_strftime(struct date_info *dinfo, char *fmt);
-int get_yq(npy_int64 ordinal, int freq, int *quarter, int *year);
+i8 get_yq(i8 ordinal, i8 freq, i8 *quarter, i8 *year);
#endif
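The header above lays frequency codes out in blocks of 1000 per family (annual at 1000, quarterly at 2000, ..., secondly at 9000, and the new microsecondly at 10000), which is what lets `get_freq_group` recover the family by integer division. A minimal Python sketch of that mapping (the constant names mirror the C enums; this is an illustration, not the pandas API itself):

```python
# Frequency codes grouped in blocks of 1000 per family, as in period.h.
FR_ANN, FR_QTR, FR_MTH, FR_WK = 1000, 2000, 3000, 4000
FR_BUS, FR_DAY, FR_HR = 5000, 6000, 7000
FR_MIN, FR_SEC, FR_USEC = 8000, 9000, 10000

def get_freq_group(freq):
    # variants like FR_QTRNOV (2011) collapse to their family base (2000)
    return (freq // 1000) * 1000

print(get_freq_group(2011))   # quarterly, November year end -> 2000
print(get_freq_group(FR_USEC))  # microsecondly is its own group -> 10000
```

The same `(freq // 1000) * 1000` expression appears in the Python-side `get_freq_group` in `pandas/tseries/frequencies.py` below.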
diff --git a/pandas/tseries/converter.py b/pandas/tseries/converter.py
index bd68a1fcef08d..1494437d66b8e 100644
--- a/pandas/tseries/converter.py
+++ b/pandas/tseries/converter.py
@@ -28,6 +28,7 @@ def register():
units.registry[pydt.date] = DatetimeConverter()
units.registry[pydt.time] = TimeConverter()
+
def _to_ordinalf(tm):
tot_sec = (tm.hour * 3600 + tm.minute * 60 + tm.second +
float(tm.microsecond / 1e6))
@@ -53,10 +54,13 @@ def convert(value, unit, axis):
if (isinstance(value, valid_types) or com.is_integer(value) or
com.is_float(value)):
return time2num(value)
+
if isinstance(value, Index):
return value.map(time2num)
+
if isinstance(value, (list, tuple, np.ndarray)):
return [time2num(x) for x in value]
+
return value
@staticmethod
@@ -204,12 +208,12 @@ def __init__(self, locator, tz=None, defaultfmt='%Y-%m-%d'):
if self._tz is dates.UTC:
self._tz._utcoffset = self._tz.utcoffset(None)
self.scaled = {
- 365.0: '%Y',
- 30.: '%b %Y',
- 1.0: '%b %d %Y',
- 1. / 24.: '%H:%M:%S',
- 1. / 24. / 3600. / 1000.: '%H:%M:%S.%f'
- }
+ 365.0: '%Y',
+ 30.: '%b %Y',
+ 1.0: '%b %d %Y',
+ 1. / 24.: '%H:%M:%S',
+ 1. / 24. / 3600. / 1000.: '%H:%M:%S.%f'
+ }
def _get_fmt(self, x):
@@ -282,19 +286,19 @@ def __call__(self):
if dmin > dmax:
dmax, dmin = dmin, dmax
- delta = relativedelta(dmax, dmin)
+ # delta = relativedelta(dmax, dmin)
# We need to cap at the endpoints of valid datetime
- try:
- start = dmin - delta
- except ValueError:
- start = _from_ordinal(1.0)
+ # try:
+ # start = dmin - delta
+ # except ValueError:
+ # start = _from_ordinal(1.0)
- try:
- stop = dmax + delta
- except ValueError:
- # The magic number!
- stop = _from_ordinal(3652059.9999999)
+ # try:
+ # stop = dmax + delta
+ # except ValueError:
+ # # The magic number!
+ # stop = _from_ordinal(3652059.9999999)
nmax, nmin = dates.date2num((dmax, dmin))
@@ -314,7 +318,7 @@ def __call__(self):
raise RuntimeError(('MillisecondLocator estimated to generate %d '
'ticks from %s to %s: exceeds Locator.MAXTICKS'
'* 2 (%d) ') %
- (estimate, dmin, dmax, self.MAXTICKS * 2))
+ (estimate, dmin, dmax, self.MAXTICKS * 2))
freq = '%dL' % self._get_interval()
tz = self.tz.tzname(None)
@@ -326,7 +330,7 @@ def __call__(self):
if len(all_dates) > 0:
locs = self.raise_if_exceeds(dates.date2num(all_dates))
return locs
- except Exception, e: #pragma: no cover
+ except Exception: # pragma: no cover
pass
lims = dates.date2num([dmin, dmax])
@@ -343,19 +347,19 @@ def autoscale(self):
if dmin > dmax:
dmax, dmin = dmin, dmax
- delta = relativedelta(dmax, dmin)
+ # delta = relativedelta(dmax, dmin)
# We need to cap at the endpoints of valid datetime
- try:
- start = dmin - delta
- except ValueError:
- start = _from_ordinal(1.0)
+ # try:
+ # start = dmin - delta
+ # except ValueError:
+ # start = _from_ordinal(1.0)
- try:
- stop = dmax + delta
- except ValueError:
- # The magic number!
- stop = _from_ordinal(3652059.9999999)
+ # try:
+ # stop = dmax + delta
+ # except ValueError:
+ # # The magic number!
+ # stop = _from_ordinal(3652059.9999999)
dmin, dmax = self.datalim_to_dt()
@@ -450,7 +454,9 @@ def _daily_finder(vmin, vmax, freq):
periodsperday = -1
if freq >= FreqGroup.FR_HR:
- if freq == FreqGroup.FR_SEC:
+ if freq == FreqGroup.FR_USEC:
+ periodsperday = 24 * 60 * 60 * 1000000
+ elif freq == FreqGroup.FR_SEC:
periodsperday = 24 * 60 * 60
elif freq == FreqGroup.FR_MIN:
periodsperday = 24 * 60
@@ -493,11 +499,8 @@ def _daily_finder(vmin, vmax, freq):
info_fmt = info['fmt']
def first_label(label_flags):
- if (label_flags[0] == 0) and (label_flags.size > 1) and \
- ((vmin_orig % 1) > 0.0):
- return label_flags[1]
- else:
- return label_flags[0]
+ conds = label_flags[0] == 0, label_flags.size > 1, vmin_orig % 1 > 0.0
+ return label_flags[int(all(conds))]
# Case 1. Less than a month
if span <= periodspermonth:
@@ -543,22 +546,38 @@ def _second_finder(label_interval):
info_fmt[day_start] = '%H:%M:%S\n%d-%b'
info_fmt[year_start] = '%H:%M:%S\n%d-%b\n%Y'
- if span < periodsperday / 12000.0: _second_finder(1)
- elif span < periodsperday / 6000.0: _second_finder(2)
- elif span < periodsperday / 2400.0: _second_finder(5)
- elif span < periodsperday / 1200.0: _second_finder(10)
- elif span < periodsperday / 800.0: _second_finder(15)
- elif span < periodsperday / 400.0: _second_finder(30)
- elif span < periodsperday / 150.0: _minute_finder(1)
- elif span < periodsperday / 70.0: _minute_finder(2)
- elif span < periodsperday / 24.0: _minute_finder(5)
- elif span < periodsperday / 12.0: _minute_finder(15)
- elif span < periodsperday / 6.0: _minute_finder(30)
- elif span < periodsperday / 2.5: _hour_finder(1, False)
- elif span < periodsperday / 1.5: _hour_finder(2, False)
- elif span < periodsperday * 1.25: _hour_finder(3, False)
- elif span < periodsperday * 2.5: _hour_finder(6, True)
- elif span < periodsperday * 4: _hour_finder(12, True)
+ if span < periodsperday / 12000.0:
+ _second_finder(1)
+ elif span < periodsperday / 6000.0:
+ _second_finder(2)
+ elif span < periodsperday / 2400.0:
+ _second_finder(5)
+ elif span < periodsperday / 1200.0:
+ _second_finder(10)
+ elif span < periodsperday / 800.0:
+ _second_finder(15)
+ elif span < periodsperday / 400.0:
+ _second_finder(30)
+ elif span < periodsperday / 150.0:
+ _minute_finder(1)
+ elif span < periodsperday / 70.0:
+ _minute_finder(2)
+ elif span < periodsperday / 24.0:
+ _minute_finder(5)
+ elif span < periodsperday / 12.0:
+ _minute_finder(15)
+ elif span < periodsperday / 6.0:
+ _minute_finder(30)
+ elif span < periodsperday / 2.5:
+ _hour_finder(1, False)
+ elif span < periodsperday / 1.5:
+ _hour_finder(2, False)
+ elif span < periodsperday * 1.25:
+ _hour_finder(3, False)
+ elif span < periodsperday * 2.5:
+ _hour_finder(6, True)
+ elif span < periodsperday * 4:
+ _hour_finder(12, True)
else:
info_maj[month_start] = True
info_min[day_start] = True
@@ -567,6 +586,7 @@ def _second_finder(label_interval):
info_fmt[day_start] = '%d'
info_fmt[month_start] = '%d\n%b'
info_fmt[year_start] = '%d\n%b\n%Y'
+
if not has_level_label(year_start, vmin_orig):
if not has_level_label(month_start, vmin_orig):
info_fmt[first_label(day_start)] = '%d\n%b\n%Y'
@@ -881,6 +901,7 @@ def autoscale(self):
vmax += 1
return nonsingular(vmin, vmax)
+
#####-------------------------------------------------------------------------
#---- --- Formatter ---
#####-------------------------------------------------------------------------
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index 299e93ddfe74d..16a84fbfbe8cd 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -22,6 +22,8 @@ class FreqGroup(object):
FR_HR = 7000
FR_MIN = 8000
FR_SEC = 9000
+ FR_USEC = 10000
+
class Resolution(object):
@@ -33,20 +35,24 @@ class Resolution(object):
@classmethod
def get_str(cls, reso):
- return {RESO_US : 'microsecond',
- RESO_SEC : 'second',
- RESO_MIN : 'minute',
- RESO_HR : 'hour',
- RESO_DAY : 'day'}.get(reso, 'day')
+ return {cls.RESO_US: 'microsecond',
+ cls.RESO_SEC: 'second',
+ cls.RESO_MIN: 'minute',
+ cls.RESO_HR: 'hour',
+ cls.RESO_DAY: 'day'}.get(reso, 'day')
+
def get_reso_string(reso):
return Resolution.get_str(reso)
+
def get_to_timestamp_base(base):
if base <= FreqGroup.FR_WK:
return FreqGroup.FR_DAY
+
if FreqGroup.FR_HR <= base <= FreqGroup.FR_SEC:
return FreqGroup.FR_SEC
+
return base
@@ -54,6 +60,7 @@ def get_freq_group(freq):
if isinstance(freq, basestring):
base, mult = get_freq_code(freq)
freq = base
+
return (freq // 1000) * 1000
@@ -61,24 +68,25 @@ def get_freq(freq):
if isinstance(freq, basestring):
base, mult = get_freq_code(freq)
freq = base
+
return freq
def get_freq_code(freqstr):
"""
-
Parameters
----------
+ freqstr : str
Returns
-------
+ code, stride
"""
if isinstance(freqstr, DateOffset):
freqstr = (get_offset_name(freqstr), freqstr.n)
if isinstance(freqstr, tuple):
- if (com.is_integer(freqstr[0]) and
- com.is_integer(freqstr[1])):
+ if (com.is_integer(freqstr[0]) and com.is_integer(freqstr[1])):
#e.g., freqstr = (2000, 1)
return freqstr
else:
@@ -282,7 +290,8 @@ def _get_freq_str(base, mult=1):
'Q': 'Q',
'A': 'A',
'W': 'W',
- 'M': 'M'
+ 'M': 'M',
+ 'U': 'U',
}
need_suffix = ['QS', 'BQ', 'BQS', 'AS', 'BA', 'BAS']
@@ -595,6 +604,7 @@ def get_standard_freq(freq):
"H": 7000, # Hourly
"T": 8000, # Minutely
"S": 9000, # Secondly
+ "U": 10000, # Microsecondly
}
_reverse_period_code_map = {}
@@ -622,6 +632,7 @@ def _period_alias_dictionary():
H_aliases = ["H", "HR", "HOUR", "HRLY", "HOURLY"]
T_aliases = ["T", "MIN", "MINUTE", "MINUTELY"]
S_aliases = ["S", "SEC", "SECOND", "SECONDLY"]
+ U_aliases = ["U", "USEC", "MICROSECOND", "MICROSECONDLY"]
for k in M_aliases:
alias_dict[k] = 'M'
@@ -641,6 +652,9 @@ def _period_alias_dictionary():
for k in S_aliases:
alias_dict[k] = 'S'
+ for k in U_aliases:
+ alias_dict[k] = 'U'
+
A_prefixes = ["A", "Y", "ANN", "ANNUAL", "ANNUALLY", "YR", "YEAR",
"YEARLY"]
@@ -708,6 +722,7 @@ def _period_alias_dictionary():
"hour": "H",
"minute": "T",
"second": "S",
+ "microsecond": "U",
}
@@ -999,23 +1014,25 @@ def is_subperiod(source, target):
if _is_quarterly(source):
return _quarter_months_conform(_get_rule_month(source),
_get_rule_month(target))
- return source in ['D', 'B', 'M', 'H', 'T', 'S']
+ return source in ['D', 'B', 'M', 'H', 'T', 'S', 'U']
elif _is_quarterly(target):
- return source in ['D', 'B', 'M', 'H', 'T', 'S']
+ return source in ['D', 'B', 'M', 'H', 'T', 'S', 'U']
elif target == 'M':
- return source in ['D', 'B', 'H', 'T', 'S']
+ return source in ['D', 'B', 'H', 'T', 'S', 'U']
elif _is_weekly(target):
- return source in [target, 'D', 'B', 'H', 'T', 'S']
+ return source in [target, 'D', 'B', 'H', 'T', 'S', 'U']
elif target == 'B':
- return source in ['B', 'H', 'T', 'S']
+ return source in ['B', 'H', 'T', 'S', 'U']
elif target == 'D':
- return source in ['D', 'H', 'T', 'S']
+ return source in ['D', 'H', 'T', 'S', 'U']
elif target == 'H':
- return source in ['H', 'T', 'S']
+ return source in ['H', 'T', 'S', 'U']
elif target == 'T':
- return source in ['T', 'S']
+ return source in ['T', 'S', 'U']
elif target == 'S':
- return source in ['S']
+ return source in ['S', 'U']
+ elif target == 'U':
+ return source == target
def is_superperiod(source, target):
@@ -1050,23 +1067,25 @@ def is_superperiod(source, target):
smonth = _get_rule_month(source)
tmonth = _get_rule_month(target)
return _quarter_months_conform(smonth, tmonth)
- return target in ['D', 'B', 'M', 'H', 'T', 'S']
+ return target in ['D', 'B', 'M', 'H', 'T', 'S', 'U']
elif _is_quarterly(source):
- return target in ['D', 'B', 'M', 'H', 'T', 'S']
+ return target in ['D', 'B', 'M', 'H', 'T', 'S', 'U']
elif source == 'M':
- return target in ['D', 'B', 'H', 'T', 'S']
+ return target in ['D', 'B', 'H', 'T', 'S', 'U']
elif _is_weekly(source):
- return target in [source, 'D', 'B', 'H', 'T', 'S']
+ return target in [source, 'D', 'B', 'H', 'T', 'S', 'U']
elif source == 'B':
- return target in ['D', 'B', 'H', 'T', 'S']
+ return target in ['D', 'B', 'H', 'T', 'S', 'U']
elif source == 'D':
- return target in ['D', 'B', 'H', 'T', 'S']
+ return target in ['D', 'B', 'H', 'T', 'S', 'U']
elif source == 'H':
- return target in ['H', 'T', 'S']
+ return target in ['H', 'T', 'S', 'U']
elif source == 'T':
- return target in ['T', 'S']
+ return target in ['T', 'S', 'U']
elif source == 'S':
- return target in ['S']
+ return target in ['S', 'U']
+ elif source == 'U':
+ return target == source
def _get_rule_month(source, default='DEC'):
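The `is_subperiod`/`is_superperiod` changes above thread the new `'U'` (microsecond) code through the frequency hierarchy as the finest level. A hedged sketch of the ordering for just the plain time frequencies (simplified helper names, ignoring the business/weekly/monthly/quarterly branches the real functions also handle):

```python
# Coarse -> fine ordering of the plain time frequencies; 'U' is finest.
_ORDER = ['D', 'H', 'T', 'S', 'U']

def simple_is_subperiod(source, target):
    # a frequency is a subperiod of any coarser-or-equal frequency
    return _ORDER.index(source) >= _ORDER.index(target)

def simple_is_superperiod(source, target):
    # the converse relation
    return _ORDER.index(source) <= _ORDER.index(target)
```

Under this ordering `'U'` is a subperiod of every listed frequency and a superperiod only of itself, matching the `elif target == 'U': return source == target` branch in the diff.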
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index 07dadc66cbc33..b1042d4656293 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -1303,7 +1303,8 @@ def equals(self, other):
if self.tz is not None:
if other.tz is None:
return False
- same_zone = tslib.get_timezone(self.tz) == tslib.get_timezone(other.tz)
+ same_zone = (tslib.get_timezone(self.tz) ==
+ tslib.get_timezone(other.tz))
else:
if other.tz is not None:
return False
@@ -1633,6 +1634,7 @@ def _time_to_micros(time):
seconds = time.hour * 60 * 60 + 60 * time.minute + time.second
return 1000000 * seconds + time.microsecond
+
def _process_concat_data(to_concat, name):
klass = Index
kwargs = {}
@@ -1672,7 +1674,7 @@ def _process_concat_data(to_concat, name):
to_concat = [x.values for x in to_concat]
klass = DatetimeIndex
- kwargs = {'tz' : tz}
+ kwargs = {'tz': tz}
concat = com._concat_compat
else:
for i, x in enumerate(to_concat):
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index c340ac261592f..f2cf1e0d899ae 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -586,7 +586,8 @@ def apply(self, other):
else:
months = self.n + 1
- return self.getOffsetOfMonth(other + relativedelta(months=months, day=1))
+ return self.getOffsetOfMonth(other + relativedelta(months=months,
+ day=1))
def getOffsetOfMonth(self, dt):
w = Week(weekday=self.weekday)
@@ -1073,6 +1074,7 @@ def rule_code(self):
def isAnchored(self):
return False
+
def _delta_to_tick(delta):
if delta.microseconds == 0:
if delta.seconds == 0:
@@ -1107,25 +1109,31 @@ class Day(Tick, CacheableOffset):
_inc = timedelta(1)
_rule_base = 'D'
+
class Hour(Tick):
_inc = timedelta(0, 3600)
_rule_base = 'H'
+
class Minute(Tick):
_inc = timedelta(0, 60)
_rule_base = 'T'
+
class Second(Tick):
_inc = timedelta(0, 1)
_rule_base = 'S'
+
class Milli(Tick):
_rule_base = 'L'
+
class Micro(Tick):
_inc = timedelta(microseconds=1)
_rule_base = 'U'
+
class Nano(Tick):
_inc = 1
_rule_base = 'N'
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index 5be006ea6e200..efd5eb5719404 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -1,12 +1,10 @@
# pylint: disable=E1101,E1103,W0232
-import operator
from datetime import datetime, date
import numpy as np
-import pandas.tseries.offsets as offsets
-from pandas.tseries.frequencies import (get_freq_code as _gfc,
- _month_numbers, FreqGroup)
+from pandas.tseries.frequencies import (get_freq_code as _gfc, _month_numbers,
+ FreqGroup)
from pandas.tseries.index import DatetimeIndex, Int64Index, Index
from pandas.tseries.tools import parse_time_string
import pandas.tseries.frequencies as _freq_mod
@@ -26,7 +24,9 @@ def _period_field_accessor(name, alias):
def f(self):
base, mult = _gfc(self.freq)
return tslib.get_period_field(alias, self.ordinal, base)
+
f.__name__ = name
+
return property(f)
@@ -34,17 +34,19 @@ def _field_accessor(name, alias):
def f(self):
base, mult = _gfc(self.freq)
return tslib.get_period_field_arr(alias, self.values, base)
+
f.__name__ = name
+
return property(f)
class Period(object):
- __slots__ = ['freq', 'ordinal']
+ __slots__ = 'freq', 'ordinal'
def __init__(self, value=None, freq=None, ordinal=None,
year=None, month=1, quarter=None, day=1,
- hour=0, minute=0, second=0):
+ hour=0, minute=0, second=0, microsecond=0):
"""
Represents a period of time
@@ -61,6 +63,7 @@ def __init__(self, value=None, freq=None, ordinal=None,
hour : int, default 0
minute : int, default 0
second : int, default 0
+ microsecond : int, default 0
"""
# freq points to a tuple (base, mult); base is one of the defined
# periods such as A, Q, etc. Every five minutes would be, e.g.,
@@ -72,8 +75,8 @@ def __init__(self, value=None, freq=None, ordinal=None,
self.ordinal = None
if ordinal is not None and value is not None:
- raise ValueError(("Only value or ordinal but not both should be "
- "given but not both"))
+            raise ValueError("Only value or ordinal should be given, "
+                             "but not both")
elif ordinal is not None:
if not com.is_integer(ordinal):
raise ValueError("Ordinal must be an integer")
@@ -86,7 +89,8 @@ def __init__(self, value=None, freq=None, ordinal=None,
raise ValueError("If value is None, freq cannot be None")
self.ordinal = _ordinal_from_fields(year, month, quarter, day,
- hour, minute, second, freq)
+ hour, minute, second,
+ microsecond, freq)
elif isinstance(value, Period):
other = value
@@ -112,8 +116,8 @@ def __init__(self, value=None, freq=None, ordinal=None,
if freq is None:
raise ValueError('Must supply freq for datetime value')
else:
- msg = "Value must be Period, string, integer, or datetime"
- raise ValueError(msg)
+ raise ValueError("Value must be Period, string, integer, or "
+ "datetime")
base, mult = _gfc(freq)
if mult != 1:
@@ -122,7 +126,7 @@ def __init__(self, value=None, freq=None, ordinal=None,
if self.ordinal is None:
self.ordinal = tslib.period_ordinal(dt.year, dt.month, dt.day,
dt.hour, dt.minute, dt.second,
- base)
+ dt.microsecond, base)
self.freq = _freq_mod._get_freq_str(base)
@@ -168,14 +172,15 @@ def asfreq(self, freq, how='E'):
resampled : Period
"""
how = _validate_end_alias(how)
- base1, mult1 = _gfc(self.freq)
+ base1, _ = _gfc(self.freq)
base2, mult2 = _gfc(freq)
if mult2 != 1:
raise ValueError('Only mult == 1 supported')
- end = how == 'E'
- new_ordinal = tslib.period_asfreq(self.ordinal, base1, base2, end)
+ hows = {'S': 'S', 'E': 'E', 'START': 'S', 'END': 'E'}
+ new_ordinal = tslib.period_asfreq(self.ordinal, base1, base2,
+ hows[how.upper()])
return Period(ordinal=new_ordinal, freq=base2)
@@ -209,10 +214,10 @@ def to_timestamp(self, freq=None, how='start'):
how = _validate_end_alias(how)
if freq is None:
- base, mult = _gfc(self.freq)
+ base, _ = _gfc(self.freq)
freq = _freq_mod.get_to_timestamp_base(base)
- base, mult = _gfc(freq)
+ base, _ = _gfc(freq)
val = self.asfreq(freq, how)
dt64 = tslib.period_ordinal_to_dt64(val.ordinal, base)
@@ -224,6 +229,7 @@ def to_timestamp(self, freq=None, how='start'):
hour = _period_field_accessor('hour', 5)
minute = _period_field_accessor('minute', 6)
second = _period_field_accessor('second', 7)
+ microsecond = _period_field_accessor('microsecond', 11)
weekofyear = _period_field_accessor('week', 8)
week = weekofyear
dayofweek = _period_field_accessor('dayofweek', 10)
@@ -305,6 +311,26 @@ def strftime(self, fmt):
+-----------+--------------------------------+-------+
| ``%S`` | Second as a decimal number | \(4) |
| | [00,61]. | |
+        +-----------+--------------------------------+-------+
+        | ``%u``    | Microsecond as a decimal number|       |
+        |           | [000000,999999].               |       |
+-----------+--------------------------------+-------+
| ``%U`` | Week number of the year | \(5) |
| | (Sunday as the first day of | |
@@ -407,6 +433,8 @@ def _get_date_and_freq(value, freq):
freq = 'T'
elif reso == 'second':
freq = 'S'
+ elif reso == 'microsecond':
+ freq = 'U'
else:
raise ValueError("Invalid frequency or could not infer: %s" % reso)
@@ -428,6 +456,7 @@ def dt64arr_to_periodarr(data, freq, tz):
base, mult = _gfc(freq)
return tslib.dt64arr_to_periodarr(data.view('i8'), base, tz)
+
# --- Period index sketch
def _period_index_cmp(opname):
"""
@@ -494,6 +523,7 @@ class PeriodIndex(Int64Index):
hour : int or array, default None
minute : int or array, default None
second : int or array, default None
+ microsecond : int or array, default None
tz : object, default None
Timezone for converting datetime64 data to Periods
@@ -516,7 +546,7 @@ def __new__(cls, data=None, ordinal=None,
freq=None, start=None, end=None, periods=None,
copy=False, name=None,
year=None, month=None, quarter=None, day=None,
- hour=None, minute=None, second=None,
+ hour=None, minute=None, second=None, microsecond=None,
tz=None):
freq = _freq_mod.get_standard_freq(freq)
@@ -532,7 +562,8 @@ def __new__(cls, data=None, ordinal=None,
if ordinal is not None:
data = np.asarray(ordinal, dtype=np.int64)
else:
- fields = [year, month, quarter, day, hour, minute, second]
+ fields = [year, month, quarter, day, hour, minute, second,
+ microsecond]
data, freq = cls._generate_range(start, end, periods,
freq, fields)
else:
@@ -554,10 +585,11 @@ def _generate_range(cls, start, end, periods, freq, fields):
'or endpoints, but not both')
subarr, freq = _get_ordinal_range(start, end, periods, freq)
elif field_count > 0:
- y, mth, q, d, h, minute, s = fields
+ y, mth, q, d, h, minute, s, us = fields
subarr, freq = _range_from_fields(year=y, month=mth, quarter=q,
day=d, hour=h, minute=minute,
- second=s, freq=freq)
+ second=s, microsecond=us,
+ freq=freq)
else:
raise ValueError('Not enough parameters to construct '
'Period range')
@@ -602,7 +634,7 @@ def _from_arraylike(cls, data, freq, tz):
base1, _ = _gfc(data.freq)
base2, _ = _gfc(freq)
data = tslib.period_asfreq_arr(data.values, base1,
- base2, 1)
+ base2, 'E')
else:
if freq is None and len(data) > 0:
freq = getattr(data[0], 'freq', None)
@@ -641,9 +673,14 @@ def _box_values(self, values):
def asof_locs(self, where, mask):
"""
+ Parameters
+ ----------
where : array of timestamps
mask : array of booleans where data is not NA
+
+ Returns
+ -------
+ result : array_like
"""
where_idx = where
if isinstance(where_idx, DatetimeIndex):
@@ -716,14 +753,15 @@ def asfreq(self, freq=None, how='E'):
freq = _freq_mod.get_standard_freq(freq)
- base1, mult1 = _gfc(self.freq)
+ base1, _ = _gfc(self.freq)
base2, mult2 = _gfc(freq)
if mult2 != 1:
raise ValueError('Only mult == 1 supported')
- end = how == 'E'
- new_data = tslib.period_asfreq_arr(self.values, base1, base2, end)
+ hows = {'S': 'S', 'E': 'E', 'START': 'S', 'END': 'E'}
+ new_data = tslib.period_asfreq_arr(self.values, base1, base2,
+ hows[how.upper()])
result = new_data.view(PeriodIndex)
result.name = self.name
@@ -739,6 +777,7 @@ def to_datetime(self, dayfirst=False):
hour = _field_accessor('hour', 5)
minute = _field_accessor('minute', 6)
second = _field_accessor('second', 7)
+ microsecond = _field_accessor('microsecond', 11)
weekofyear = _field_accessor('week', 8)
week = weekofyear
dayofweek = _field_accessor('dayofweek', 10)
@@ -784,10 +823,10 @@ def to_timestamp(self, freq=None, how='start'):
how = _validate_end_alias(how)
if freq is None:
- base, mult = _gfc(self.freq)
+ base, _ = _gfc(self.freq)
freq = _freq_mod.get_to_timestamp_base(base)
- base, mult = _gfc(freq)
+ base, _ = _gfc(freq)
new_data = self.asfreq(freq, how)
new_data = tslib.periodarr_to_dt64arr(new_data.values, base)
@@ -833,7 +872,7 @@ def get_value(self, series, key):
return super(PeriodIndex, self).get_value(series, key)
except (KeyError, IndexError):
try:
- asdt, parsed, reso = parse_time_string(key, self.freq)
+ asdt, _, reso = parse_time_string(key, self.freq)
grp = _freq_mod._infer_period_group(reso)
freqn = _freq_mod._period_group(self.freq)
@@ -874,7 +913,7 @@ def get_loc(self, key):
return self._engine.get_loc(key)
except KeyError:
try:
- asdt, parsed, reso = parse_time_string(key, self.freq)
+ asdt, _, _ = parse_time_string(key, self.freq)
key = asdt
except TypeError:
pass
@@ -1106,16 +1145,23 @@ def _get_ordinal_range(start, end, periods, freq):
return data, freq
-def _range_from_fields(year=None, month=None, quarter=None, day=None,
- hour=None, minute=None, second=None, freq=None):
+def _range_from_fields(year=None, month=None, quarter=None, day=None, hour=None,
+ minute=None, second=None, microsecond=None, freq=None):
+
+ if day is None:
+ day = 1
+
if hour is None:
hour = 0
+
if minute is None:
minute = 0
+
if second is None:
second = 0
- if day is None:
- day = 1
+
+ if microsecond is None:
+ microsecond = 0
ordinals = []
@@ -1132,17 +1178,18 @@ def _range_from_fields(year=None, month=None, quarter=None, day=None,
year, quarter = _make_field_arrays(year, quarter)
for y, q in zip(year, quarter):
y, m = _quarter_to_myear(y, q, freq)
- val = tslib.period_ordinal(y, m, 1, 1, 1, 1, base)
+ val = tslib.period_ordinal(y, m, 1, 1, 1, 1, 1, base)
ordinals.append(val)
else:
base, mult = _gfc(freq)
if mult != 1:
raise ValueError('Only mult == 1 supported')
- arrays = _make_field_arrays(year, month, day, hour, minute, second)
- for y, mth, d, h, mn, s in zip(*arrays):
- ordinals.append(tslib.period_ordinal(y, mth, d, h, mn, s, base))
-
+ arrays = _make_field_arrays(year, month, day, hour, minute, second,
+ microsecond)
+ for y, mth, d, h, mn, s, us in zip(*arrays):
+ ordinals.append(tslib.period_ordinal(y, mth, d, h, mn, s, us,
+ base))
return np.array(ordinals, dtype=np.int64), freq
@@ -1162,7 +1209,7 @@ def _make_field_arrays(*fields):
def _ordinal_from_fields(year, month, quarter, day, hour, minute,
- second, freq):
+ second, microsecond, freq):
base, mult = _gfc(freq)
if mult != 1:
raise ValueError('Only mult == 1 supported')
@@ -1170,7 +1217,8 @@ def _ordinal_from_fields(year, month, quarter, day, hour, minute,
if quarter is not None:
year, month = _quarter_to_myear(year, quarter, freq)
- return tslib.period_ordinal(year, month, day, hour, minute, second, base)
+ return tslib.period_ordinal(year, month, day, hour, minute, second,
+ microsecond, base)
def _quarter_to_myear(year, quarter, freq):
diff --git a/pandas/tseries/plotting.py b/pandas/tseries/plotting.py
index 5fe01161c996c..54f8825200c6f 100644
--- a/pandas/tseries/plotting.py
+++ b/pandas/tseries/plotting.py
@@ -22,9 +22,9 @@
from pandas.tseries.converter import (PeriodConverter, TimeSeries_DateLocator,
TimeSeries_DateFormatter)
+
#----------------------------------------------------------------------
# Plotting functions and monkey patches
-
def tsplot(series, plotf, **kwargs):
"""
Plots a Series on the given Matplotlib axes or the current axes
@@ -74,7 +74,7 @@ def tsplot(series, plotf, **kwargs):
args.append(style)
lines = plotf(ax, *args, **kwargs)
- label = kwargs.get('label', None)
+ # label = kwargs.get('label', None)
# set date formatter, locators and rescale limits
format_dateaxis(ax, ax.freq)
@@ -99,7 +99,7 @@ def _maybe_resample(series, ax, freq, plotf, kwargs):
elif frequencies.is_subperiod(freq, ax_freq) or _is_sub(freq, ax_freq):
_upsample_others(ax, freq, plotf, kwargs)
ax_freq = freq
- else: #pragma: no cover
+ else: # pragma: no cover
raise ValueError('Incompatible frequency conversion')
return freq, ax_freq, series
@@ -146,6 +146,7 @@ def _upsample_others(ax, freq, plotf, kwargs):
title = None
ax.legend(lines, labels, loc='best', title=title)
+
def _replot_ax(ax, freq, plotf, kwargs):
data = getattr(ax, '_plot_data', None)
ax._plot_data = []
diff --git a/pandas/tseries/tests/test_frequencies.py b/pandas/tseries/tests/test_frequencies.py
index 3a96d59d3dd1c..fd078dcdb7da9 100644
--- a/pandas/tseries/tests/test_frequencies.py
+++ b/pandas/tseries/tests/test_frequencies.py
@@ -1,7 +1,6 @@
-from datetime import datetime, time, timedelta
-import sys
-import os
+from datetime import datetime, timedelta
import unittest
+from unittest import TestCase
import nose
@@ -9,12 +8,12 @@
from pandas import Index, DatetimeIndex, date_range, period_range
-from pandas.tseries.frequencies import to_offset, infer_freq
+from pandas.tseries.frequencies import (to_offset, infer_freq, Resolution,
+ get_reso_string)
from pandas.tseries.tools import to_datetime
import pandas.tseries.frequencies as fmod
import pandas.tseries.offsets as offsets
-import pandas.lib as lib
def test_to_offset_multiple():
freqstr = '2h30min'
@@ -53,12 +52,13 @@ def test_to_offset_multiple():
else:
assert(False)
+
def test_to_offset_negative():
freqstr = '-1S'
result = to_offset(freqstr)
assert(result.n == -1)
- freqstr='-5min10s'
+ freqstr = '-5min10s'
result = to_offset(freqstr)
assert(result.n == -310)
@@ -75,6 +75,7 @@ def test_anchored_shortcuts():
_dti = DatetimeIndex
+
class TestFrequencyInference(unittest.TestCase):
def test_raise_if_too_few(self):
@@ -197,7 +198,6 @@ def _check_generated_range(self, start, freq):
(inf_freq == 'Q-OCT' and
gen.freqstr in ('Q-OCT', 'Q-JUL', 'Q-APR', 'Q-JAN')))
-
gen = date_range(start, periods=5, freq=freq)
index = _dti(gen.values)
if not freq.startswith('Q-'):
@@ -243,6 +243,7 @@ def test_non_datetimeindex(self):
MONTHS = ['JAN', 'FEB', 'MAR', 'APR', 'MAY', 'JUN', 'JUL', 'AUG', 'SEP',
'OCT', 'NOV', 'DEC']
+
def test_is_superperiod_subperiod():
assert(fmod.is_superperiod(offsets.YearEnd(), offsets.MonthEnd()))
assert(fmod.is_subperiod(offsets.MonthEnd(), offsets.YearEnd()))
@@ -250,6 +251,29 @@ def test_is_superperiod_subperiod():
assert(fmod.is_superperiod(offsets.Hour(), offsets.Minute()))
assert(fmod.is_subperiod(offsets.Minute(), offsets.Hour()))
+
+class TestResolution(TestCase):
+ def test_resolution(self):
+ resos = 'microsecond', 'second', 'minute', 'hour', 'day'
+
+ for i, r in enumerate(resos):
+ s = Resolution.get_str(i)
+ self.assertEqual(s, r)
+
+ self.assertEqual(Resolution.get_str(None), 'day')
+
+ def test_get_reso_string(self):
+ resos = 'microsecond', 'second', 'minute', 'hour', 'day'
+
+ for i, r in enumerate(resos):
+ s = Resolution.get_str(i)
+ self.assertEqual(s, r)
+ self.assertEqual(get_reso_string(i), s)
+
+ self.assertEqual(Resolution.get_str(None), 'day')
+ self.assertEqual(get_reso_string(None), 'day')
+
+
if __name__ == '__main__':
import nose
nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index e3b51033a098c..5119e94d6ea9d 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -22,12 +22,13 @@
import pandas.core.datetools as datetools
import pandas as pd
import numpy as np
-randn = np.random.randn
+from numpy.random import randn
from pandas import Series, TimeSeries, DataFrame
from pandas.util.testing import assert_series_equal, assert_almost_equal
import pandas.util.testing as tm
+
class TestPeriodProperties(TestCase):
"Test properties such as year, month, weekday, etc...."
#
@@ -92,7 +93,7 @@ def test_period_constructor(self):
i4 = Period('2005', freq='M')
i5 = Period('2005', freq='m')
- self.assert_(i1 != i4)
+ self.assertNotEqual(i1, i4)
self.assertEquals(i4, i5)
i1 = Period.now('Q')
@@ -182,9 +183,13 @@ def test_period_constructor(self):
i2 = Period(datetime(2007, 1, 1), freq='M')
self.assertEqual(i1, i2)
- self.assertRaises(ValueError, Period, ordinal=200701)
+ i1 = Period(date(2007, 1, 1), freq='U')
+ i2 = Period(datetime(2007, 1, 1), freq='U')
+ self.assertEqual(i1, i2)
- self.assertRaises(KeyError, Period, '2007-1-1', freq='U')
+ # errors
+ self.assertRaises(ValueError, Period, ordinal=200701)
+ self.assertRaises(KeyError, Period, '2007-1-1', freq='L')
def test_freq_str(self):
i1 = Period('1982', freq='Min')
@@ -199,8 +204,12 @@ def test_repr(self):
def test_strftime(self):
p = Period('2000-1-1 12:34:12', freq='S')
- self.assert_(p.strftime('%Y-%m-%d %H:%M:%S') ==
- '2000-01-01 12:34:12')
+ s = p.strftime('%Y-%m-%d %H:%M:%S')
+ self.assertEqual(s, '2000-01-01 12:34:12')
+
+ p = Period('2001-1-1 12:34:12', freq='U')
+ s = p.strftime('%Y-%m-%d %H:%M:%S.%u')
+ self.assertEqual(s, '2001-01-01 12:34:12.000000')
def test_sub_delta(self):
left, right = Period('2011', freq='A'), Period('2007', freq='A')
@@ -222,13 +231,12 @@ def test_to_timestamp(self):
for a in aliases:
self.assertEquals(end_ts, p.to_timestamp('D', how=a))
- from_lst = ['A', 'Q', 'M', 'W', 'B',
- 'D', 'H', 'Min', 'S']
+ from_lst = 'A', 'Q', 'M', 'W', 'B', 'D', 'H', 'Min', 'S', 'U'
def _ex(p):
return Timestamp((p + 1).start_time.value - 1)
- for i, fcode in enumerate(from_lst):
+ for fcode in from_lst:
p = Period('1982', freq=fcode)
result = p.to_timestamp().to_period(fcode)
self.assertEquals(result, p)
@@ -253,22 +261,26 @@ def _ex(p):
expected = datetime(1985, 12, 31)
self.assertEquals(result, expected)
- expected = datetime(1985, 1, 1)
- result = p.to_timestamp('H', how='start')
- self.assertEquals(result, expected)
- result = p.to_timestamp('T', how='start')
- self.assertEquals(result, expected)
- result = p.to_timestamp('S', how='start')
+ result = p.to_timestamp('U', how='end')
+ expected = datetime(1985, 12, 31, 23, 59, 59, 999999)
self.assertEquals(result, expected)
+ expected = datetime(1985, 1, 1)
+
+ for freq in ('H', 'T', 'S', 'U'):
+ result = p.to_timestamp(freq, how='start')
+ self.assertEquals(result, expected)
+
self.assertRaises(ValueError, p.to_timestamp, '5t')
def test_start_time(self):
- freq_lst = ['A', 'Q', 'M', 'D', 'H', 'T', 'S']
+ freq_lst = 'A', 'Q', 'M', 'D', 'H', 'T', 'S', 'U'
xp = datetime(2012, 1, 1)
+
for f in freq_lst:
p = Period('2012', freq=f)
self.assertEquals(p.start_time, xp)
+
self.assertEquals(Period('2012', freq='B').start_time,
datetime(2011, 12, 30))
self.assertEquals(Period('2012', freq='W').start_time,
@@ -277,8 +289,8 @@ def test_start_time(self):
def test_end_time(self):
p = Period('2012', freq='A')
- def _ex(*args):
- return Timestamp(Timestamp(datetime(*args)).value - 1)
+ def _ex(*args, **kwargs):
+ return Timestamp(Timestamp(datetime(*args, **kwargs)).value - 1)
xp = _ex(2013, 1, 1)
self.assertEquals(xp, p.end_time)
@@ -299,13 +311,24 @@ def _ex(*args):
p = Period('2012', freq='H')
self.assertEquals(p.end_time, xp)
+ xp = _ex(2012, 1, 1, hour=0, minute=1)
+ p = Period('2012', freq='T')
+ self.assertEquals(p.end_time, xp)
+
+ xp = _ex(2012, 1, 1, hour=0, minute=0, second=1)
+ p = Period('2012', freq='S')
+ self.assertEquals(p.end_time, xp)
+
+ xp = _ex(2012, 1, 1, hour=0, minute=0, second=0, microsecond=1)
+ p = Period('2012', freq='U')
+ self.assertEquals(p.end_time, xp)
+
xp = _ex(2012, 1, 2)
self.assertEquals(Period('2012', freq='B').end_time, xp)
xp = _ex(2012, 1, 2)
self.assertEquals(Period('2012', freq='W').end_time, xp)
-
def test_properties_annually(self):
# Test properties on Periods with annually frequency.
a_date = Period(freq='A', year=2007)
@@ -322,7 +345,6 @@ def test_properties_quarterly(self):
assert_equal((qd + x).qyear, 2007)
assert_equal((qd + x).quarter, x + 1)
-
def test_properties_monthly(self):
# Test properties on Periods with daily frequency.
m_date = Period(freq='M', year=2007, month=1)
@@ -337,8 +359,7 @@ def test_properties_monthly(self):
assert_equal(m_ival_x.quarter, 3)
elif 10 <= x + 1 <= 12:
assert_equal(m_ival_x.quarter, 4)
- assert_equal(m_ival_x.month, x + 1)
-
+ assert_equal(m_ival_x.month, x + 1)
def test_properties_weekly(self):
# Test properties on Periods with daily frequency.
@@ -350,7 +371,6 @@ def test_properties_weekly(self):
assert_equal(w_date.week, 1)
assert_equal((w_date - 1).week, 52)
-
def test_properties_daily(self):
# Test properties on Periods with daily frequency.
b_date = Period(freq='B', year=2007, month=1, day=1)
@@ -371,7 +391,6 @@ def test_properties_daily(self):
assert_equal(d_date.weekday, 0)
assert_equal(d_date.dayofyear, 1)
-
def test_properties_hourly(self):
# Test properties on Periods with hourly frequency.
h_date = Period(freq='H', year=2007, month=1, day=1, hour=0)
@@ -385,11 +404,10 @@ def test_properties_hourly(self):
assert_equal(h_date.hour, 0)
#
-
def test_properties_minutely(self):
# Test properties on Periods with minutely frequency.
t_date = Period(freq='Min', year=2007, month=1, day=1, hour=0,
- minute=0)
+ minute=0)
#
assert_equal(t_date.quarter, 1)
assert_equal(t_date.month, 1)
@@ -399,11 +417,10 @@ def test_properties_minutely(self):
assert_equal(t_date.hour, 0)
assert_equal(t_date.minute, 0)
-
def test_properties_secondly(self):
# Test properties on Periods with secondly frequency.
s_date = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=0, second=0)
+ hour=0, minute=0, second=0)
#
assert_equal(s_date.year, 2007)
assert_equal(s_date.quarter, 1)
@@ -415,6 +432,21 @@ def test_properties_secondly(self):
assert_equal(s_date.minute, 0)
assert_equal(s_date.second, 0)
+ def test_properties_microsecondly(self):
+ u_date = Period(freq='U', year=2007, month=1, day=1, hour=0, minute=40,
+ second=5, microsecond=10)
+ #
+ assert_equal(u_date.year, 2007)
+ assert_equal(u_date.quarter, 1)
+ assert_equal(u_date.month, 1)
+ assert_equal(u_date.day, 1)
+ assert_equal(u_date.weekday, 0)
+ assert_equal(u_date.dayofyear, 1)
+ assert_equal(u_date.hour, 0)
+ assert_equal(u_date.minute, 40)
+ assert_equal(u_date.second, 5)
+ assert_equal(u_date.microsecond, 10)
+
def test_pnow(self):
dt = datetime.now()
@@ -442,27 +474,30 @@ def test_constructor_corner(self):
def test_constructor_infer_freq(self):
p = Period('2007-01-01')
- self.assert_(p.freq == 'D')
+ self.assertEqual(p.freq, 'D')
p = Period('2007-01-01 07')
- self.assert_(p.freq == 'H')
+ self.assertEqual(p.freq, 'H')
p = Period('2007-01-01 07:10')
- self.assert_(p.freq == 'T')
+ self.assertEqual(p.freq, 'T')
p = Period('2007-01-01 07:10:15')
- self.assert_(p.freq == 'S')
+ self.assertEqual(p.freq, 'S')
- self.assertRaises(ValueError, Period, '2007-01-01 07:10:15.123456')
+ p = Period('2007-01-01 07:10:15.123456')
+ self.assertEqual(p.freq, 'U')
def test_comparisons(self):
p = Period('2007-01-01')
self.assertEquals(p, p)
- self.assert_(not p == 1)
+ self.assertNotEqual(p, 1)
+
def noWrap(item):
return item
+
class TestFreqConversion(TestCase):
"Test frequency conversion of date objects"
@@ -493,17 +528,22 @@ def test_conv_annual(self):
ival_A_to_D_start = Period(freq='D', year=2007, month=1, day=1)
ival_A_to_D_end = Period(freq='D', year=2007, month=12, day=31)
ival_A_to_H_start = Period(freq='H', year=2007, month=1, day=1,
- hour=0)
+ hour=0)
ival_A_to_H_end = Period(freq='H', year=2007, month=12, day=31,
- hour=23)
+ hour=23)
ival_A_to_T_start = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=0)
+ hour=0, minute=0)
ival_A_to_T_end = Period(freq='Min', year=2007, month=12, day=31,
- hour=23, minute=59)
+ hour=23, minute=59)
ival_A_to_S_start = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=0, second=0)
+ hour=0, minute=0, second=0)
ival_A_to_S_end = Period(freq='S', year=2007, month=12, day=31,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59)
+ ival_A_to_U_start = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=0, microsecond=0)
+ ival_A_to_U_end = Period(freq='U', year=2007, month=12, day=31,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
ival_AJAN_to_D_end = Period(freq='D', year=2007, month=1, day=31)
ival_AJAN_to_D_start = Period(freq='D', year=2006, month=2, day=1)
@@ -530,6 +570,8 @@ def test_conv_annual(self):
assert_equal(ival_A.asfreq('T', 'E'), ival_A_to_T_end)
assert_equal(ival_A.asfreq('S', 'S'), ival_A_to_S_start)
assert_equal(ival_A.asfreq('S', 'E'), ival_A_to_S_end)
+ assert_equal(ival_A.asfreq('U', 'S'), ival_A_to_U_start)
+ assert_equal(ival_A.asfreq('U', 'E'), ival_A_to_U_end)
assert_equal(ival_AJAN.asfreq('D', 'S'), ival_AJAN_to_D_start)
assert_equal(ival_AJAN.asfreq('D', 'E'), ival_AJAN_to_D_end)
@@ -542,7 +584,6 @@ def test_conv_annual(self):
assert_equal(ival_A.asfreq('A'), ival_A)
-
def test_conv_quarterly(self):
# frequency conversion tests: from Quarterly Frequency
@@ -562,17 +603,22 @@ def test_conv_quarterly(self):
ival_Q_to_D_start = Period(freq='D', year=2007, month=1, day=1)
ival_Q_to_D_end = Period(freq='D', year=2007, month=3, day=31)
ival_Q_to_H_start = Period(freq='H', year=2007, month=1, day=1,
- hour=0)
+ hour=0)
ival_Q_to_H_end = Period(freq='H', year=2007, month=3, day=31,
- hour=23)
+ hour=23)
ival_Q_to_T_start = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=0)
+ hour=0, minute=0)
ival_Q_to_T_end = Period(freq='Min', year=2007, month=3, day=31,
- hour=23, minute=59)
+ hour=23, minute=59)
ival_Q_to_S_start = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=0, second=0)
+ hour=0, minute=0, second=0)
ival_Q_to_S_end = Period(freq='S', year=2007, month=3, day=31,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59)
+ ival_Q_to_U_start = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=0, microsecond=0)
+ ival_Q_to_U_end = Period(freq='U', year=2007, month=3, day=31,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
ival_QEJAN_to_D_start = Period(freq='D', year=2006, month=2, day=1)
ival_QEJAN_to_D_end = Period(freq='D', year=2006, month=4, day=30)
@@ -597,6 +643,8 @@ def test_conv_quarterly(self):
assert_equal(ival_Q.asfreq('Min', 'E'), ival_Q_to_T_end)
assert_equal(ival_Q.asfreq('S', 'S'), ival_Q_to_S_start)
assert_equal(ival_Q.asfreq('S', 'E'), ival_Q_to_S_end)
+ assert_equal(ival_Q.asfreq('U', 'S'), ival_Q_to_U_start)
+ assert_equal(ival_Q.asfreq('U', 'E'), ival_Q_to_U_end)
assert_equal(ival_QEJAN.asfreq('D', 'S'), ival_QEJAN_to_D_start)
assert_equal(ival_QEJAN.asfreq('D', 'E'), ival_QEJAN_to_D_end)
@@ -620,17 +668,22 @@ def test_conv_monthly(self):
ival_M_to_D_start = Period(freq='D', year=2007, month=1, day=1)
ival_M_to_D_end = Period(freq='D', year=2007, month=1, day=31)
ival_M_to_H_start = Period(freq='H', year=2007, month=1, day=1,
- hour=0)
+ hour=0)
ival_M_to_H_end = Period(freq='H', year=2007, month=1, day=31,
- hour=23)
+ hour=23)
ival_M_to_T_start = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=0)
+ hour=0, minute=0)
ival_M_to_T_end = Period(freq='Min', year=2007, month=1, day=31,
- hour=23, minute=59)
+ hour=23, minute=59)
ival_M_to_S_start = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=0, second=0)
+ hour=0, minute=0, second=0)
ival_M_to_S_end = Period(freq='S', year=2007, month=1, day=31,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59)
+ ival_M_to_U_start = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=0, microsecond=0)
+ ival_M_to_U_end = Period(freq='U', year=2007, month=1, day=31,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
assert_equal(ival_M.asfreq('A'), ival_M_to_A)
assert_equal(ival_M_end_of_year.asfreq('A'), ival_M_to_A)
@@ -649,10 +702,11 @@ def test_conv_monthly(self):
assert_equal(ival_M.asfreq('Min', 'E'), ival_M_to_T_end)
assert_equal(ival_M.asfreq('S', 'S'), ival_M_to_S_start)
assert_equal(ival_M.asfreq('S', 'E'), ival_M_to_S_end)
+ assert_equal(ival_M.asfreq('U', 'S'), ival_M_to_U_start)
+ assert_equal(ival_M.asfreq('U', 'E'), ival_M_to_U_end)
assert_equal(ival_M.asfreq('M'), ival_M)
-
def test_conv_weekly(self):
# frequency conversion tests: from Weekly Frequency
@@ -695,10 +749,10 @@ def test_conv_weekly(self):
if Period(freq='D', year=2007, month=3, day=31).weekday == 6:
ival_W_to_Q_end_of_quarter = Period(freq='Q', year=2007,
- quarter=1)
+ quarter=1)
else:
ival_W_to_Q_end_of_quarter = Period(freq='Q', year=2007,
- quarter=2)
+ quarter=2)
if Period(freq='D', year=2007, month=1, day=31).weekday == 6:
ival_W_to_M_end_of_month = Period(freq='M', year=2007, month=1)
@@ -710,17 +764,22 @@ def test_conv_weekly(self):
ival_W_to_D_start = Period(freq='D', year=2007, month=1, day=1)
ival_W_to_D_end = Period(freq='D', year=2007, month=1, day=7)
ival_W_to_H_start = Period(freq='H', year=2007, month=1, day=1,
- hour=0)
+ hour=0)
ival_W_to_H_end = Period(freq='H', year=2007, month=1, day=7,
- hour=23)
+ hour=23)
ival_W_to_T_start = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=0)
+ hour=0, minute=0)
ival_W_to_T_end = Period(freq='Min', year=2007, month=1, day=7,
- hour=23, minute=59)
+ hour=23, minute=59)
ival_W_to_S_start = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=0, second=0)
+ hour=0, minute=0, second=0)
ival_W_to_S_end = Period(freq='S', year=2007, month=1, day=7,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59)
+ ival_W_to_U_start = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=0, microsecond=0)
+ ival_W_to_U_end = Period(freq='U', year=2007, month=1, day=7,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
assert_equal(ival_W.asfreq('A'), ival_W_to_A)
assert_equal(ival_W_end_of_year.asfreq('A'),
@@ -760,8 +819,10 @@ def test_conv_weekly(self):
assert_equal(ival_W.asfreq('S', 'S'), ival_W_to_S_start)
assert_equal(ival_W.asfreq('S', 'E'), ival_W_to_S_end)
- assert_equal(ival_W.asfreq('WK'), ival_W)
+ assert_equal(ival_W.asfreq('U', 'S'), ival_W_to_U_start)
+ assert_equal(ival_W.asfreq('U', 'E'), ival_W_to_U_end)
+ assert_equal(ival_W.asfreq('WK'), ival_W)
def test_conv_business(self):
# frequency conversion tests: from Business Frequency"
@@ -778,17 +839,22 @@ def test_conv_business(self):
ival_B_to_W = Period(freq='WK', year=2007, month=1, day=7)
ival_B_to_D = Period(freq='D', year=2007, month=1, day=1)
ival_B_to_H_start = Period(freq='H', year=2007, month=1, day=1,
- hour=0)
+ hour=0)
ival_B_to_H_end = Period(freq='H', year=2007, month=1, day=1,
- hour=23)
+ hour=23)
ival_B_to_T_start = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=0)
+ hour=0, minute=0)
ival_B_to_T_end = Period(freq='Min', year=2007, month=1, day=1,
- hour=23, minute=59)
+ hour=23, minute=59)
ival_B_to_S_start = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=0, second=0)
+ hour=0, minute=0, second=0)
ival_B_to_S_end = Period(freq='S', year=2007, month=1, day=1,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59)
+ ival_B_to_U_start = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=0, microsecond=0)
+ ival_B_to_U_end = Period(freq='U', year=2007, month=1, day=1,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
assert_equal(ival_B.asfreq('A'), ival_B_to_A)
assert_equal(ival_B_end_of_year.asfreq('A'), ival_B_to_A)
@@ -807,10 +873,11 @@ def test_conv_business(self):
assert_equal(ival_B.asfreq('Min', 'E'), ival_B_to_T_end)
assert_equal(ival_B.asfreq('S', 'S'), ival_B_to_S_start)
assert_equal(ival_B.asfreq('S', 'E'), ival_B_to_S_end)
+ assert_equal(ival_B.asfreq('U', 'S'), ival_B_to_U_start)
+ assert_equal(ival_B.asfreq('U', 'E'), ival_B_to_U_end)
assert_equal(ival_B.asfreq('B'), ival_B)
-
def test_conv_daily(self):
# frequency conversion tests: from Business Frequency"
@@ -842,17 +909,22 @@ def test_conv_daily(self):
ival_D_to_W = Period(freq='WK', year=2007, month=1, day=7)
ival_D_to_H_start = Period(freq='H', year=2007, month=1, day=1,
- hour=0)
+ hour=0)
ival_D_to_H_end = Period(freq='H', year=2007, month=1, day=1,
- hour=23)
+ hour=23)
ival_D_to_T_start = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=0)
+ hour=0, minute=0)
ival_D_to_T_end = Period(freq='Min', year=2007, month=1, day=1,
- hour=23, minute=59)
+ hour=23, minute=59)
ival_D_to_S_start = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=0, second=0)
+ hour=0, minute=0, second=0)
ival_D_to_S_end = Period(freq='S', year=2007, month=1, day=1,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59)
+ ival_D_to_U_start = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=0, microsecond=0)
+ ival_D_to_U_end = Period(freq='U', year=2007, month=1, day=1,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
assert_equal(ival_D.asfreq('A'), ival_D_to_A)
@@ -879,12 +951,17 @@ def test_conv_daily(self):
assert_equal(ival_D_sunday.asfreq('B', 'S'), ival_B_friday)
assert_equal(ival_D_sunday.asfreq('B', 'E'), ival_B_monday)
+ assert_equal(ival_D_monday.asfreq('B', 'S'), ival_B_monday)
+ assert_equal(ival_D_monday.asfreq('B', 'E'), ival_B_monday)
+
assert_equal(ival_D.asfreq('H', 'S'), ival_D_to_H_start)
assert_equal(ival_D.asfreq('H', 'E'), ival_D_to_H_end)
assert_equal(ival_D.asfreq('Min', 'S'), ival_D_to_T_start)
assert_equal(ival_D.asfreq('Min', 'E'), ival_D_to_T_end)
assert_equal(ival_D.asfreq('S', 'S'), ival_D_to_S_start)
assert_equal(ival_D.asfreq('S', 'E'), ival_D_to_S_end)
+ assert_equal(ival_D.asfreq('U', 'S'), ival_D_to_U_start)
+ assert_equal(ival_D.asfreq('U', 'E'), ival_D_to_U_end)
assert_equal(ival_D.asfreq('D'), ival_D)
@@ -895,15 +972,15 @@ def test_conv_hourly(self):
ival_H_end_of_year = Period(freq='H', year=2007, month=12, day=31,
hour=23)
ival_H_end_of_quarter = Period(freq='H', year=2007, month=3, day=31,
- hour=23)
+ hour=23)
ival_H_end_of_month = Period(freq='H', year=2007, month=1, day=31,
- hour=23)
+ hour=23)
ival_H_end_of_week = Period(freq='H', year=2007, month=1, day=7,
hour=23)
ival_H_end_of_day = Period(freq='H', year=2007, month=1, day=1,
- hour=23)
+ hour=23)
ival_H_end_of_bus = Period(freq='H', year=2007, month=1, day=1,
- hour=23)
+ hour=23)
ival_H_to_A = Period(freq='A', year=2007)
ival_H_to_Q = Period(freq='Q', year=2007, quarter=1)
@@ -913,13 +990,17 @@ def test_conv_hourly(self):
ival_H_to_B = Period(freq='B', year=2007, month=1, day=1)
ival_H_to_T_start = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=0)
+ hour=0, minute=0)
ival_H_to_T_end = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=59)
+ hour=0, minute=59)
ival_H_to_S_start = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=0, second=0)
+ hour=0, minute=0, second=0)
ival_H_to_S_end = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=59, second=59)
+ hour=0, minute=59, second=59)
+ ival_H_to_U_start = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=0, microsecond=0)
+ ival_H_to_U_end = Period(freq='U', year=2007, month=1, day=1, hour=0,
+ minute=59, second=59, microsecond=999999)
assert_equal(ival_H.asfreq('A'), ival_H_to_A)
assert_equal(ival_H_end_of_year.asfreq('A'), ival_H_to_A)
@@ -936,28 +1017,32 @@ def test_conv_hourly(self):
assert_equal(ival_H.asfreq('Min', 'S'), ival_H_to_T_start)
assert_equal(ival_H.asfreq('Min', 'E'), ival_H_to_T_end)
+
assert_equal(ival_H.asfreq('S', 'S'), ival_H_to_S_start)
assert_equal(ival_H.asfreq('S', 'E'), ival_H_to_S_end)
+ assert_equal(ival_H.asfreq('U', 'S'), ival_H_to_U_start)
+ assert_equal(ival_H.asfreq('U', 'E'), ival_H_to_U_end)
+
assert_equal(ival_H.asfreq('H'), ival_H)
def test_conv_minutely(self):
# frequency conversion tests: from Minutely Frequency"
- ival_T = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=0)
+ ival_T = Period(freq='Min', year=2007, month=1, day=1, hour=0,
+ minute=0)
ival_T_end_of_year = Period(freq='Min', year=2007, month=12, day=31,
hour=23, minute=59)
ival_T_end_of_quarter = Period(freq='Min', year=2007, month=3, day=31,
- hour=23, minute=59)
+ hour=23, minute=59)
ival_T_end_of_month = Period(freq='Min', year=2007, month=1, day=31,
- hour=23, minute=59)
+ hour=23, minute=59)
ival_T_end_of_week = Period(freq='Min', year=2007, month=1, day=7,
hour=23, minute=59)
ival_T_end_of_day = Period(freq='Min', year=2007, month=1, day=1,
- hour=23, minute=59)
+ hour=23, minute=59)
ival_T_end_of_bus = Period(freq='Min', year=2007, month=1, day=1,
- hour=23, minute=59)
+ hour=23, minute=59)
ival_T_end_of_hour = Period(freq='Min', year=2007, month=1, day=1,
hour=0, minute=59)
@@ -970,9 +1055,14 @@ def test_conv_minutely(self):
ival_T_to_H = Period(freq='H', year=2007, month=1, day=1, hour=0)
ival_T_to_S_start = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=0, second=0)
+ hour=0, minute=0, second=0)
ival_T_to_S_end = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=0, second=59)
+ hour=0, minute=0, second=59)
+ ival_T_to_U_start = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=0, microsecond=0)
+ ival_T_to_U_end = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=59,
+ microsecond=999999)
assert_equal(ival_T.asfreq('A'), ival_T_to_A)
assert_equal(ival_T_end_of_year.asfreq('A'), ival_T_to_A)
@@ -992,6 +1082,9 @@ def test_conv_minutely(self):
assert_equal(ival_T.asfreq('S', 'S'), ival_T_to_S_start)
assert_equal(ival_T.asfreq('S', 'E'), ival_T_to_S_end)
+ assert_equal(ival_T.asfreq('U', 'S'), ival_T_to_U_start)
+ assert_equal(ival_T.asfreq('U', 'E'), ival_T_to_U_end)
+
assert_equal(ival_T.asfreq('Min'), ival_T)
def test_conv_secondly(self):
@@ -1000,21 +1093,32 @@ def test_conv_secondly(self):
ival_S = Period(freq='S', year=2007, month=1, day=1,
hour=0, minute=0, second=0)
ival_S_end_of_year = Period(freq='S', year=2007, month=12, day=31,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59,
+ microsecond=999999)
ival_S_end_of_quarter = Period(freq='S', year=2007, month=3, day=31,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59,
+ microsecond=999999)
ival_S_end_of_month = Period(freq='S', year=2007, month=1, day=31,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59,
+ microsecond=999999)
ival_S_end_of_week = Period(freq='S', year=2007, month=1, day=7,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59,
+ microsecond=999999)
ival_S_end_of_day = Period(freq='S', year=2007, month=1, day=1,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59,
+ microsecond=999999)
ival_S_end_of_bus = Period(freq='S', year=2007, month=1, day=1,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59,
+ microsecond=999999)
ival_S_end_of_hour = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=59, second=59)
+ hour=0, minute=59, second=59,
+ microsecond=999999)
ival_S_end_of_minute = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=0, second=59)
+ hour=0, minute=0, second=59,
+ microsecond=999999)
+ ival_S_end_of_second = Period(freq='S', year=2007, month=1, day=1,
+ hour=0, minute=0, second=0,
+ microsecond=999999)
ival_S_to_A = Period(freq='A', year=2007)
ival_S_to_Q = Period(freq='Q', year=2007, quarter=1)
@@ -1022,30 +1126,115 @@ def test_conv_secondly(self):
ival_S_to_W = Period(freq='WK', year=2007, month=1, day=7)
ival_S_to_D = Period(freq='D', year=2007, month=1, day=1)
ival_S_to_B = Period(freq='B', year=2007, month=1, day=1)
- ival_S_to_H = Period(freq='H', year=2007, month=1, day=1,
- hour=0)
- ival_S_to_T = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=0)
+ ival_S_to_H = Period(freq='H', year=2007, month=1, day=1, hour=0)
+ ival_S_to_T = Period(freq='Min', year=2007, month=1, day=1, hour=0,
+ minute=0)
+ ival_S_to_U = Period(freq='U', year=2007, month=1, day=1, hour=0,
+ minute=0, second=0, microsecond=999999)
+
+ ival_S_to_U_start = Period(freq='U', year=2007, month=1, day=1, hour=0,
+ minute=0, second=0, microsecond=0)
+ ival_S_to_U_end = Period(freq='U', year=2007, month=1, day=1, hour=0,
+ minute=0, second=0, microsecond=999999)
assert_equal(ival_S.asfreq('A'), ival_S_to_A)
assert_equal(ival_S_end_of_year.asfreq('A'), ival_S_to_A)
+
assert_equal(ival_S.asfreq('Q'), ival_S_to_Q)
assert_equal(ival_S_end_of_quarter.asfreq('Q'), ival_S_to_Q)
+
assert_equal(ival_S.asfreq('M'), ival_S_to_M)
assert_equal(ival_S_end_of_month.asfreq('M'), ival_S_to_M)
+
assert_equal(ival_S.asfreq('WK'), ival_S_to_W)
assert_equal(ival_S_end_of_week.asfreq('WK'), ival_S_to_W)
+
assert_equal(ival_S.asfreq('D'), ival_S_to_D)
assert_equal(ival_S_end_of_day.asfreq('D'), ival_S_to_D)
+
assert_equal(ival_S.asfreq('B'), ival_S_to_B)
assert_equal(ival_S_end_of_bus.asfreq('B'), ival_S_to_B)
+
assert_equal(ival_S.asfreq('H'), ival_S_to_H)
assert_equal(ival_S_end_of_hour.asfreq('H'), ival_S_to_H)
+
assert_equal(ival_S.asfreq('Min'), ival_S_to_T)
assert_equal(ival_S_end_of_minute.asfreq('Min'), ival_S_to_T)
+ assert_equal(ival_S.asfreq('U'), ival_S_to_U)
+ assert_equal(ival_S_end_of_second.asfreq('U'), ival_S_to_U)
+
+ assert_equal(ival_S.asfreq('U', 'S'), ival_S_to_U_start)
+ assert_equal(ival_S.asfreq('U', 'E'), ival_S_to_U_end)
+
assert_equal(ival_S.asfreq('S'), ival_S)
+ def test_conv_microsecondly(self):
+ ival_U = Period(freq='U', year=2007, month=1, day=1, hour=0, minute=0,
+ second=0, microsecond=0)
+ ival_U_end_of_year = Period(freq='U', year=2007, month=12, day=31,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
+ ival_U_end_of_quarter = Period(freq='U', year=2007, month=3, day=31,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
+ ival_U_end_of_month = Period(freq='U', year=2007, month=1, day=31,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
+ ival_U_end_of_week = Period(freq='U', year=2007, month=1, day=7,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
+ ival_U_end_of_day = Period(freq='U', year=2007, month=1, day=1,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
+ ival_U_end_of_bus = Period(freq='U', year=2007, month=1, day=1,
+ hour=23, minute=59, second=59,
+ microsecond=999999)
+ ival_U_end_of_hour = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=59, second=59,
+ microsecond=999999)
+ ival_U_end_of_minute = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=59,
+ microsecond=999999)
+ ival_U_end_of_second = Period(freq='U', year=2007, month=1, day=1,
+ hour=0, minute=0, second=0,
+ microsecond=999999)
+
+ ival_U_to_A = Period(freq='A', year=2007)
+ ival_U_to_Q = Period(freq='Q', year=2007, quarter=1)
+ ival_U_to_M = Period(freq='M', year=2007, month=1)
+ ival_U_to_W = Period(freq='WK', year=2007, month=1, day=7)
+ ival_U_to_D = Period(freq='D', year=2007, month=1, day=1)
+ ival_U_to_B = Period(freq='B', year=2007, month=1, day=1)
+ ival_U_to_H = Period(freq='H', year=2007, month=1, day=1, hour=0)
+ ival_U_to_T = Period(freq='Min', year=2007, month=1, day=1, hour=0,
+ minute=0)
+ ival_U_to_S = Period(freq='S', year=2007, month=1, day=1, hour=0,
+ minute=0, second=0)
+
+ assert_equal(ival_U.asfreq('A'), ival_U_to_A)
+ assert_equal(ival_U.asfreq('Q'), ival_U_to_Q)
+ assert_equal(ival_U.asfreq('M'), ival_U_to_M)
+ assert_equal(ival_U.asfreq('WK'), ival_U_to_W)
+ assert_equal(ival_U.asfreq('D'), ival_U_to_D)
+ assert_equal(ival_U.asfreq('B'), ival_U_to_B)
+ assert_equal(ival_U.asfreq('H', 'S'), ival_U_to_H)
+ assert_equal(ival_U.asfreq('Min', 'S'), ival_U_to_T)
+ assert_equal(ival_U.asfreq('S'), ival_U_to_S)
+
+ assert_equal(ival_U_end_of_year.asfreq('A'), ival_U_to_A)
+ assert_equal(ival_U_end_of_quarter.asfreq('Q'), ival_U_to_Q)
+ assert_equal(ival_U_end_of_month.asfreq('M'), ival_U_to_M)
+ assert_equal(ival_U_end_of_week.asfreq('WK'), ival_U_to_W)
+ assert_equal(ival_U_end_of_day.asfreq('D'), ival_U_to_D)
+ assert_equal(ival_U_end_of_bus.asfreq('B'), ival_U_to_B)
+ assert_equal(ival_U_end_of_hour.asfreq('H', 'S'), ival_U_to_H)
+ assert_equal(ival_U_end_of_minute.asfreq('Min', 'S'), ival_U_to_T)
+ assert_equal(ival_U_end_of_second.asfreq('S'), ival_U_to_S)
+
+ assert_equal(ival_U.asfreq('U'), ival_U)
+
+
class TestPeriodIndex(TestCase):
def __init__(self, *args, **kwds):
TestCase.__init__(self, *args, **kwds)
@@ -1081,8 +1270,8 @@ def test_constructor_field_arrays(self):
expected = period_range('1990Q3', '2009Q2', freq='Q-DEC')
self.assert_(index.equals(expected))
- self.assertRaises(ValueError, PeriodIndex, year=years, quarter=quarters,
- freq='2Q-DEC')
+ self.assertRaises(ValueError, PeriodIndex, year=years,
+ quarter=quarters, freq='2Q-DEC')
index = PeriodIndex(year=years, quarter=quarters)
self.assert_(index.equals(expected))
@@ -1103,9 +1292,10 @@ def test_constructor_field_arrays(self):
self.assert_(idx.equals(exp))
def test_constructor_U(self):
- # U was used as undefined period
- self.assertRaises(KeyError, period_range, '2007-1-1', periods=500,
- freq='U')
+ pass
+ # # U was used as undefined period
+ # self.assertRaises(KeyError, period_range, '2007-1-1', periods=500,
+ # freq='U')
def test_constructor_arrays_negative_year(self):
years = np.arange(1960, 2000).repeat(4)
@@ -1223,7 +1413,8 @@ def test_sub(self):
self.assert_(result.equals(exp))
def test_periods_number_check(self):
- self.assertRaises(ValueError, period_range, '2011-1-1', '2012-1-1', 'B')
+ self.assertRaises(ValueError, period_range, '2011-1-1', '2012-1-1',
+ 'B')
def test_to_timestamp(self):
index = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
@@ -1238,7 +1429,6 @@ def test_to_timestamp(self):
result = series.to_timestamp(how='start')
self.assert_(result.index.equals(exp_index))
-
def _get_with_delta(delta, freq='A-DEC'):
return date_range(to_datetime('1/1/2001') + delta,
to_datetime('12/31/2009') + delta, freq=freq)
@@ -1258,6 +1448,12 @@ def _get_with_delta(delta, freq='A-DEC'):
exp_index = _get_with_delta(delta)
self.assert_(result.index.equals(exp_index))
+ result = series.to_timestamp('U', 'end')
+ delta = timedelta(hours=23, minutes=59, seconds=59,
+ microseconds=999999)
+ exp_index = _get_with_delta(delta)
+ self.assert_(result.index.equals(exp_index))
+
self.assertRaises(ValueError, index.to_timestamp, '5t')
index = PeriodIndex(freq='H', start='1/1/2001', end='1/2/2001')
@@ -1367,6 +1563,12 @@ def _get_with_delta(delta, freq='A-DEC'):
exp_index = _get_with_delta(delta)
self.assert_(result.index.equals(exp_index))
+ result = df.to_timestamp('U', 'end')
+ delta = timedelta(hours=23, minutes=59, seconds=59,
+ microseconds=999999)
+ exp_index = _get_with_delta(delta)
+ self.assert_(result.index.equals(exp_index))
+
# columns
df = df.T
@@ -1394,6 +1596,12 @@ def _get_with_delta(delta, freq='A-DEC'):
exp_index = _get_with_delta(delta)
self.assert_(result.columns.equals(exp_index))
+ result = df.to_timestamp('U', 'end', axis=1)
+ delta = timedelta(hours=23, minutes=59, seconds=59,
+ microseconds=999999)
+ exp_index = _get_with_delta(delta)
+ self.assert_(result.columns.equals(exp_index))
+
# invalid axis
self.assertRaises(ValueError, df.to_timestamp, axis=2)
@@ -1441,6 +1649,11 @@ def test_constructor(self):
pi = PeriodIndex(freq='S', start='1/1/2001', end='1/1/2001 23:59:59')
assert_equal(len(pi), 24 * 60 * 60)
+ self.assertRaises(MemoryError, PeriodIndex, freq='U',
+ start='1/1/2001',
+ end='1/1/2001 23:59:59.999999')
+ # assert_equal(len(pi), 24 * 60 * 60 * 1000000)
+
start = Period('02-Apr-2005', 'B')
i1 = PeriodIndex(start=start, periods=20)
assert_equal(len(i1), 20)
@@ -1476,7 +1689,8 @@ def test_constructor(self):
try:
PeriodIndex(start=start)
- raise AssertionError('Must specify periods if missing start or end')
+ msg = 'Must specify periods if missing start or end'
+ raise AssertionError(msg)
except ValueError:
pass
@@ -1529,6 +1743,12 @@ def test_shift(self):
assert_equal(len(pi1), len(pi2))
assert_equal(pi1.shift(-1).values, pi2.values)
+ # fails on cpcloud's machine due to not enough memory
+ # pi1 = PeriodIndex(freq='U', start='1/1/2001', end='12/1/2009')
+ # pi2 = PeriodIndex(freq='U', start='12/31/2000', end='11/30/2009')
+ # assert_equal(len(pi1), len(pi2))
+ # assert_equal(pi1.shift(-1).values, pi2.values)
+
def test_asfreq(self):
pi1 = PeriodIndex(freq='A', start='1/1/2001', end='1/1/2001')
pi2 = PeriodIndex(freq='Q', start='1/1/2001', end='1/1/2001')
@@ -1537,6 +1757,8 @@ def test_asfreq(self):
pi5 = PeriodIndex(freq='H', start='1/1/2001', end='1/1/2001 00:00')
pi6 = PeriodIndex(freq='Min', start='1/1/2001', end='1/1/2001 00:00')
pi7 = PeriodIndex(freq='S', start='1/1/2001', end='1/1/2001 00:00:00')
+ pi8 = PeriodIndex(freq='U', start='1/1/2001',
+ end='1/1/2001 00:00:00.000000')
self.assertEquals(pi1.asfreq('Q', 'S'), pi2)
self.assertEquals(pi1.asfreq('Q', 's'), pi2)
@@ -1545,6 +1767,7 @@ def test_asfreq(self):
self.assertEquals(pi1.asfreq('H', 'beGIN'), pi5)
self.assertEquals(pi1.asfreq('Min', 'S'), pi6)
self.assertEquals(pi1.asfreq('S', 'S'), pi7)
+ self.assertEquals(pi1.asfreq('U', 'S'), pi8)
self.assertEquals(pi2.asfreq('A', 'S'), pi1)
self.assertEquals(pi2.asfreq('M', 'S'), pi3)
@@ -1552,6 +1775,7 @@ def test_asfreq(self):
self.assertEquals(pi2.asfreq('H', 'S'), pi5)
self.assertEquals(pi2.asfreq('Min', 'S'), pi6)
self.assertEquals(pi2.asfreq('S', 'S'), pi7)
+ self.assertEquals(pi2.asfreq('U', 'S'), pi8)
self.assertEquals(pi3.asfreq('A', 'S'), pi1)
self.assertEquals(pi3.asfreq('Q', 'S'), pi2)
@@ -1559,6 +1783,7 @@ def test_asfreq(self):
self.assertEquals(pi3.asfreq('H', 'S'), pi5)
self.assertEquals(pi3.asfreq('Min', 'S'), pi6)
self.assertEquals(pi3.asfreq('S', 'S'), pi7)
+ self.assertEquals(pi3.asfreq('U', 'S'), pi8)
self.assertEquals(pi4.asfreq('A', 'S'), pi1)
self.assertEquals(pi4.asfreq('Q', 'S'), pi2)
@@ -1566,6 +1791,7 @@ def test_asfreq(self):
self.assertEquals(pi4.asfreq('H', 'S'), pi5)
self.assertEquals(pi4.asfreq('Min', 'S'), pi6)
self.assertEquals(pi4.asfreq('S', 'S'), pi7)
+ self.assertEquals(pi4.asfreq('U', 'S'), pi8)
self.assertEquals(pi5.asfreq('A', 'S'), pi1)
self.assertEquals(pi5.asfreq('Q', 'S'), pi2)
@@ -1573,6 +1799,7 @@ def test_asfreq(self):
self.assertEquals(pi5.asfreq('D', 'S'), pi4)
self.assertEquals(pi5.asfreq('Min', 'S'), pi6)
self.assertEquals(pi5.asfreq('S', 'S'), pi7)
+ self.assertEquals(pi5.asfreq('U', 'S'), pi8)
self.assertEquals(pi6.asfreq('A', 'S'), pi1)
self.assertEquals(pi6.asfreq('Q', 'S'), pi2)
@@ -1580,6 +1807,7 @@ def test_asfreq(self):
self.assertEquals(pi6.asfreq('D', 'S'), pi4)
self.assertEquals(pi6.asfreq('H', 'S'), pi5)
self.assertEquals(pi6.asfreq('S', 'S'), pi7)
+ self.assertEquals(pi6.asfreq('U', 'S'), pi8)
self.assertEquals(pi7.asfreq('A', 'S'), pi1)
self.assertEquals(pi7.asfreq('Q', 'S'), pi2)
@@ -1587,6 +1815,15 @@ def test_asfreq(self):
self.assertEquals(pi7.asfreq('D', 'S'), pi4)
self.assertEquals(pi7.asfreq('H', 'S'), pi5)
self.assertEquals(pi7.asfreq('Min', 'S'), pi6)
+ self.assertEquals(pi7.asfreq('U', 'S'), pi8)
+
+ self.assertEquals(pi8.asfreq('A', 'S'), pi1)
+ self.assertEquals(pi8.asfreq('Q', 'S'), pi2)
+ self.assertEquals(pi8.asfreq('M', 'S'), pi3)
+ self.assertEquals(pi8.asfreq('D', 'S'), pi4)
+ self.assertEquals(pi8.asfreq('H', 'S'), pi5)
+ self.assertEquals(pi8.asfreq('Min', 'S'), pi6)
+ self.assertEquals(pi8.asfreq('S', 'S'), pi7)
self.assertRaises(ValueError, pi7.asfreq, 'T', 'foo')
self.assertRaises(ValueError, pi1.asfreq, '5t')
@@ -1598,7 +1835,7 @@ def test_ts_repr(self):
def test_frame_index_to_string(self):
index = PeriodIndex(['2011-1', '2011-2', '2011-3'], freq='M')
- frame = DataFrame(np.random.randn(3,4), index=index)
+ frame = DataFrame(np.random.randn(3, 4), index=index)
# it works!
frame.to_string()
@@ -1626,11 +1863,17 @@ def test_badinput(self):
def test_negative_ordinals(self):
p = Period(ordinal=-1000, freq='A')
+ self.assertLess(p.ordinal, 0)
+ self.assertEqual(p.ordinal, -1000)
p = Period(ordinal=0, freq='A')
+ self.assertEqual(p.ordinal, 0)
idx = PeriodIndex(ordinal=[-1, 0, 1], freq='A')
+ assert_equal(idx.values, np.array([-1, 0, 1]))
+
idx = PeriodIndex(ordinal=np.array([-1, 0, 1]), freq='A')
+ assert_equal(idx.values, np.array([-1, 0, 1]))
def test_dti_to_period(self):
dti = DatetimeIndex(start='1/1/2005', end='12/1/2005', freq='M')
@@ -1752,7 +1995,7 @@ def test_joins(self):
joined = index.join(index[:-5], how=kind)
self.assert_(isinstance(joined, PeriodIndex))
- self.assert_(joined.freq == index.freq)
+ self.assertEqual(joined.freq, index.freq)
def test_align_series(self):
rng = period_range('1/1/2000', '1/1/2010', freq='A')
@@ -1845,17 +2088,21 @@ def test_fields(self):
self._check_all_fields(pi)
pi = PeriodIndex(freq='S', start='12/31/2001 00:00:00',
- end='12/31/2001 00:05:00')
+ end='12/31/2001 00:00:02')
+ self._check_all_fields(pi)
+
+ pi = PeriodIndex(freq='U', start='12/31/2001 00:00:00.000000',
+ end='12/31/2001 00:00:00.000002')
self._check_all_fields(pi)
end_intv = Period('2006-12-31', 'W')
- i1 = PeriodIndex(end=end_intv, periods=10)
+ pi = PeriodIndex(end=end_intv, periods=10)
self._check_all_fields(pi)
def _check_all_fields(self, periodindex):
fields = ['year', 'month', 'day', 'hour', 'minute',
'second', 'weekofyear', 'week', 'dayofweek',
- 'weekday', 'dayofyear', 'quarter', 'qyear']
+ 'weekday', 'dayofyear', 'quarter', 'qyear', 'microsecond']
periods = list(periodindex)
@@ -1902,7 +2149,7 @@ def test_convert_array_of_periods(self):
def test_with_multi_index(self):
# #1705
- index = date_range('1/1/2012',periods=4,freq='12H')
+ index = date_range('1/1/2012', periods=4, freq='12H')
index_as_arrays = [index.to_period(freq='D'), index.hour]
s = Series([0, 1, 2, 3], index_as_arrays)
@@ -1912,7 +2159,7 @@ def test_with_multi_index(self):
self.assert_(isinstance(s.index.values[0][0], Period))
def test_to_datetime_1703(self):
- index = period_range('1/1/2012',periods=4,freq='D')
+ index = period_range('1/1/2012', periods=4, freq='D')
result = index.to_datetime()
self.assertEquals(result[0], Timestamp('1/1/2012'))
@@ -1929,10 +2176,11 @@ def test_append_concat(self):
s2 = s2.to_period()
# drops index
- result = pd.concat([s1,s2])
+ result = pd.concat([s1, s2])
self.assert_(isinstance(result.index, PeriodIndex))
self.assertEquals(result.index[0], s1.index[0])
+
def _permute(obj):
return obj.take(np.random.permutation(len(obj)))
@@ -1981,13 +2229,16 @@ def test_minutely(self):
def test_secondly(self):
self._check_freq('S', '1970-01-01')
+ def test_microsecondly(self):
+ self._check_freq('U', '1970-01-01')
+
def _check_freq(self, freq, base_date):
rng = PeriodIndex(start=base_date, periods=10, freq=freq)
exp = np.arange(10, dtype=np.int64)
self.assert_(np.array_equal(rng.values, exp))
def test_negone_ordinals(self):
- freqs = ['A', 'M', 'Q', 'D','H', 'T', 'S']
+ freqs = ['A', 'M', 'Q', 'D', 'H', 'T', 'S', 'U']
period = Period(ordinal=-1, freq='D')
for freq in freqs:
@@ -2006,5 +2257,5 @@ def test_negone_ordinals(self):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
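For context, the microsecond ('U') frequency conversions exercised by the tests above can be sketched as a small usage example against the public `Period` API. This is a sketch of the start/end anchoring behavior that `test_conv_daily` asserts, not part of the patch itself (newer pandas spells the alias `'us'`; `'U'` is the spelling used throughout this PR):

```python
import pandas as pd

# Convert a daily period to microsecond frequency: how='S' anchors to
# the first microsecond of the day, how='E' to the last one.
p = pd.Period('2007-01-01', freq='D')
start = p.asfreq('U', how='S')
end = p.asfreq('U', how='E')

# Both microsecond periods still map back onto the original day.
assert start.asfreq('D') == p
assert end.asfreq('D') == p
```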
diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py
index b724e5eb195ab..034ffa9dae549 100644
--- a/pandas/tseries/tools.py
+++ b/pandas/tseries/tools.py
@@ -116,7 +116,7 @@ def _convert_f(arg):
try:
if not arg:
return arg
- default = datetime(1,1,1)
+ default = datetime(1, 1, 1)
return parse(arg, dayfirst=dayfirst, default=default)
except Exception:
if errors == 'raise':
@@ -157,17 +157,15 @@ def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None):
datetime, datetime/dateutil.parser._result, str
"""
from pandas.core.config import get_option
- from pandas.tseries.offsets import DateOffset
- from pandas.tseries.frequencies import (_get_rule_month, _month_numbers,
- _get_freq_str)
+ from pandas.tseries.frequencies import _get_rule_month, _month_numbers
if not isinstance(arg, basestring):
return arg
arg = arg.upper()
- default = datetime(1, 1, 1).replace(hour=0, minute=0,
- second=0, microsecond=0)
+ default = datetime(1, 1, 1).replace(hour=0, minute=0, second=0,
+ microsecond=0)
# special handling for possibilities eg, 2Q2005, 2Q05, 2005Q1, 05Q1
if len(arg) in [4, 6]:
@@ -240,6 +238,7 @@ def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None):
return parsed, parsed, reso # datetime, resolution
+
def dateutil_parse(timestr, default,
ignoretz=False, tzinfos=None,
**kwargs):
@@ -247,7 +246,7 @@ def dateutil_parse(timestr, default,
res = DEFAULTPARSER._parse(StringIO(timestr), **kwargs)
if res is None:
- raise ValueError, "unknown string format"
+ raise ValueError("unknown string format")
repl = {}
for attr in ["year", "month", "day", "hour",
@@ -261,7 +260,7 @@ def dateutil_parse(timestr, default,
ret = default.replace(**repl)
if res.weekday is not None and not res.day:
- ret = ret+relativedelta.relativedelta(weekday=res.weekday)
+ ret += relativedelta.relativedelta(weekday=res.weekday)
if not ignoretz:
if callable(tzinfos) or tzinfos and res.tzname in tzinfos:
if callable(tzinfos):
@@ -275,8 +274,8 @@ def dateutil_parse(timestr, default,
elif isinstance(tzdata, int):
tzinfo = tz.tzoffset(res.tzname, tzdata)
else:
- raise ValueError, "offset must be tzinfo subclass, " \
- "tz string, or int offset"
+ raise ValueError("offset must be tzinfo subclass, tz string, "
+ "or int offset")
ret = ret.replace(tzinfo=tzinfo)
elif res.tzname and res.tzname in time.tzname:
ret = ret.replace(tzinfo=tz.tzlocal())
@@ -286,6 +285,7 @@ def dateutil_parse(timestr, default,
ret = ret.replace(tzinfo=tz.tzoffset(res.tzname, res.tzoffset))
return ret, reso
+
def _attempt_monthly(val):
pats = ['%Y-%m', '%m-%Y', '%b %Y', '%b-%Y']
for pat in pats:
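The default-datetime handling that the `parse_time_string`/`dateutil_parse` hunks above refactor can be illustrated in plain Python. This sketch uses dateutil's public `parser.parse` rather than the private `_parse` call the module makes, but the fallback behavior is the same: fields the string omits are filled from the zeroed default, so a bare year-month resolves to day 1 at midnight.

```python
from datetime import datetime
from dateutil import parser

# A zeroed default, mirroring datetime(1, 1, 1).replace(hour=0, minute=0,
# second=0, microsecond=0) in parse_time_string.
default = datetime(1, 1, 1, 0, 0, 0, 0)

# '2005-11' supplies only year and month; day/time come from the default.
parsed = parser.parse('2005-11', default=default)
assert parsed == datetime(2005, 11, 1, 0, 0, 0)
```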
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index b6e82489177b9..9bc58381346fe 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -1,15 +1,17 @@
# cython: profile=False
cimport numpy as np
-from numpy cimport (int32_t, int64_t, import_array, ndarray,
+from numpy cimport (int32_t as i4, int64_t, import_array, ndarray,
NPY_INT64, NPY_DATETIME)
import numpy as np
+ctypedef int64_t i8
+
from cpython cimport *
# Cython < 0.17 doesn't have this in cpython
-cdef extern from "Python.h":
- cdef PyTypeObject *Py_TYPE(object)
+# cdef extern from "Python.h":
+ # cdef int PyObject_TypeCheck(PyObject*, PyTypeObject*)
from libc.stdlib cimport free
@@ -37,7 +39,7 @@ PyDateTime_IMPORT
# in numpy 1.7, will prob need the following:
# numpy_pydatetime_import
-cdef int64_t NPY_NAT = util.get_nat()
+cdef i8 NPY_NAT = util.get_nat()
try:
@@ -45,7 +47,8 @@ try:
except NameError: # py3
basestring = str
-def ints_to_pydatetime(ndarray[int64_t] arr, tz=None):
+
+cpdef ndarray[object] ints_to_pydatetime(ndarray[i8] arr, tz=None):
cdef:
Py_ssize_t i, n = len(arr)
pandas_datetimestruct dts
@@ -84,10 +87,15 @@ def ints_to_pydatetime(ndarray[int64_t] arr, tz=None):
return result
+
from dateutil.tz import tzlocal
-def _is_tzlocal(tz):
- return isinstance(tz, tzlocal)
+
+cpdef bint _is_tzlocal(object tz):
+ cdef bint isa = PyObject_TypeCheck(tz, <PyTypeObject*> tzlocal)
+ return isa
+
+
# Python front end to C extension type _Timestamp
# This serves as the box for datetime64
@@ -295,9 +303,10 @@ class NaTType(_NaT):
def toordinal(self):
return -1
-fields = ['year', 'quarter', 'month', 'day', 'hour',
- 'minute', 'second', 'microsecond', 'nanosecond',
- 'week', 'dayofyear']
+cdef tuple fields = ('year', 'quarter', 'month', 'day', 'hour',
+ 'minute', 'second', 'microsecond', 'nanosecond',
+ 'week', 'dayofyear')
+
for field in fields:
prop = property(fget=lambda self: -1)
setattr(NaTType, field, prop)
@@ -307,19 +316,24 @@ NaT = NaTType()
iNaT = util.get_nat()
-cdef _tz_format(object obj, object zone):
+
+cdef object _tz_format(object obj, object zone):
try:
return obj.strftime(' %%Z, tz=%s' % zone)
except:
return ', tz=%s' % zone
-def is_timestamp_array(ndarray[object] values):
+
+cpdef bint is_timestamp_array(ndarray[object] values):
cdef int i, n = len(values)
- if n == 0:
+
+ if not n:
return False
+
for i in range(n):
if not is_timestamp(values[i]):
return False
+
return True
@@ -327,10 +341,13 @@ cpdef object get_value_box(ndarray arr, object loc):
cdef:
Py_ssize_t i, sz
void* data_ptr
+
if util.is_float_object(loc):
casted = int(loc)
+
if casted == loc:
loc = casted
+
i = <Py_ssize_t> loc
sz = np.PyArray_SIZE(arr)
@@ -348,23 +365,27 @@ cpdef object get_value_box(ndarray arr, object loc):
#----------------------------------------------------------------------
# Frequency inference
-def unique_deltas(ndarray[int64_t] arr):
+cpdef ndarray[i8] unique_deltas(ndarray[i8] arr):
cdef:
Py_ssize_t i, n = len(arr)
- int64_t val
+ i8 val
khiter_t k
kh_int64_t *table
int ret = 0
list uniques = []
+ ndarray[i8] result
table = kh_init_int64()
kh_resize_int64(table, 10)
+
for i in range(n - 1):
val = arr[i + 1] - arr[i]
k = kh_get_int64(table, val)
+
if k == table.n_buckets:
kh_put_int64(table, val, &ret)
uniques.append(val)
+
kh_destroy_int64(table)
result = np.array(uniques, dtype=np.int64)
@@ -372,14 +393,14 @@ def unique_deltas(ndarray[int64_t] arr):
return result
-cdef inline bint _is_multiple(int64_t us, int64_t mult):
+cdef inline bint _is_multiple(i8 us, i8 mult):
return us % mult == 0
def apply_offset(ndarray[object] values, object offset):
cdef:
Py_ssize_t i, n = len(values)
- ndarray[int64_t] new_values
+ ndarray[i8] new_values
object boxed
result = np.empty(n, dtype='M8[ns]')
@@ -393,7 +414,7 @@ def apply_offset(ndarray[object] values, object offset):
# shadows the python class, where we do any heavy lifting.
cdef class _Timestamp(datetime):
cdef readonly:
- int64_t value, nanosecond
+ i8 value, nanosecond
object offset # frequency reference
def __hash__(self):
@@ -523,7 +544,8 @@ cdef PyTypeObject* ts_type = <PyTypeObject*> Timestamp
cdef inline bint is_timestamp(object o):
- return Py_TYPE(o) == ts_type # isinstance(o, Timestamp)
+ cdef bint isa = PyObject_TypeCheck(o, ts_type)
+ return isa
cdef class _NaT(_Timestamp):
@@ -546,9 +568,7 @@ cdef class _NaT(_Timestamp):
return False
-
-
-def _delta_to_nanoseconds(delta):
+cpdef i8 _delta_to_nanoseconds(delta):
try:
delta = delta.delta
except:
@@ -562,19 +582,21 @@ def _delta_to_nanoseconds(delta):
cdef class _TSObject:
cdef:
pandas_datetimestruct dts # pandas_datetimestruct
- int64_t value # numpy dt64
+ i8 value # numpy dt64
object tzinfo
property value:
def __get__(self):
return self.value
-cpdef _get_utcoffset(tzinfo, obj):
+
+cpdef object _get_utcoffset(object tzinfo, object obj):
try:
return tzinfo._utcoffset
except AttributeError:
return tzinfo.utcoffset(obj)
+
# helper to extract datetime and int64 from several different possibilities
cdef convert_to_tsobject(object ts, object tz):
"""
@@ -660,6 +682,7 @@ cdef convert_to_tsobject(object ts, object tz):
return obj
+
cdef inline void _localize_tso(_TSObject obj, object tz):
if _is_utc(tz):
obj.tzinfo = tz
@@ -689,12 +712,14 @@ cdef inline void _localize_tso(_TSObject obj, object tz):
obj.tzinfo = tz._tzinfos[inf]
-def get_timezone(tz):
+cpdef object get_timezone(object tz):
return _get_zone(tz)
+
cdef inline bint _is_utc(object tz):
return tz is UTC or isinstance(tz, _du_utc)
+
cdef inline object _get_zone(object tz):
if _is_utc(tz):
return 'UTC'
@@ -704,10 +729,10 @@ cdef inline object _get_zone(object tz):
except AttributeError:
return tz
-# cdef int64_t _NS_LOWER_BOUND = -9223285636854775809LL
-# cdef int64_t _NS_UPPER_BOUND = -9223372036854775807LL
+# cdef i8 _NS_LOWER_BOUND = -9223285636854775809LL
+# cdef i8 _NS_UPPER_BOUND = -9223372036854775807LL
-cdef inline _check_dts_bounds(int64_t value, pandas_datetimestruct *dts):
+cdef inline _check_dts_bounds(i8 value, pandas_datetimestruct *dts):
cdef pandas_datetimestruct dts2
if dts.year <= 1677 or dts.year >= 2262:
pandas_datetime_to_datetimestruct(value, PANDAS_FR_ns, &dts2)
@@ -718,27 +743,19 @@ cdef inline _check_dts_bounds(int64_t value, pandas_datetimestruct *dts):
raise ValueError('Out of bounds nanosecond timestamp: %s' % fmt)
-# elif isinstance(ts, _Timestamp):
-# tmp = ts
-# obj.value = (<_Timestamp> ts).value
-# obj.dtval =
-# elif isinstance(ts, object):
-# # If all else fails
-# obj.value = _dtlike_to_datetime64(ts, &obj.dts)
-# obj.dtval = _dts_to_pydatetime(&obj.dts)
-def datetime_to_datetime64(ndarray[object] values):
+cpdef tuple datetime_to_datetime64(ndarray[object] values):
cdef:
Py_ssize_t i, n = len(values)
object val, inferred_tz = None
- ndarray[int64_t] iresult
pandas_datetimestruct dts
_TSObject _ts
+ ndarray result = np.empty(n, dtype='M8[ns]')
+ ndarray[i8] iresult = result.view('i8')
- result = np.empty(n, dtype='M8[ns]')
- iresult = result.view('i8')
for i in range(n):
val = values[i]
+
if util._checknull(val):
iresult[i] = iNaT
elif PyDateTime_Check(val):
@@ -763,12 +780,12 @@ def datetime_to_datetime64(ndarray[object] values):
return result, inferred_tz
-def array_to_datetime(ndarray[object] values, raise_=False, dayfirst=False,
- format=None, utc=None):
+cpdef ndarray array_to_datetime(ndarray[object] values, raise_=False,
+ dayfirst=False, format=None, utc=None):
cdef:
Py_ssize_t i, n = len(values)
object val
- ndarray[int64_t] iresult
+ ndarray[i8] iresult
ndarray[object] oresult
pandas_datetimestruct dts
bint utc_convert = bool(utc)
@@ -846,6 +863,7 @@ def array_to_datetime(ndarray[object] values, raise_=False, dayfirst=False,
return oresult
+
cdef inline _get_datetime64_nanos(object val):
cdef:
pandas_datetimestruct dts
@@ -865,17 +883,16 @@ cdef inline _get_datetime64_nanos(object val):
return ival
-def cast_to_nanoseconds(ndarray arr):
+cpdef ndarray[i8] cast_to_nanoseconds(ndarray arr):
cdef:
Py_ssize_t i, n = arr.size
- ndarray[int64_t] ivalues, iresult
+ ndarray[i8] ivalues, iresult
+ ndarray result
PANDAS_DATETIMEUNIT unit
pandas_datetimestruct dts
-
- shape = (<object> arr).shape
+ tuple shape = (<object> arr).shape
ivalues = arr.view(np.int64).ravel()
-
result = np.empty(shape, dtype='M8[ns]')
iresult = result.ravel().view(np.int64)
@@ -893,23 +910,24 @@ def cast_to_nanoseconds(ndarray arr):
# Conversion routines
-def pydt_to_i8(object pydt):
+cpdef i8 pydt_to_i8(object pydt):
'''
Convert to int64 representation compatible with numpy datetime64; converts
to UTC
'''
cdef:
- _TSObject ts
+ _TSObject ts = convert_to_tsobject(pydt, None)
+ i8 value = ts.value
+
+ return value
- ts = convert_to_tsobject(pydt, None)
- return ts.value
-def i8_to_pydt(int64_t i8, object tzinfo = None):
+def i8_to_pydt(i8 i, object tzinfo = None):
'''
Inverse of pydt_to_i8
'''
- return Timestamp(i8)
+ return Timestamp(i)
#----------------------------------------------------------------------
# time zone conversion helpers
@@ -922,11 +940,12 @@ try:
except:
have_pytz = False
-def tz_convert(ndarray[int64_t] vals, object tz1, object tz2):
+
+cpdef ndarray[i8] tz_convert(ndarray[i8] vals, object tz1, object tz2):
cdef:
- ndarray[int64_t] utc_dates, result, trans, deltas
+ ndarray[i8] utc_dates, result, trans, deltas
Py_ssize_t i, pos, n = len(vals)
- int64_t v, offset
+ i8 v, offset
pandas_datetimestruct dts
if not have_pytz:
@@ -997,11 +1016,12 @@ def tz_convert(ndarray[int64_t] vals, object tz1, object tz2):
return result
-def tz_convert_single(int64_t val, object tz1, object tz2):
+
+cpdef i8 tz_convert_single(i8 val, object tz1, object tz2):
cdef:
- ndarray[int64_t] trans, deltas
+ ndarray[i8] trans, deltas
Py_ssize_t pos
- int64_t v, offset, utc_date
+ i8 v, offset, utc_date
pandas_datetimestruct dts
if not have_pytz:
@@ -1047,6 +1067,7 @@ def tz_convert_single(int64_t val, object tz1, object tz2):
trans_cache = {}
utc_offset_cache = {}
+
def _get_transitions(tz):
"""
Get UTC times of DST transitions
@@ -1071,6 +1092,7 @@ def _get_transitions(tz):
trans_cache[tz] = arr
return trans_cache[tz]
+
def _get_deltas(tz):
"""
Get UTC offsets in microseconds corresponding to DST transitions
@@ -1092,30 +1114,31 @@ def _get_deltas(tz):
return utc_offset_cache[tz]
-cdef double total_seconds(object td): # Python 2.6 compat
+
+cdef i8 total_seconds(object td): # Python 2.6 compat
return ((td.microseconds + (td.seconds + td.days * 24 * 3600) * 10**6) //
10**6)
-def tot_seconds(td):
+
+cpdef i8 tot_seconds(object td):
return total_seconds(td)
-cpdef ndarray _unbox_utcoffsets(object transinfo):
- cdef:
- Py_ssize_t i, sz
- ndarray[int64_t] arr
- sz = len(transinfo)
- arr = np.empty(sz, dtype='i8')
+cpdef ndarray[i8] _unbox_utcoffsets(object transinfo):
+ cdef:
+ Py_ssize_t i, sz = len(transinfo)
+ ndarray[i8] arr = np.empty(sz, dtype='i8')
+ i8 billion = 1000000000
for i in range(sz):
- arr[i] = int(total_seconds(transinfo[i][0])) * 1000000000
+ arr[i] = total_seconds(transinfo[i][0]) * billion
return arr
@cython.boundscheck(False)
@cython.wraparound(False)
-def tz_localize_to_utc(ndarray[int64_t] vals, object tz):
+cpdef ndarray[i8] tz_localize_to_utc(ndarray[i8] vals, object tz):
"""
Localize tzinfo-naive DateRange to given time zone (using pytz). If
there are ambiguities in the values, raise AmbiguousTimeError.
@@ -1125,11 +1148,11 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz):
localized : DatetimeIndex
"""
cdef:
- ndarray[int64_t] trans, deltas, idx_shifted
+ ndarray[i8] trans, deltas, idx_shifted
Py_ssize_t i, idx, pos, ntrans, n = len(vals)
- int64_t *tdata
- int64_t v, left, right
- ndarray[int64_t] result, result_a, result_b
+ i8 *tdata
+ i8 v, left, right
+ ndarray[i8] result, result_a, result_b
pandas_datetimestruct dts
# Vectorized version of DstTzInfo.localize
@@ -1155,7 +1178,7 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz):
trans = _get_transitions(tz) # transition dates
deltas = _get_deltas(tz) # utc offsets
- tdata = <int64_t*> trans.data
+ tdata = <i8*> trans.data
ntrans = len(trans)
result_a = np.empty(n, dtype=np.int64)
@@ -1206,7 +1229,8 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz):
return result
-cdef _ensure_int64(object arr):
+
+cdef ndarray[i8] _ensure_int64(object arr):
if util.is_array(arr):
if (<ndarray> arr).descr.type_num == NPY_INT64:
return arr
@@ -1216,7 +1240,7 @@ cdef _ensure_int64(object arr):
return np.array(arr, dtype=np.int64)
-cdef inline bisect_right_i8(int64_t *data, int64_t val, Py_ssize_t n):
+cdef inline Py_ssize_t bisect_right_i8(i8 *data, i8 val, Py_ssize_t n):
cdef Py_ssize_t pivot, left = 0, right = n
# edge cases
@@ -1239,8 +1263,7 @@ cdef inline bisect_right_i8(int64_t *data, int64_t val, Py_ssize_t n):
# Accessors
#----------------------------------------------------------------------
-
-def build_field_sarray(ndarray[int64_t] dtindex):
+def build_field_sarray(ndarray[i8] dtindex):
'''
Datetime as int64 representation to a structured array of fields
'''
@@ -1248,7 +1271,7 @@ def build_field_sarray(ndarray[int64_t] dtindex):
Py_ssize_t i, count = 0
int isleap
pandas_datetimestruct dts
- ndarray[int32_t] years, months, days, hours, minutes, seconds, mus
+ ndarray[i4] years, months, days, hours, minutes, seconds, mus
count = len(dtindex)
@@ -1282,26 +1305,29 @@ def build_field_sarray(ndarray[int64_t] dtindex):
return out
-def get_time_micros(ndarray[int64_t] dtindex):
+
+cpdef ndarray[i8] get_time_micros(ndarray[i8] dtindex):
'''
Datetime as int64 representation to a structured array of fields
'''
cdef:
Py_ssize_t i, n = len(dtindex)
pandas_datetimestruct dts
- ndarray[int64_t] micros
-
- micros = np.empty(n, dtype=np.int64)
+ ndarray[i8] micros = np.empty(n, dtype=np.int64)
+ i8 secs_per_min = 60
+ i8 secs_per_hour = secs_per_min * secs_per_min
+ i8 us_per_sec = 1000000
for i in range(n):
pandas_datetime_to_datetimestruct(dtindex[i], PANDAS_FR_ns, &dts)
- micros[i] = 1000000LL * (dts.hour * 60 * 60 +
- 60 * dts.min + dts.sec) + dts.us
+ micros[i] = us_per_sec * (dts.hour * secs_per_hour + secs_per_min *
+ dts.min + dts.sec) + dts.us
return micros
+
@cython.wraparound(False)
-def get_date_field(ndarray[int64_t] dtindex, object field):
+def get_date_field(ndarray[i8] dtindex, object field):
'''
Given a int64-based datetime index, extract the year, month, etc.,
field and return an array of these values.
@@ -1309,8 +1335,8 @@ def get_date_field(ndarray[int64_t] dtindex, object field):
cdef:
_TSObject ts
Py_ssize_t i, count = 0
- ndarray[int32_t] out
- ndarray[int32_t, ndim=2] _month_offset
+ ndarray[i8] out
+ ndarray[i4, ndim=2] _month_offset
int isleap
pandas_datetimestruct dts
@@ -1320,11 +1346,13 @@ def get_date_field(ndarray[int64_t] dtindex, object field):
dtype=np.int32 )
count = len(dtindex)
- out = np.empty(count, dtype='i4')
+ out = np.empty(count, dtype='i8')
if field == 'Y':
for i in range(count):
- if dtindex[i] == NPY_NAT: out[i] = -1; continue
+ if dtindex[i] == NPY_NAT:
+ out[i] = -1
+ continue
pandas_datetime_to_datetimestruct(dtindex[i], PANDAS_FR_ns, &dts)
out[i] = dts.year
@@ -1423,19 +1451,20 @@ def get_date_field(ndarray[int64_t] dtindex, object field):
raise ValueError("Field %s not supported" % field)
-cdef inline int m8_weekday(int64_t val):
+cdef inline int m8_weekday(i8 val):
ts = convert_to_tsobject(val, None)
return ts_dayofweek(ts)
-cdef int64_t DAY_NS = 86400000000000LL
+cdef i8 DAY_NS = 86400000000000LL
-def date_normalize(ndarray[int64_t] stamps, tz=None):
+
+cpdef ndarray[i8] date_normalize(ndarray[i8] stamps, tz=None):
cdef:
Py_ssize_t i, n = len(stamps)
pandas_datetimestruct dts
_TSObject tso
- ndarray[int64_t] result = np.empty(n, dtype=np.int64)
+ ndarray[i8] result = np.empty(n, dtype=np.int64)
if tz is not None:
tso = _TSObject()
@@ -1452,11 +1481,12 @@ def date_normalize(ndarray[int64_t] stamps, tz=None):
return result
-cdef _normalize_local(ndarray[int64_t] stamps, object tz):
+
+cdef ndarray[i8] _normalize_local(ndarray[i8] stamps, object tz):
cdef:
Py_ssize_t n = len(stamps)
- ndarray[int64_t] result = np.empty(n, dtype=np.int64)
- ndarray[int64_t] trans, deltas, pos
+ ndarray[i8] result = np.empty(n, dtype=np.int64)
+ ndarray[i8] trans, deltas, pos
pandas_datetimestruct dts
if _is_utc(tz):
@@ -1508,7 +1538,8 @@ cdef _normalize_local(ndarray[int64_t] stamps, object tz):
return result
-cdef inline int64_t _normalized_stamp(pandas_datetimestruct *dts):
+
+cdef inline i8 _normalized_stamp(pandas_datetimestruct *dts):
dts.hour = 0
dts.min = 0
dts.sec = 0
@@ -1516,7 +1547,7 @@ cdef inline int64_t _normalized_stamp(pandas_datetimestruct *dts):
return pandas_datetimestruct_to_datetime(PANDAS_FR_ns, dts)
-cdef inline void m8_populate_tsobject(int64_t stamp, _TSObject tso, object tz):
+cdef inline void m8_populate_tsobject(i8 stamp, _TSObject tso, object tz):
tso.value = stamp
pandas_datetime_to_datetimestruct(tso.value, PANDAS_FR_ns, &tso.dts)
@@ -1524,7 +1555,7 @@ cdef inline void m8_populate_tsobject(int64_t stamp, _TSObject tso, object tz):
_localize_tso(tso, tz)
-def dates_normalized(ndarray[int64_t] stamps, tz=None):
+cpdef bint dates_normalized(ndarray[i8] stamps, tz=None):
cdef:
Py_ssize_t i, n = len(stamps)
pandas_datetimestruct dts
@@ -1562,26 +1593,29 @@ def dates_normalized(ndarray[int64_t] stamps, tz=None):
# Some general helper functions
#----------------------------------------------------------------------
-def isleapyear(int64_t year):
+cpdef bint isleapyear(i8 year):
return is_leapyear(year)
-def monthrange(int64_t year, int64_t month):
+
+cpdef tuple monthrange(i8 year, i8 month):
cdef:
- int64_t days
- int64_t day_of_week
+ i8 days
+ i8 day_of_week
if month < 1 or month > 12:
raise ValueError("bad month number 0; must be 1-12")
- days = days_per_month_table[is_leapyear(year)][month-1]
+ days = days_per_month_table[is_leapyear(year)][month - 1]
+
+ return dayofweek(year, month, 1), days
- return (dayofweek(year, month, 1), days)
-cdef inline int64_t ts_dayofweek(_TSObject ts):
- return dayofweek(ts.dts.year, ts.dts.month, ts.dts.day)
+cdef inline i8 ts_dayofweek(_TSObject ts):
+ cdef i8 result = dayofweek(ts.dts.year, ts.dts.month, ts.dts.day)
+ return result
-cpdef normalize_date(object dt):
+cpdef object normalize_date(object dt):
'''
Normalize datetime.datetime value to midnight. Returns datetime.date as a
datetime.datetime at midnight
@@ -1597,12 +1631,13 @@ cpdef normalize_date(object dt):
else:
raise TypeError('Unrecognized type: %s' % type(dt))
-cdef ndarray[int64_t] localize_dt64arr_to_period(ndarray[int64_t] stamps,
- int freq, object tz):
+
+cdef ndarray[i8] localize_dt64arr_to_period(ndarray[i8] stamps, int freq,
+ object tz):
cdef:
Py_ssize_t n = len(stamps)
- ndarray[int64_t] result = np.empty(n, dtype=np.int64)
- ndarray[int64_t] trans, deltas, pos
+ ndarray[i8] result = np.empty(n, dtype=np.int64)
+ ndarray[i8] trans, deltas, pos
pandas_datetimestruct dts
if not have_pytz:
@@ -1615,7 +1650,8 @@ cdef ndarray[int64_t] localize_dt64arr_to_period(ndarray[int64_t] stamps,
continue
pandas_datetime_to_datetimestruct(stamps[i], PANDAS_FR_ns, &dts)
result[i] = get_period_ordinal(dts.year, dts.month, dts.day,
- dts.hour, dts.min, dts.sec, freq)
+ dts.hour, dts.min, dts.sec, dts.us,
+ freq)
elif _is_tzlocal(tz):
for i in range(n):
@@ -1630,7 +1666,8 @@ cdef ndarray[int64_t] localize_dt64arr_to_period(ndarray[int64_t] stamps,
pandas_datetime_to_datetimestruct(stamps[i] + delta,
PANDAS_FR_ns, &dts)
result[i] = get_period_ordinal(dts.year, dts.month, dts.day,
- dts.hour, dts.min, dts.sec, freq)
+ dts.hour, dts.min, dts.sec, dts.us,
+ freq)
else:
# Adjust datetime64 timestamp, recompute datetimestruct
trans = _get_transitions(tz)
@@ -1649,7 +1686,9 @@ cdef ndarray[int64_t] localize_dt64arr_to_period(ndarray[int64_t] stamps,
pandas_datetime_to_datetimestruct(stamps[i] + deltas[0],
PANDAS_FR_ns, &dts)
result[i] = get_period_ordinal(dts.year, dts.month, dts.day,
- dts.hour, dts.min, dts.sec, freq)
+ dts.hour, dts.min, dts.sec,
+ dts.us,
+ freq)
else:
for i in range(n):
if stamps[i] == NPY_NAT:
@@ -1658,70 +1697,76 @@ cdef ndarray[int64_t] localize_dt64arr_to_period(ndarray[int64_t] stamps,
pandas_datetime_to_datetimestruct(stamps[i] + deltas[pos[i]],
PANDAS_FR_ns, &dts)
result[i] = get_period_ordinal(dts.year, dts.month, dts.day,
- dts.hour, dts.min, dts.sec, freq)
+ dts.hour, dts.min, dts.sec,
+ dts.us, freq)
return result
cdef extern from "period.h":
ctypedef struct date_info:
- int64_t absdate
- double abstime
- double second
- int minute
- int hour
- int day
- int month
- int quarter
- int year
- int day_of_week
- int day_of_year
- int calendar
+ i8 absdate
+ i8 abstime
+
+ i8 attosecond
+ i8 femtosecond
+ i8 picosecond
+ i8 nanosecond
+ i8 microsecond
+ i8 second
+ i8 minute
+ i8 hour
+ i8 day
+ i8 month
+ i8 quarter
+ i8 year
+ i8 day_of_week
+ i8 day_of_year
+ i8 calendar
ctypedef struct asfreq_info:
- int from_week_end
- int to_week_end
-
- int from_a_year_end
- int to_a_year_end
-
- int from_q_year_end
- int to_q_year_end
-
- ctypedef int64_t (*freq_conv_func)(int64_t, char, asfreq_info*)
-
- int64_t asfreq(int64_t dtordinal, int freq1, int freq2, char relation) except INT32_MIN
- freq_conv_func get_asfreq_func(int fromFreq, int toFreq)
- void get_asfreq_info(int fromFreq, int toFreq, asfreq_info *af_info)
-
- int64_t get_period_ordinal(int year, int month, int day,
- int hour, int minute, int second,
- int freq) except INT32_MIN
-
- int64_t get_python_ordinal(int64_t period_ordinal, int freq) except INT32_MIN
-
- int get_date_info(int64_t ordinal, int freq, date_info *dinfo) except INT32_MIN
- double getAbsTime(int, int64_t, int64_t)
-
- int pyear(int64_t ordinal, int freq) except INT32_MIN
- int pqyear(int64_t ordinal, int freq) except INT32_MIN
- int pquarter(int64_t ordinal, int freq) except INT32_MIN
- int pmonth(int64_t ordinal, int freq) except INT32_MIN
- int pday(int64_t ordinal, int freq) except INT32_MIN
- int pweekday(int64_t ordinal, int freq) except INT32_MIN
- int pday_of_week(int64_t ordinal, int freq) except INT32_MIN
- int pday_of_year(int64_t ordinal, int freq) except INT32_MIN
- int pweek(int64_t ordinal, int freq) except INT32_MIN
- int phour(int64_t ordinal, int freq) except INT32_MIN
- int pminute(int64_t ordinal, int freq) except INT32_MIN
- int psecond(int64_t ordinal, int freq) except INT32_MIN
+ i8 from_week_end
+ i8 to_week_end
+
+ i8 from_a_year_end
+ i8 to_a_year_end
+
+ i8 from_q_year_end
+ i8 to_q_year_end
+
+ ctypedef i8 (*freq_conv_func)(i8, char*, asfreq_info*)
+
+ i8 asfreq(i8 ordinal, i8 freq1, i8 freq2, char* relation) except INT64_MIN
+ freq_conv_func get_asfreq_func(i8 fromFreq, i8 toFreq)
+ void get_asfreq_info(i8 fromFreq, i8 toFreq, asfreq_info *af_info)
+
+ i8 get_period_ordinal(i8 year, i8 month, i8 day, i8 hour, i8 minute,
+ i8 second, i8 microsecond, i8 freq) except INT64_MIN
+
+ i8 get_python_ordinal(i8 period_ordinal, i8 freq) except INT64_MIN
+
+ i8 get_date_info(i8 ordinal, i8 freq, date_info *dinfo) except INT64_MIN
+
+ i8 pyear(i8 ordinal, i8 freq) except INT64_MIN
+ i8 pqyear(i8 ordinal, i8 freq) except INT64_MIN
+ i8 pquarter(i8 ordinal, i8 freq) except INT64_MIN
+ i8 pmonth(i8 ordinal, i8 freq) except INT64_MIN
+ i8 pday(i8 ordinal, i8 freq) except INT64_MIN
+ i8 pweekday(i8 ordinal, i8 freq) except INT64_MIN
+ i8 pday_of_week(i8 ordinal, i8 freq) except INT64_MIN
+ i8 pday_of_year(i8 ordinal, i8 freq) except INT64_MIN
+ i8 pweek(i8 ordinal, i8 freq) except INT64_MIN
+ i8 phour(i8 ordinal, i8 freq) except INT64_MIN
+ i8 pminute(i8 ordinal, i8 freq) except INT64_MIN
+ i8 psecond(i8 ordinal, i8 freq) except INT64_MIN
+ i8 pmicrosecond(i8 ordinal, i8 freq) except INT64_MIN
char *c_strftime(date_info *dinfo, char *fmt)
- int get_yq(int64_t ordinal, int freq, int *quarter, int *year)
+ i8 get_yq(i8 ordinal, i8 freq, i8 *quarter, i8 *year)
# Period logic
#----------------------------------------------------------------------
-cdef inline int64_t apply_mult(int64_t period_ord, int64_t mult):
+cdef inline i8 apply_mult(i8 period_ord, i8 mult):
"""
Get freq+multiple ordinal value from corresponding freq-only ordinal value.
For example, 5min ordinal will be 1/5th the 1min ordinal (rounding down to
@@ -1732,7 +1777,8 @@ cdef inline int64_t apply_mult(int64_t period_ord, int64_t mult):
return (period_ord - 1) // mult
-cdef inline int64_t remove_mult(int64_t period_ord_w_mult, int64_t mult):
+
+cdef inline i8 remove_mult(i8 period_ord_w_mult, i8 mult):
"""
Get freq-only ordinal value from corresponding freq+multiple ordinal.
"""
@@ -1741,13 +1787,15 @@ cdef inline int64_t remove_mult(int64_t period_ord_w_mult, int64_t mult):
return period_ord_w_mult * mult + 1;
-def dt64arr_to_periodarr(ndarray[int64_t] dtarr, int freq, tz=None):
+
+cpdef ndarray[i8] dt64arr_to_periodarr(ndarray[i8] dtarr, int freq,
+ object tz=None):
"""
Convert array of datetime64 values (passed in as 'i8' dtype) to a set of
periods corresponding to desired frequency, per period convention.
"""
cdef:
- ndarray[int64_t] out
+ ndarray[i8] out
Py_ssize_t i, l
pandas_datetimestruct dts
@@ -1759,63 +1807,54 @@ def dt64arr_to_periodarr(ndarray[int64_t] dtarr, int freq, tz=None):
for i in range(l):
pandas_datetime_to_datetimestruct(dtarr[i], PANDAS_FR_ns, &dts)
out[i] = get_period_ordinal(dts.year, dts.month, dts.day,
- dts.hour, dts.min, dts.sec, freq)
+ dts.hour, dts.min, dts.sec, dts.us,
+ freq)
else:
out = localize_dt64arr_to_period(dtarr, freq, tz)
return out
-def periodarr_to_dt64arr(ndarray[int64_t] periodarr, int freq):
+
+cpdef ndarray[i8] periodarr_to_dt64arr(ndarray[i8] periodarr, int freq):
"""
Convert array to datetime64 values from a set of ordinals corresponding to
periods per period convention.
"""
cdef:
- ndarray[int64_t] out
- Py_ssize_t i, l
-
- l = len(periodarr)
-
- out = np.empty(l, dtype='i8')
+ Py_ssize_t i, l = len(periodarr)
+ ndarray[i8] out = np.empty(l, dtype='i8')
for i in range(l):
out[i] = period_ordinal_to_dt64(periodarr[i], freq)
return out
-cdef char START = 'S'
-cdef char END = 'E'
-cpdef int64_t period_asfreq(int64_t period_ordinal, int freq1, int freq2,
- bint end):
+cpdef i8 period_asfreq(i8 period_ordinal, int freq1, int freq2,
+ char* relation):
"""
Convert period ordinal from one frequency to another, and if upsampling,
choose to use start ('S') or end ('E') of period.
"""
- cdef:
- int64_t retval
+ cdef i8 retval = asfreq(period_ordinal, freq1, freq2, relation)
- if end:
- retval = asfreq(period_ordinal, freq1, freq2, END)
- else:
- retval = asfreq(period_ordinal, freq1, freq2, START)
-
- if retval == INT32_MIN:
+ if retval == INT64_MIN:
raise ValueError('Frequency conversion failed')
return retval
-def period_asfreq_arr(ndarray[int64_t] arr, int freq1, int freq2, bint end):
+
+cpdef ndarray[i8] period_asfreq_arr(ndarray[i8] arr, int freq1, int freq2,
+ char* relation):
"""
Convert int64-array of period ordinals from one frequency to another, and
if upsampling, choose to use start ('S') or end ('E') of period.
"""
cdef:
- ndarray[int64_t] result
+ ndarray[i8] result
Py_ssize_t i, n
freq_conv_func func
asfreq_info finfo
- int64_t val, ordinal
- char relation
+ i8 val, ordinal
n = len(arr)
result = np.empty(n, dtype=np.int64)
@@ -1823,27 +1862,21 @@ def period_asfreq_arr(ndarray[int64_t] arr, int freq1, int freq2, bint end):
func = get_asfreq_func(freq1, freq2)
get_asfreq_info(freq1, freq2, &finfo)
- if end:
- relation = END
- else:
- relation = START
-
for i in range(n):
val = func(arr[i], relation, &finfo)
- if val == INT32_MIN:
+ if val == INT64_MIN:
raise ValueError("Unable to convert to desired frequency.")
result[i] = val
return result
-def period_ordinal(int y, int m, int d, int h, int min, int s, int freq):
- cdef:
- int64_t ordinal
- return get_period_ordinal(y, m, d, h, min, s, freq)
+cpdef i8 period_ordinal(i8 y, i8 m, i8 d, i8 h, i8 min, i8 s, i8 us, i8 freq):
+ cdef i8 ordinal = get_period_ordinal(y, m, d, h, min, s, us, freq)
+ return ordinal
-cpdef int64_t period_ordinal_to_dt64(int64_t ordinal, int freq):
+cpdef i8 period_ordinal_to_dt64(i8 ordinal, int freq):
cdef:
pandas_datetimestruct dts
date_info dinfo
@@ -1855,12 +1888,14 @@ cpdef int64_t period_ordinal_to_dt64(int64_t ordinal, int freq):
dts.day = dinfo.day
dts.hour = dinfo.hour
dts.min = dinfo.minute
- dts.sec = int(dinfo.second)
- dts.us = dts.ps = 0
+ dts.sec = dinfo.second
+ dts.us = dinfo.microsecond
+ dts.ps = 0
return pandas_datetimestruct_to_datetime(PANDAS_FR_ns, &dts)
-def period_format(int64_t value, int freq, object fmt=None):
+
+cpdef object period_format(i8 value, int freq, object fmt=None):
cdef:
int freq_group
@@ -1873,8 +1908,8 @@ def period_format(int64_t value, int freq, object fmt=None):
elif freq_group == 3000: # FR_MTH
fmt = b'%Y-%m'
elif freq_group == 4000: # WK
- left = period_asfreq(value, freq, 6000, 0)
- right = period_asfreq(value, freq, 6000, 1)
+ left = period_asfreq(value, freq, 6000, 'S')
+ right = period_asfreq(value, freq, 6000, 'E')
return '%s/%s' % (period_format(left, 6000),
period_format(right, 6000))
elif (freq_group == 5000 # BUS
@@ -1886,26 +1921,31 @@ def period_format(int64_t value, int freq, object fmt=None):
fmt = b'%Y-%m-%d %H:%M'
elif freq_group == 9000: # SEC
fmt = b'%Y-%m-%d %H:%M:%S'
+ elif freq_group == 10000: # USEC
+ fmt = b'%Y-%m-%d %H:%M:%S.%u'
else:
raise ValueError('Unknown freq: %d' % freq)
return _period_strftime(value, freq, fmt)
-cdef list extra_fmts = [(b"%q", b"^`AB`^"),
- (b"%f", b"^`CD`^"),
- (b"%F", b"^`EF`^")]
+cdef tuple extra_fmts = ((b"%q", b"^`AB`^"),
+ (b"%f", b"^`CD`^"),
+ (b"%F", b"^`EF`^"),
+ (b"%u", b"^`GH`^"))
+
-cdef list str_extra_fmts = ["^`AB`^", "^`CD`^", "^`EF`^"]
+cdef tuple str_extra_fmts = ("^`AB`^", "^`CD`^", "^`EF`^", "^`GH`^")
-cdef _period_strftime(int64_t value, int freq, object fmt):
+
+cdef object _period_strftime(i8 value, int freq, object fmt):
cdef:
Py_ssize_t i
date_info dinfo
char *formatted
object pat, repl, result
list found_pat = [False] * len(extra_fmts)
- int year, quarter
+ i8 year, quarter
if PyUnicode_Check(fmt):
fmt = fmt.encode('utf-8')
@@ -1934,6 +1974,8 @@ cdef _period_strftime(int64_t value, int freq, object fmt):
repl = '%.2d' % (year % 100)
elif i == 2:
repl = '%d' % year
+ elif i == 3:
+ repl = '%.6u' % dinfo.microsecond
result = result.replace(str_extra_fmts[i], repl)
@@ -1945,22 +1987,19 @@ cdef _period_strftime(int64_t value, int freq, object fmt):
# period accessors
-ctypedef int (*accessor)(int64_t ordinal, int freq) except INT32_MIN
+ctypedef i8 (*accessor)(i8 ordinal, i8 freq) except INT64_MIN
+
-def get_period_field(int code, int64_t value, int freq):
+cpdef i8 get_period_field(int code, i8 value, int freq):
cdef accessor f = _get_accessor_func(code)
return f(value, freq)
-def get_period_field_arr(int code, ndarray[int64_t] arr, int freq):
- cdef:
- Py_ssize_t i, sz
- ndarray[int64_t] out
- accessor f
-
- f = _get_accessor_func(code)
- sz = len(arr)
- out = np.empty(sz, dtype=np.int64)
+cpdef ndarray[i8] get_period_field_arr(int code, ndarray[i8] arr, int freq):
+ cdef:
+ Py_ssize_t i, sz = len(arr)
+ ndarray[i8] out = np.empty(sz, dtype=np.int64)
+ accessor f = _get_accessor_func(code)
for i in range(sz):
out[i] = f(arr[i], freq)
@@ -1968,7 +2007,6 @@ def get_period_field_arr(int code, ndarray[int64_t] arr, int freq):
return out
-
cdef accessor _get_accessor_func(int code):
if code == 0:
return &pyear
@@ -1992,14 +2030,16 @@ cdef accessor _get_accessor_func(int code):
return &pday_of_year
elif code == 10:
return &pweekday
+ elif code == 11:
+ return &pmicrosecond
else:
raise ValueError('Unrecognized code: %s' % code)
-def extract_ordinals(ndarray[object] values, freq):
+cpdef ndarray[i8] extract_ordinals(ndarray[object] values, freq):
cdef:
Py_ssize_t i, n = len(values)
- ndarray[int64_t] ordinals = np.empty(n, dtype=np.int64)
+ ndarray[i8] ordinals = np.empty(n, dtype=np.int64)
object p
for i in range(n):
@@ -2010,7 +2050,8 @@ def extract_ordinals(ndarray[object] values, freq):
return ordinals
-cpdef resolution(ndarray[int64_t] stamps, tz=None):
+
+cpdef int resolution(ndarray[i8] stamps, object tz=None):
cdef:
Py_ssize_t i, n = len(stamps)
pandas_datetimestruct dts
@@ -2019,23 +2060,29 @@ cpdef resolution(ndarray[int64_t] stamps, tz=None):
if tz is not None:
if isinstance(tz, basestring):
tz = pytz.timezone(tz)
+
return _reso_local(stamps, tz)
else:
for i in range(n):
if stamps[i] == NPY_NAT:
continue
+
pandas_datetime_to_datetimestruct(stamps[i], PANDAS_FR_ns, &dts)
curr_reso = _reso_stamp(&dts)
+
if curr_reso < reso:
reso = curr_reso
+
return reso
+
US_RESO = 0
S_RESO = 1
T_RESO = 2
H_RESO = 3
D_RESO = 4
+
cdef inline int _reso_stamp(pandas_datetimestruct *dts):
if dts.us != 0:
return US_RESO
@@ -2047,30 +2094,35 @@ cdef inline int _reso_stamp(pandas_datetimestruct *dts):
return H_RESO
return D_RESO
-cdef _reso_local(ndarray[int64_t] stamps, object tz):
+
+cdef int _reso_local(ndarray[i8] stamps, object tz):
cdef:
Py_ssize_t n = len(stamps)
int reso = D_RESO, curr_reso
- ndarray[int64_t] trans, deltas, pos
+ ndarray[i8] trans, deltas, pos
pandas_datetimestruct dts
+ i8 billion = 1000000000
if _is_utc(tz):
for i in range(n):
if stamps[i] == NPY_NAT:
continue
+
pandas_datetime_to_datetimestruct(stamps[i], PANDAS_FR_ns, &dts)
curr_reso = _reso_stamp(&dts)
+
if curr_reso < reso:
reso = curr_reso
elif _is_tzlocal(tz):
for i in range(n):
if stamps[i] == NPY_NAT:
continue
+
pandas_datetime_to_datetimestruct(stamps[i], PANDAS_FR_ns,
&dts)
dt = datetime(dts.year, dts.month, dts.day, dts.hour,
dts.min, dts.sec, dts.us, tz)
- delta = int(total_seconds(_get_utcoffset(tz, dt))) * 1000000000
+ delta = total_seconds(_get_utcoffset(tz, dt)) * billion
pandas_datetime_to_datetimestruct(stamps[i] + delta,
PANDAS_FR_ns, &dts)
curr_reso = _reso_stamp(&dts)
@@ -2081,8 +2133,10 @@ cdef _reso_local(ndarray[int64_t] stamps, object tz):
trans = _get_transitions(tz)
deltas = _get_deltas(tz)
_pos = trans.searchsorted(stamps, side='right') - 1
+
if _pos.dtype != np.int64:
_pos = _pos.astype(np.int64)
+
pos = _pos
# statictzinfo
@@ -2099,9 +2153,11 @@ cdef _reso_local(ndarray[int64_t] stamps, object tz):
for i in range(n):
if stamps[i] == NPY_NAT:
continue
+
pandas_datetime_to_datetimestruct(stamps[i] + deltas[pos[i]],
PANDAS_FR_ns, &dts)
curr_reso = _reso_stamp(&dts)
+
if curr_reso < reso:
reso = curr_reso
| https://api.github.com/repos/pandas-dev/pandas/pulls/2662 | 2013-01-08T22:45:53Z | 2013-01-08T23:53:28Z | null | 2014-07-23T16:21:00Z | |
missing imports in tseries/tools/dateutil_parse #2658 | diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 7ecf33d7e781a..1044f25b151ce 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -1399,6 +1399,14 @@ def _simple_ts(start, end, freq='D'):
class TestDatetimeIndex(unittest.TestCase):
_multiprocess_can_split_ = True
+ def test_stringified_slice_with_tz(self):
+ #GH2658
+ import datetime
+ start=datetime.datetime.now()
+ idx=DatetimeIndex(start=start,freq="1d",periods=10)
+ df=DataFrame(range(10),index=idx)
+ df["2013-01-14 23:44:34.437768-05:00":] # no exception here
+
def test_append_join_nondatetimeindex(self):
rng = date_range('1/1/2000', periods=10)
idx = Index(['a', 'b', 'c', 'd'])
diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py
index 671769138d21e..5843f3edc4b39 100644
--- a/pandas/tseries/tools.py
+++ b/pandas/tseries/tools.py
@@ -245,6 +245,9 @@ def dateutil_parse(timestr, default,
ignoretz=False, tzinfos=None,
**kwargs):
""" lifted from dateutil to get resolution"""
+ from dateutil import tz
+ import time
+
res = DEFAULTPARSER._parse(StringIO(timestr), **kwargs)
if res is None:
| closes #2658
| https://api.github.com/repos/pandas-dev/pandas/pulls/2661 | 2013-01-08T22:16:57Z | 2013-01-10T23:31:31Z | 2013-01-10T23:31:31Z | 2014-07-02T12:26:03Z |
DOC: Add docstring to the 'io.sql.has_table' function. | diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 61fba520c3966..75f5db35889d5 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -178,6 +178,15 @@ def write_frame(frame, name=None, con=None, flavor='sqlite', append=False):
def has_table(name, con):
+ """
+ Check if a given SQLite table exists.
+
+ Parameters
+ ----------
+ name: string
+ SQLite table name
+ con: DB connection object
+ """
sqlstr = "SELECT name FROM sqlite_master WHERE type='table' AND name='%s'" % name
rs = tquery(sqlstr, con)
return len(rs) > 0
| Hi,
Just added a docstring to a function in the `io.sql` module.
##
Damien G.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2653 | 2013-01-07T13:11:39Z | 2013-01-19T23:41:30Z | null | 2014-07-07T14:09:03Z |
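The catalog query that this docstring documents can be sketched with the standard library alone. The following is an illustrative re-implementation, not the pandas code itself: it uses a parameterized query where the original interpolates the table name directly into the SQL string.

```python
import sqlite3

def has_table(name, con):
    # Same sqlite_master catalog query the pandas helper runs, but
    # parameterized instead of string-interpolated, which avoids SQL
    # injection through the table name.
    sql = "SELECT name FROM sqlite_master WHERE type='table' AND name=?"
    return len(con.execute(sql, (name,)).fetchall()) > 0

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (id INTEGER)")
print(has_table("data", con))     # -> True
print(has_table("missing", con))  # -> False
```

Querying ``sqlite_master`` is SQLite-specific, which is why the docstring above scopes the check to SQLite tables; other DB engines expose their catalogs differently.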
Writing doc about I/O SQL features. See #2541 | diff --git a/doc/source/io.rst b/doc/source/io.rst
index c73240725887f..6abaee8f8f070 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1251,3 +1251,102 @@ These, by default, index the three axes ``items, major_axis, minor_axis``. On an
store.close()
import os
os.remove('store.h5')
+
+
+.. _io.sql:
+
+SQL Queries
+-----------
+
+The :mod:`pandas.io.sql` module provides a collection of query wrappers to both
+facilitate data retrieval and to reduce dependency on DB-specific API. There
+wrappers only support the Python database adapters which respect the `Python
+DB-API <http://www.python.org/dev/peps/pep-0249/>`_.
+
+Suppose you want to query some data with different types from a table such as:
+
++-----+------------+-------+-------+-------+
+| id | Date | Col_1 | Col_2 | Col_3 |
++=====+============+=======+=======+=======+
+| 26 | 2012-10-18 | X | 25.7 | True |
++-----+------------+-------+-------+-------+
+| 42 | 2012-10-19 | Y | -12.4 | False |
++-----+------------+-------+-------+-------+
+| 63 | 2012-10-20 | Z | 5.73 | True |
++-----+------------+-------+-------+-------+
+
+Functions from :mod:`pandas.io.sql` can extract some data into a DataFrame. In
+the following example, we use the `SQLite <http://www.sqlite.org/>`_ SQL database
+engine. You can use a temporary SQLite database where data are stored in
+"memory". Just do:
+
+.. code-block:: python
+
+ import sqlite3
+ from pandas.io import sql
+ # Create your connection.
+ cnx = sqlite3.connect(':memory:')
+
+.. ipython:: python
+ :suppress:
+
+ import sqlite3
+ from pandas.io import sql
+ cnx = sqlite3.connect(':memory:')
+
+.. ipython:: python
+ :suppress:
+
+ cu = cnx.cursor()
+ # Create a table named 'data'.
+ cu.execute("""CREATE TABLE data(id integer,
+ date date,
+ Col_1 string,
+ Col_2 float,
+ Col_3 bool);""")
+ cu.executemany('INSERT INTO data VALUES (?,?,?,?,?)',
+ [(26, datetime(2010,10,18), 'X', 27.5, True),
+ (42, datetime(2010,10,19), 'Y', -12.5, False),
+ (63, datetime(2010,10,20), 'Z', 5.73, True)])
+
+
+Let ``data`` be the name of your SQL table. With a query and your database
+connection, just use the :func:`~pandas.io.sql.read_frame` function to get the
+query results into a DataFrame:
+
+.. ipython:: python
+
+ sql.read_frame("SELECT * FROM data;", cnx)
+
+You can also specify the name of the column as the DataFrame index:
+
+.. ipython:: python
+
+ sql.read_frame("SELECT * FROM data;", cnx, index_col='id')
+ sql.read_frame("SELECT * FROM data;", cnx, index_col='date')
+
+Of course, you can specify a more "complex" query.
+
+.. ipython:: python
+
+ sql.read_frame("SELECT id, Col_1, Col_2 FROM data WHERE id = 42;", cnx)
+
+.. ipython:: python
+ :suppress:
+
+ cu.close()
+ cnx.close()
+
+
+There are a few other available functions:
+
+ - ``tquery`` returns a list of tuples corresponding to each row.
+ - ``uquery`` does the same thing as tquery, but instead of returning results,
+ it returns the number of affected rows.
+ - ``write_frame`` writes records stored in a DataFrame into the SQL table.
+ - ``has_table`` checks if a given SQLite table exists.
+
+.. note::
+
+ For now, writing your DataFrame into a database works only with
+ **SQLite**. Moreover, the **index** will currently be **dropped**.
| Hi there,
I wrote some content about the pandas I/O SQL features:
- some examples of the `read_frame` function with SQLite;
- a list of other functions such as `tquery` (without examples);
- a tiny note about the `write_frame` function.
I'm not sure about the title of this section; any proposal is welcome! Don't hesitate to comment on the content (typos, misunderstandings, ...).
## Cheers
Damien G.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2652 | 2013-01-07T13:10:10Z | 2013-01-20T01:42:54Z | null | 2014-07-05T04:31:15Z |
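Under the hood, `read_frame` is a thin wrapper over a DB-API cursor: execute the query, fetch all rows, and wrap them (together with the column names from `cursor.description`) in a DataFrame. A minimal stdlib sketch of that pattern follows, reusing the table and values from the docs above; everything else is illustrative.

```python
import sqlite3

cnx = sqlite3.connect(":memory:")
cu = cnx.cursor()
cu.execute("CREATE TABLE data (id INTEGER, Col_1 TEXT, Col_2 REAL)")
cu.executemany("INSERT INTO data VALUES (?, ?, ?)",
               [(26, "X", 25.7), (42, "Y", -12.4), (63, "Z", 5.73)])

# What read_frame does, minus the DataFrame wrapping: run the query,
# collect column names from the cursor description, fetch the rows.
cu.execute("SELECT id, Col_1, Col_2 FROM data WHERE id = 42")
columns = [d[0] for d in cu.description]
rows = cu.fetchall()
print(columns)  # -> ['id', 'Col_1', 'Col_2']
print(rows)     # -> [(42, 'Y', -12.4)]
```

Because only the DB-API surface (`execute`, `description`, `fetchall`) is used, the same pattern works against any compliant adapter, which is the point of the wrappers described above.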
TST: test for #2637 | diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 86b183c7bfc76..772c35d1a11a4 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -1378,6 +1378,14 @@ def test_select(self):
expected = df[df.A > 0].reindex(columns = ['C','D'])
tm.assert_frame_equal(expected, result)
+ # with a Timestamp data column
+ df = DataFrame(dict(ts=bdate_range('2012-01-01', periods=300), A=np.random.randn(300)))
+ self.store.remove('df')
+ self.store.append('df', df, data_columns=['ts', 'A'])
+ result = self.store.select('df', [Term('ts', '>=', Timestamp('2012-02-01').value)])
+ expected = df[df.ts >= Timestamp('2012-02-01')]
+ tm.assert_frame_equal(expected, result)
+
def test_panel_select(self):
wp = tm.makePanel()
self.store.put('wp', wp, table=True)
| Added test for BUG #2637
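The test compares the stored datetime column against `Timestamp('2012-02-01').value`, i.e. the timestamp's int64 epoch-nanosecond representation. A stdlib-only sketch of that comparison, with hypothetical sample dates standing in for the `bdate_range` column:

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def to_ns(dt):
    # Integer microseconds since the epoch, scaled to nanoseconds --
    # mirroring what Timestamp(...).value holds.
    return ((dt - EPOCH) // timedelta(microseconds=1)) * 1000

rows = [datetime(2012, 1, 15, tzinfo=timezone.utc),
        datetime(2012, 2, 1, tzinfo=timezone.utc),
        datetime(2012, 3, 1, tzinfo=timezone.utc)]

# Filter rows the way Term('ts', '>=', Timestamp('2012-02-01').value)
# does: compare nanosecond integers, not datetime objects.
threshold = to_ns(datetime(2012, 2, 1, tzinfo=timezone.utc))
selected = [d for d in rows if to_ns(d) >= threshold]
print(len(selected))  # 2
```

Comparing on the integer representation is what lets PyTables evaluate the condition against the stored datetime64[ns] data column.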
| https://api.github.com/repos/pandas-dev/pandas/pulls/2638 | 2013-01-04T19:43:23Z | 2013-01-04T20:03:04Z | null | 2014-07-14T11:59:28Z |
added a compression argument to to_csv to be sent to _get_handle | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 1eaa009275808..767f85d95ede5 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1343,7 +1343,7 @@ def _helper_csv(self, writer, na_rep=None, cols=None,
def to_csv(self, path_or_buf, sep=",", na_rep='', float_format=None,
cols=None, header=True, index=True, index_label=None,
mode='w', nanRep=None, encoding=None, quoting=None,
- line_terminator='\n'):
+ line_terminator='\n', compression=None):
"""
Write DataFrame to a comma-separated values (csv) file
@@ -1380,6 +1380,10 @@ def to_csv(self, path_or_buf, sep=",", na_rep='', float_format=None,
file
quoting : optional constant from csv module
defaults to csv.QUOTE_MINIMAL
+ compression : string, optional
+ a string representing the compression to use in the output file,
+ currently only supports gzip and bz2 (see core.common._get_handle),
+ only used when the first argument is a filename
"""
if nanRep is not None: # pragma: no cover
import warnings
@@ -1391,7 +1395,7 @@ def to_csv(self, path_or_buf, sep=",", na_rep='', float_format=None,
f = path_or_buf
close = False
else:
- f = com._get_handle(path_or_buf, mode, encoding=encoding)
+ f = com._get_handle(path_or_buf, mode, encoding=encoding, compression=compression)
close = True
if quoting is None:
| Writing csv files with gzip compression is very useful, as it both minimizes the disk space taken by large data files and can be read easily by _R_, often faster than the same csv file without compression.
Reading compressed csv files in _R_ requires no additional input from the user, and is also already supported for reading in _Pandas_.
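The patch just routes the new `compression` argument through to `_get_handle`, which opens a gzip (or bz2) file object instead of a plain one. The effect can be sketched with the stdlib alone, independent of pandas:

```python
import csv
import gzip
import os
import tempfile

# Write a small table to a gzip-compressed csv, the way
# to_csv(path, compression='gzip') would through _get_handle.
rows = [["id", "value"], ["1", "3.5"], ["2", "7.25"]]
path = os.path.join(tempfile.mkdtemp(), "frame.csv.gz")

with gzip.open(path, "wt", newline="") as f:
    csv.writer(f).writerows(rows)

# Readers such as R's read.csv decompress such files transparently;
# here we simply round-trip with gzip + csv to show nothing is lost.
with gzip.open(path, "rt", newline="") as f:
    back = list(csv.reader(f))

print(back == rows)  # True
```

The key point is that the csv writer never needs to know about the compression: it only sees a text file object, which is exactly the abstraction `_get_handle` provides.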
| https://api.github.com/repos/pandas-dev/pandas/pulls/2636 | 2013-01-04T08:09:45Z | 2013-07-29T05:15:16Z | null | 2015-10-12T15:50:29Z |
Try to fix BUG #2599 | diff --git a/pandas/src/parser.pyx b/pandas/src/parser.pyx
index 8ed4f0822b38c..331c9680cf114 100644
--- a/pandas/src/parser.pyx
+++ b/pandas/src/parser.pyx
@@ -1649,20 +1649,31 @@ def _concatenate_chunks(list chunks):
#----------------------------------------------------------------------
# NA values
-
-na_values = {
- np.float64 : np.nan,
- np.int64 : INT64_MIN,
- np.int32 : INT32_MIN,
- np.int16 : INT16_MIN,
- np.int8 : INT8_MIN,
- np.uint64 : UINT64_MAX,
- np.uint32 : UINT32_MAX,
- np.uint16 : UINT16_MAX,
- np.uint8 : UINT8_MAX,
- np.bool_ : UINT8_MAX,
- np.object_ : np.nan # oof
-}
+def _compute_na_values():
+ int64info = np.iinfo(np.int64)
+ int32info = np.iinfo(np.int32)
+ int16info = np.iinfo(np.int16)
+ int8info = np.iinfo(np.int8)
+ uint64info = np.iinfo(np.uint64)
+ uint32info = np.iinfo(np.uint32)
+ uint16info = np.iinfo(np.uint16)
+ uint8info = np.iinfo(np.uint8)
+ na_values = {
+ np.float64 : np.nan,
+ np.int64 : int64info.min,
+ np.int32 : int32info.min,
+ np.int16 : int16info.min,
+ np.int8 : int8info.min,
+ np.uint64 : uint64info.max,
+ np.uint32 : uint32info.max,
+ np.uint16 : uint16info.max,
+ np.uint8 : uint8info.max,
+ np.bool_ : uint8info.max,
+ np.object_ : np.nan # oof
+ }
+ return na_values
+
+na_values = _compute_na_values()
for k in list(na_values):
na_values[np.dtype(k)] = na_values[k]
| Not sure if this is a good way to go about it, but it seems to work.
It might be good to get rid of all the INT64_MAX/MIN constants in parser.pyx and replace them with np.iinfo(np.int64).max/min instead.
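The fix replaces hard-coded INT64_MIN-style constants with values computed at import time from `np.iinfo`, which simply reports the limits of each two's-complement integer type. Those limits can be derived without numpy at all; a small sketch for checking what the sentinel table ends up containing:

```python
def int_limits(bits, signed=True):
    """Min/max of a two's-complement integer type, matching what
    np.iinfo(np.intN) / np.iinfo(np.uintN) report."""
    if signed:
        return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return 0, 2 ** bits - 1

# The NA sentinels used by the parser: signed types use their minimum,
# unsigned (and bool) types use their maximum.
INT64_MIN, _ = int_limits(64)
_, UINT8_MAX = int_limits(8, signed=False)

print(INT64_MIN)  # -9223372036854775808
print(UINT8_MAX)  # 255
```

Computing the values via `np.iinfo` (or equivalently from the bit width, as above) avoids the platform-dependent macro constants that caused the original bug.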
| https://api.github.com/repos/pandas-dev/pandas/pulls/2635 | 2013-01-04T03:26:59Z | 2013-01-05T21:21:44Z | null | 2014-06-27T22:59:58Z |
Autopep8 | diff --git a/bench/bench_dense_to_sparse.py b/bench/bench_dense_to_sparse.py
index 349d3b31e965f..f76daab5d8289 100644
--- a/bench/bench_dense_to_sparse.py
+++ b/bench/bench_dense_to_sparse.py
@@ -12,4 +12,3 @@
this_rng = rng2[:-i]
data[100:] = np.nan
series[i] = SparseSeries(data, index=this_rng)
-
diff --git a/bench/bench_get_put_value.py b/bench/bench_get_put_value.py
index 5aa984d39069a..419e8f603e5ae 100644
--- a/bench/bench_get_put_value.py
+++ b/bench/bench_get_put_value.py
@@ -4,32 +4,38 @@
N = 1000
K = 50
+
def _random_index(howmany):
return Index([rands(10) for _ in xrange(howmany)])
df = DataFrame(np.random.randn(N, K), index=_random_index(N),
columns=_random_index(K))
+
def get1():
for col in df.columns:
for row in df.index:
_ = df[col][row]
+
def get2():
for col in df.columns:
for row in df.index:
_ = df.get_value(row, col)
+
def put1():
for col in df.columns:
for row in df.index:
df[col][row] = 0
+
def put2():
for col in df.columns:
for row in df.index:
df.set_value(row, col, 0)
+
def resize1():
buf = DataFrame()
for col in df.columns:
@@ -37,6 +43,7 @@ def resize1():
buf = buf.set_value(row, col, 5.)
return buf
+
def resize2():
from collections import defaultdict
diff --git a/bench/bench_groupby.py b/bench/bench_groupby.py
index 78e2c51abcbc1..807d3449e1fcb 100644
--- a/bench/bench_groupby.py
+++ b/bench/bench_groupby.py
@@ -12,16 +12,19 @@
random.shuffle(foo)
random.shuffle(foo2)
-df = DataFrame({'A' : foo,
- 'B' : foo2,
- 'C' : np.random.randn(n * k)})
+df = DataFrame({'A': foo,
+ 'B': foo2,
+ 'C': np.random.randn(n * k)})
import pandas._sandbox as sbx
+
def f():
table = sbx.StringHashTable(len(df))
ret = table.factorize(df['A'])
return ret
+
+
def g():
table = sbx.PyObjectHashTable(len(df))
ret = table.factorize(df['A'])
diff --git a/bench/bench_join_panel.py b/bench/bench_join_panel.py
index 59a4711c4b6d2..0e484fb496036 100644
--- a/bench/bench_join_panel.py
+++ b/bench/bench_join_panel.py
@@ -1,8 +1,9 @@
-# reasonably effecient
+# reasonably efficient
+
def create_panels_append(cls, panels):
""" return an append list of panels """
- panels = [ a for a in panels if a is not None ]
+ panels = [a for a in panels if a is not None]
# corner cases
if len(panels) == 0:
return None
@@ -10,18 +11,22 @@ def create_panels_append(cls, panels):
return panels[0]
elif len(panels) == 2 and panels[0] == panels[1]:
return panels[0]
- #import pdb; pdb.set_trace()
+ # import pdb; pdb.set_trace()
# create a joint index for the axis
+
def joint_index_for_axis(panels, axis):
s = set()
for p in panels:
- s.update(list(getattr(p,axis)))
+ s.update(list(getattr(p, axis)))
return sorted(list(s))
+
def reindex_on_axis(panels, axis, axis_reindex):
new_axis = joint_index_for_axis(panels, axis)
- new_panels = [ p.reindex(**{ axis_reindex : new_axis, 'copy' : False}) for p in panels ]
+ new_panels = [p.reindex(**{axis_reindex: new_axis,
+ 'copy': False}) for p in panels]
return new_panels, new_axis
- # create the joint major index, dont' reindex the sub-panels - we are appending
+ # create the joint major index, dont' reindex the sub-panels - we are
+ # appending
major = joint_index_for_axis(panels, 'major_axis')
# reindex on minor axis
panels, minor = reindex_on_axis(panels, 'minor_axis', 'minor')
@@ -29,21 +34,22 @@ def reindex_on_axis(panels, axis, axis_reindex):
panels, items = reindex_on_axis(panels, 'items', 'items')
# concatenate values
try:
- values = np.concatenate([ p.values for p in panels ],axis=1)
+ values = np.concatenate([p.values for p in panels], axis=1)
except (Exception), detail:
- raise Exception("cannot append values that dont' match dimensions! -> [%s] %s" % (','.join([ "%s" % p for p in panels ]),str(detail)))
- #pm('append - create_panel')
- p = Panel(values, items = items, major_axis = major, minor_axis = minor )
- #pm('append - done')
+ raise Exception("cannot append values that dont' match dimensions! -> [%s] %s"
+ % (','.join(["%s" % p for p in panels]), str(detail)))
+ # pm('append - create_panel')
+ p = Panel(values, items=items, major_axis=major,
+ minor_axis=minor)
+ # pm('append - done')
return p
-
-# does the job but inefficient (better to handle like you read a table in pytables...e.g create a LongPanel then convert to Wide)
-
+# does the job but inefficient (better to handle like you read a table in
+# pytables...e.g create a LongPanel then convert to Wide)
def create_panels_join(cls, panels):
""" given an array of panels's, create a single panel """
- panels = [ a for a in panels if a is not None ]
+ panels = [a for a in panels if a is not None]
# corner cases
if len(panels) == 0:
return None
@@ -62,7 +68,7 @@ def create_panels_join(cls, panels):
for minor_i, minor_index in panel.minor_axis.indexMap.items():
for major_i, major_index in panel.major_axis.indexMap.items():
try:
- d[(minor_i,major_i,item)] = values[item_index,major_index,minor_index]
+ d[(minor_i, major_i, item)] = values[item_index, major_index, minor_index]
except:
pass
# stack the values
@@ -70,8 +76,10 @@ def create_panels_join(cls, panels):
major = sorted(list(major))
items = sorted(list(items))
# create the 3d stack (items x columns x indicies)
- data = np.dstack([ np.asarray([ np.asarray([ d.get((minor_i,major_i,item),np.nan) for item in items ]) for major_i in major ]).transpose() for minor_i in minor ])
+ data = np.dstack([np.asarray([np.asarray([d.get((minor_i, major_i, item), np.nan)
+ for item in items])
+ for major_i in major]).transpose()
+ for minor_i in minor])
# construct the panel
return Panel(data, items, major, minor)
add_class_method(Panel, create_panels_join, 'join_many')
-
diff --git a/bench/bench_khash_dict.py b/bench/bench_khash_dict.py
index 1d803bec8eab2..fce3288e3294d 100644
--- a/bench/bench_khash_dict.py
+++ b/bench/bench_khash_dict.py
@@ -16,12 +16,15 @@
pid = os.getpid()
proc = psutil.Process(pid)
+
def object_test_data(n):
pass
+
def string_test_data(n):
return np.array([rands(10) for _ in xrange(n)], dtype='O')
+
def int_test_data(n):
return np.arange(n, dtype='i8')
@@ -30,17 +33,21 @@ def int_test_data(n):
#----------------------------------------------------------------------
# Benchmark 1: map_locations
+
def map_locations_python_object():
arr = string_test_data(N)
return _timeit(lambda: lib.map_indices_object(arr))
+
def map_locations_khash_object():
arr = string_test_data(N)
+
def f():
table = sbx.PyObjectHashTable(len(arr))
table.map_locations(arr)
return _timeit(f)
+
def _timeit(f, iterations=10):
start = time.time()
for _ in xrange(iterations):
@@ -51,10 +58,12 @@ def _timeit(f, iterations=10):
#----------------------------------------------------------------------
# Benchmark 2: lookup_locations
+
def lookup_python(values):
table = lib.map_indices_object(values)
return _timeit(lambda: lib.merge_indexer_object(values, table))
+
def lookup_khash(values):
table = sbx.PyObjectHashTable(len(values))
table.map_locations(values)
@@ -62,6 +71,7 @@ def lookup_khash(values):
# elapsed = _timeit(lambda: table.lookup_locations2(values))
return table
+
def leak(values):
for _ in xrange(100):
print proc.get_memory_info()
@@ -75,4 +85,3 @@ def leak(values):
#----------------------------------------------------------------------
# Benchmark 4: factorize
-
diff --git a/bench/bench_merge.py b/bench/bench_merge.py
index c9abd2c37c462..11f8c29a2897b 100644
--- a/bench/bench_merge.py
+++ b/bench/bench_merge.py
@@ -5,6 +5,7 @@
N = 10000
ngroups = 10
+
def get_test_data(ngroups=100, n=N):
unique_groups = range(ngroups)
arr = np.asarray(np.tile(unique_groups, n / ngroups), dtype=object)
@@ -38,10 +39,10 @@ def get_test_data(ngroups=100, n=N):
key = np.tile(indices[:8000], 10)
key2 = np.tile(indices2[:8000], 10)
-left = DataFrame({'key' : key, 'key2':key2,
- 'value' : np.random.randn(80000)})
-right = DataFrame({'key': indices[2000:], 'key2':indices2[2000:],
- 'value2' : np.random.randn(8000)})
+left = DataFrame({'key': key, 'key2': key2,
+ 'value': np.random.randn(80000)})
+right = DataFrame({'key': indices[2000:], 'key2': indices2[2000:],
+ 'value2': np.random.randn(8000)})
right2 = right.append(right, ignore_index=True)
@@ -78,7 +79,8 @@ def get_test_data(ngroups=100, n=N):
all_results = all_results.div(all_results['pandas'], axis=0)
-all_results = all_results.ix[:, ['pandas', 'data.table', 'plyr', 'base::merge']]
+all_results = all_results.ix[:, ['pandas', 'data.table', 'plyr',
+ 'base::merge']]
sort_results = DataFrame.from_items([('pandas', results['sort']),
('R', r_results['base::merge'])])
@@ -102,4 +104,5 @@ def get_test_data(ngroups=100, n=N):
all_results = presults.join(r_results)
all_results = all_results.div(all_results['pandas'], axis=0)
-all_results = all_results.ix[:, ['pandas', 'data.table', 'plyr', 'base::merge']]
+all_results = all_results.ix[:, ['pandas', 'data.table', 'plyr',
+ 'base::merge']]
diff --git a/bench/bench_merge_sqlite.py b/bench/bench_merge_sqlite.py
index a05a7c896b3d2..d13b296698b97 100644
--- a/bench/bench_merge_sqlite.py
+++ b/bench/bench_merge_sqlite.py
@@ -13,10 +13,10 @@
key = np.tile(indices[:8000], 10)
key2 = np.tile(indices2[:8000], 10)
-left = DataFrame({'key' : key, 'key2':key2,
- 'value' : np.random.randn(80000)})
-right = DataFrame({'key': indices[2000:], 'key2':indices2[2000:],
- 'value2' : np.random.randn(8000)})
+left = DataFrame({'key': key, 'key2': key2,
+ 'value': np.random.randn(80000)})
+right = DataFrame({'key': indices[2000:], 'key2': indices2[2000:],
+ 'value2': np.random.randn(8000)})
# right2 = right.append(right, ignore_index=True)
# right = right2
@@ -30,8 +30,10 @@
create_sql_indexes = True
conn = sqlite3.connect(':memory:')
-conn.execute('create table left( key varchar(10), key2 varchar(10), value int);')
-conn.execute('create table right( key varchar(10), key2 varchar(10), value2 int);')
+conn.execute(
+ 'create table left( key varchar(10), key2 varchar(10), value int);')
+conn.execute(
+ 'create table right( key varchar(10), key2 varchar(10), value2 int);')
conn.executemany('insert into left values (?, ?, ?)',
zip(key, key2, left['value']))
conn.executemany('insert into right values (?, ?, ?)',
@@ -43,7 +45,7 @@
conn.execute('create index right_ix on right(key, key2)')
-join_methods = ['inner', 'left outer', 'left'] # others not supported
+join_methods = ['inner', 'left outer', 'left'] # others not supported
sql_results = DataFrame(index=join_methods, columns=[False])
niter = 5
for sort in [False]:
@@ -61,8 +63,8 @@
if sort:
sql = '%s order by key, key2' % sql
- f = lambda: list(conn.execute(sql)) # list fetches results
- g = lambda: conn.execute(sql) # list fetches results
+ f = lambda: list(conn.execute(sql)) # list fetches results
+ g = lambda: conn.execute(sql) # list fetches results
gc.disable()
start = time.time()
# for _ in xrange(niter):
@@ -74,7 +76,7 @@
conn.commit()
sql_results[sort][join_method] = elapsed
- sql_results.columns = ['sqlite3'] # ['dont_sort', 'sort']
+ sql_results.columns = ['sqlite3'] # ['dont_sort', 'sort']
sql_results.index = ['inner', 'outer', 'left']
sql = """select *
diff --git a/bench/bench_sparse.py b/bench/bench_sparse.py
index 4003415267b0c..600b3d05c5f78 100644
--- a/bench/bench_sparse.py
+++ b/bench/bench_sparse.py
@@ -11,13 +11,13 @@
arr1 = np.arange(N)
index = Index(np.arange(N))
-off = N//10
-arr1[off : 2 * off] = np.NaN
-arr1[4*off: 5 * off] = np.NaN
-arr1[8*off: 9 * off] = np.NaN
+off = N // 10
+arr1[off: 2 * off] = np.NaN
+arr1[4 * off: 5 * off] = np.NaN
+arr1[8 * off: 9 * off] = np.NaN
arr2 = np.arange(N)
-arr2[3 * off // 2: 2 * off + off // 2] = np.NaN
+arr2[3 * off // 2: 2 * off + off // 2] = np.NaN
arr2[8 * off + off // 2: 9 * off + off // 2] = np.NaN
s1 = SparseSeries(arr1, index=index)
@@ -38,6 +38,7 @@
sdf = dm.to_sparse()
+
def new_data_like(sdf):
new_data = {}
for col, series in sdf.iteritems():
@@ -52,22 +53,22 @@ def new_data_like(sdf):
# for col, ser in dm.iteritems():
# data[col] = SparseSeries(ser)
-dwp = Panel.fromDict({'foo' : dm})
+dwp = Panel.fromDict({'foo': dm})
# sdf = SparseDataFrame(data)
lp = stack_sparse_frame(sdf)
-swp = SparsePanel({'A' : sdf})
-swp = SparsePanel({'A' : sdf,
- 'B' : sdf,
- 'C' : sdf,
- 'D' : sdf})
+swp = SparsePanel({'A': sdf})
+swp = SparsePanel({'A': sdf,
+ 'B': sdf,
+ 'C': sdf,
+ 'D': sdf})
y = sdf
-x = SparsePanel({'x1' : sdf + new_data_like(sdf) / 10,
- 'x2' : sdf + new_data_like(sdf) / 10})
+x = SparsePanel({'x1': sdf + new_data_like(sdf) / 10,
+ 'x2': sdf + new_data_like(sdf) / 10})
dense_y = sdf
dense_x = x.to_dense()
@@ -89,4 +90,3 @@ def new_data_like(sdf):
reload(face)
# model = face.ols(y=y, x=x)
-
diff --git a/bench/bench_take_indexing.py b/bench/bench_take_indexing.py
index fc8a3c6b743ea..3ddd647a35bf6 100644
--- a/bench/bench_take_indexing.py
+++ b/bench/bench_take_indexing.py
@@ -29,8 +29,9 @@
n = 1000
+
def _timeit(stmt, size, k=5, iters=1000):
- timer = timeit.Timer(stmt=stmt, setup=setup % (sz, k))
+ timer = timeit.Timer(stmt=stmt, setup=setup % (sz, k))
return timer.timeit(n) / n
for sz, its in zip(sizes, iters):
@@ -39,9 +40,9 @@ def _timeit(stmt, size, k=5, iters=1000):
take_2d.append(_timeit('arr.take(indexer, axis=0)', sz, iters=its))
cython_2d.append(_timeit('lib.take_axis0(arr, indexer)', sz, iters=its))
-df = DataFrame({'fancy' : fancy_2d,
- 'take' : take_2d,
- 'cython' : cython_2d})
+df = DataFrame({'fancy': fancy_2d,
+ 'take': take_2d,
+ 'cython': cython_2d})
print df
diff --git a/bench/bench_unique.py b/bench/bench_unique.py
index 3b5ece66deae6..392d3b326bf09 100644
--- a/bench/bench_unique.py
+++ b/bench/bench_unique.py
@@ -14,8 +14,10 @@
labels2 = np.tile(groups2, N // K)
data = np.random.randn(N)
+
def timeit(f, niter):
- import gc, time
+ import gc
+ import time
gc.disable()
start = time.time()
for _ in xrange(niter):
@@ -24,12 +26,14 @@ def timeit(f, niter):
gc.enable()
return elapsed
+
def algo1():
unique_labels = np.unique(labels)
result = np.empty(len(unique_labels))
for i, label in enumerate(unique_labels):
result[i] = data[labels == label].sum()
+
def algo2():
unique_labels = np.unique(labels)
indices = lib.groupby_indices(labels)
@@ -38,6 +42,7 @@ def algo2():
for i, label in enumerate(unique_labels):
result[i] = data.take(indices[label]).sum()
+
def algo3_nosort():
rizer = lib.DictFactorizer()
labs, counts = rizer.factorize(labels, sort=False)
@@ -45,6 +50,7 @@ def algo3_nosort():
out = np.empty(k)
lib.group_add(out, counts, data, labs)
+
def algo3_sort():
rizer = lib.DictFactorizer()
labs, counts = rizer.factorize(labels, sort=True)
@@ -67,6 +73,7 @@ def algo3_sort():
x = [int(y) for y in x]
data = np.random.uniform(0, 1, 100000)
+
def f():
from itertools import izip
# groupby sum
@@ -76,6 +83,7 @@ def f():
except KeyError:
counts[k] = v
+
def f2():
rizer = lib.DictFactorizer()
labs, counts = rizer.factorize(xarr, sort=False)
@@ -83,6 +91,7 @@ def f2():
out = np.empty(k)
lib.group_add(out, counts, data, labs)
+
def algo4():
rizer = lib.DictFactorizer()
labs1, _ = rizer.factorize(labels, sort=False)
@@ -137,6 +146,7 @@ def algo4():
pid = os.getpid()
proc = psutil.Process(pid)
+
def dict_unique(values, expected_K, sort=False, memory=False):
if memory:
gc.collect()
@@ -154,6 +164,7 @@ def dict_unique(values, expected_K, sort=False, memory=False):
assert(len(result) == expected_K)
return result
+
def khash_unique(values, expected_K, size_hint=False, sort=False,
memory=False):
if memory:
@@ -176,8 +187,9 @@ def khash_unique(values, expected_K, size_hint=False, sort=False,
result.sort()
assert(len(result) == expected_K)
+
def khash_unique_str(values, expected_K, size_hint=False, sort=False,
- memory=False):
+ memory=False):
if memory:
gc.collect()
before_mem = proc.get_memory_info().rss
@@ -198,6 +210,7 @@ def khash_unique_str(values, expected_K, size_hint=False, sort=False,
result.sort()
assert(len(result) == expected_K)
+
def khash_unique_int64(values, expected_K, size_hint=False, sort=False):
if size_hint:
rizer = lib.Int64HashTable(len(values))
@@ -211,6 +224,7 @@ def khash_unique_int64(values, expected_K, size_hint=False, sort=False):
result.sort()
assert(len(result) == expected_K)
+
def hash_bench():
numpy = []
dict_based = []
@@ -248,9 +262,9 @@ def hash_bench():
# 'dict, sort', 'numpy.unique'],
# index=Ks)
- unique_timings = DataFrame({'dict' : dict_based,
- 'khash, preallocate' : khash_hint,
- 'khash' : khash_nohint},
+ unique_timings = DataFrame({'dict': dict_based,
+ 'khash, preallocate': khash_hint,
+ 'khash': khash_nohint},
columns=['khash, preallocate', 'khash', 'dict'],
index=Ks)
@@ -260,5 +274,4 @@ def hash_bench():
plt.xlabel('Number of unique labels')
plt.ylabel('Mean execution time')
-
plt.show()
diff --git a/bench/better_unique.py b/bench/better_unique.py
index 9ff4823cd628f..982dd88e879da 100644
--- a/bench/better_unique.py
+++ b/bench/better_unique.py
@@ -42,11 +42,11 @@ def get_test_data(ngroups=100, n=tot):
for sz, n in zip(group_sizes, numbers):
# wes_timer = timeit.Timer(stmt='better_unique(arr)',
# setup=setup % sz)
- wes_timer = timeit.Timer(stmt='_tseries.fast_unique(arr)',
- setup=setup % sz)
+ wes_timer = timeit.Timer(stmt='_tseries.fast_unique(arr)',
+ setup=setup % sz)
- numpy_timer = timeit.Timer(stmt='np.unique(arr)',
- setup=setup % sz)
+ numpy_timer = timeit.Timer(stmt='np.unique(arr)',
+ setup=setup % sz)
print n
numpy_result = numpy_timer.timeit(number=n) / n
@@ -57,7 +57,8 @@ def get_test_data(ngroups=100, n=tot):
wes.append(wes_result)
numpy.append(numpy_result)
-result = DataFrame({'wes' : wes, 'numpy' : numpy}, index=group_sizes)
+result = DataFrame({'wes': wes, 'numpy': numpy}, index=group_sizes)
+
def make_plot(numpy, wes):
pass
diff --git a/bench/io_roundtrip.py b/bench/io_roundtrip.py
index 6b86d2a6bd283..a9711dbb83b8a 100644
--- a/bench/io_roundtrip.py
+++ b/bench/io_roundtrip.py
@@ -1,10 +1,12 @@
-import time, os
+import time
+import os
import numpy as np
import la
import pandas
from pandas import datetools, DateRange
+
def timeit(f, iterations):
start = time.clock()
@@ -13,6 +15,7 @@ def timeit(f, iterations):
return time.clock() - start
+
def rountrip_archive(N, K=50, iterations=10):
# Create data
arr = np.random.randn(N, K)
@@ -75,12 +78,14 @@ def rountrip_archive(N, K=50, iterations=10):
except:
pass
+
def numpy_roundtrip(filename, arr1, arr2):
np.savez(filename, arr1=arr1, arr2=arr2)
npz = np.load(filename)
arr1 = npz['arr1']
arr2 = npz['arr2']
+
def larry_roundtrip(filename, lar1, lar2):
io = la.IO(filename)
io['lar1'] = lar1
@@ -88,6 +93,7 @@ def larry_roundtrip(filename, lar1, lar2):
lar1 = io['lar1']
lar2 = io['lar2']
+
def pandas_roundtrip(filename, dma1, dma2):
# What's the best way to code this?
from pandas.io.pytables import HDFStore
@@ -97,6 +103,7 @@ def pandas_roundtrip(filename, dma1, dma2):
dma1 = store['dma1']
dma2 = store['dma2']
+
def pandas_roundtrip_pickle(filename, dma1, dma2):
dma1.save(filename)
dma1 = pandas.DataFrame.load(filename)
diff --git a/bench/serialize.py b/bench/serialize.py
index 29eecfc4cc419..63f885a4efa88 100644
--- a/bench/serialize.py
+++ b/bench/serialize.py
@@ -1,9 +1,11 @@
-import time, os
+import time
+import os
import numpy as np
import la
import pandas
+
def timeit(f, iterations):
start = time.clock()
@@ -12,6 +14,7 @@ def timeit(f, iterations):
return time.clock() - start
+
def roundtrip_archive(N, iterations=10):
# Create data
@@ -52,12 +55,14 @@ def roundtrip_archive(N, iterations=10):
print 'larry (HDF5) %7.4f seconds' % larry_time
print 'pandas (HDF5) %7.4f seconds' % pandas_time
+
def numpy_roundtrip(filename, arr1, arr2):
np.savez(filename, arr1=arr1, arr2=arr2)
npz = np.load(filename)
arr1 = npz['arr1']
arr2 = npz['arr2']
+
def larry_roundtrip(filename, lar1, lar2):
io = la.IO(filename)
io['lar1'] = lar1
@@ -65,6 +70,7 @@ def larry_roundtrip(filename, lar1, lar2):
lar1 = io['lar1']
lar2 = io['lar2']
+
def pandas_roundtrip(filename, dma1, dma2):
from pandas.io.pytables import HDFStore
store = HDFStore(filename)
@@ -73,6 +79,7 @@ def pandas_roundtrip(filename, dma1, dma2):
dma1 = store['dma1']
dma2 = store['dma2']
+
def pandas_roundtrip_pickle(filename, dma1, dma2):
dma1.save(filename)
dma1 = pandas.DataFrame.load(filename)
diff --git a/bench/test.py b/bench/test.py
index 7fdf94fd1c62d..2ac91468d7b73 100644
--- a/bench/test.py
+++ b/bench/test.py
@@ -9,6 +9,7 @@
lon = np.random.randint(0, 360, N)
data = np.random.randn(N)
+
def groupby1(lat, lon, data):
indexer = np.lexsort((lon, lat))
lat = lat.take(indexer)
@@ -25,6 +26,7 @@ def groupby1(lat, lon, data):
return dict(zip(zip(lat.take(decoder), lon.take(decoder)), result))
+
def group_mean(lat, lon, data):
indexer = np.lexsort((lon, lat))
lat = lat.take(indexer)
@@ -39,15 +41,17 @@ def group_mean(lat, lon, data):
return dict(zip(zip(lat.take(decoder), lon.take(decoder)), result))
+
def group_mean_naive(lat, lon, data):
grouped = collections.defaultdict(list)
for lt, ln, da in zip(lat, lon, data):
- grouped[(lt, ln)].append(da)
+ grouped[(lt, ln)].append(da)
averaged = dict((ltln, np.mean(da)) for ltln, da in grouped.items())
return averaged
+
def group_agg(values, bounds, f):
N = len(values)
result = np.empty(len(bounds), dtype=float)
@@ -57,7 +61,7 @@ def group_agg(values, bounds, f):
else:
right_bound = bounds[i + 1]
- result[i] = f(values[left_bound : right_bound])
+ result[i] = f(values[left_bound: right_bound])
return result
diff --git a/bench/zoo_bench.py b/bench/zoo_bench.py
index 450d659cdc655..74cb1952a5a2a 100644
--- a/bench/zoo_bench.py
+++ b/bench/zoo_bench.py
@@ -3,6 +3,8 @@
n = 1000000
# indices = Index([rands(10) for _ in xrange(n)])
+
+
def sample(values, k):
sampler = np.random.permutation(len(values))
return values.take(sampler[:k])
@@ -32,4 +34,3 @@ def sample(values, k):
# df1 = DataFrame(np.random.randn(1000000, 5), idx1, columns=range(5))
# df2 = DataFrame(np.random.randn(1000000, 5), idx2, columns=range(5, 10))
-
diff --git a/doc/make.py b/doc/make.py
index 5affd4b2414ed..adf34920b9ede 100755
--- a/doc/make.py
+++ b/doc/make.py
@@ -25,30 +25,35 @@
SPHINX_BUILD = 'sphinxbuild'
+
def upload_dev():
'push a copy to the pydata dev directory'
if os.system('cd build/html; rsync -avz . pandas@pandas.pydata.org'
':/usr/share/nginx/pandas/pandas-docs/dev/ -essh'):
raise SystemExit('Upload to Pydata Dev failed')
+
def upload_dev_pdf():
'push a copy to the pydata dev directory'
if os.system('cd build/latex; scp pandas.pdf pandas@pandas.pydata.org'
':/usr/share/nginx/pandas/pandas-docs/dev/'):
raise SystemExit('PDF upload to Pydata Dev failed')
+
def upload_stable():
'push a copy to the pydata stable directory'
if os.system('cd build/html; rsync -avz . pandas@pandas.pydata.org'
':/usr/share/nginx/pandas/pandas-docs/stable/ -essh'):
raise SystemExit('Upload to stable failed')
+
def upload_stable_pdf():
'push a copy to the pydata dev directory'
if os.system('cd build/latex; scp pandas.pdf pandas@pandas.pydata.org'
':/usr/share/nginx/pandas/pandas-docs/stable/'):
raise SystemExit('PDF upload to stable failed')
+
def upload_prev(ver, doc_root='./'):
'push a copy of older release to appropriate version directory'
local_dir = doc_root + 'build/html'
@@ -57,7 +62,8 @@ def upload_prev(ver, doc_root='./'):
cmd = cmd % (local_dir, remote_dir)
print cmd
if os.system(cmd):
- raise SystemExit('Upload to %s from %s failed' % (remote_dir, local_dir))
+ raise SystemExit(
+ 'Upload to %s from %s failed' % (remote_dir, local_dir))
local_dir = doc_root + 'build/latex'
pdf_cmd = 'cd %s; scp pandas.pdf pandas@pandas.pydata.org:%s'
@@ -65,6 +71,7 @@ def upload_prev(ver, doc_root='./'):
if os.system(pdf_cmd):
raise SystemExit('Upload PDF to %s from %s failed' % (ver, doc_root))
+
def build_prev(ver):
if os.system('git checkout v%s' % ver) != 1:
os.chdir('..')
@@ -76,6 +83,7 @@ def build_prev(ver):
os.system('python make.py latex')
os.system('git checkout master')
+
def clean():
if os.path.exists('build'):
shutil.rmtree('build')
@@ -83,12 +91,14 @@ def clean():
if os.path.exists('source/generated'):
shutil.rmtree('source/generated')
+
def html():
check_build()
if os.system('sphinx-build -P -b html -d build/doctrees '
'source build/html'):
raise SystemExit("Building HTML failed.")
+
def latex():
check_build()
if sys.platform != 'win32':
@@ -108,6 +118,7 @@ def latex():
else:
print('latex build has not been tested on windows')
+
def check_build():
build_dirs = [
'build', 'build/doctrees', 'build/html',
@@ -119,10 +130,12 @@ def check_build():
except OSError:
pass
+
def all():
# clean()
html()
+
def auto_dev_build(debug=False):
msg = ''
try:
@@ -145,6 +158,7 @@ def auto_dev_build(debug=False):
msg = str(inst) + '\n'
sendmail(step, '[ERROR] ' + msg)
+
def sendmail(step=None, err_msg=None):
from_name, to_name = _get_config()
@@ -177,6 +191,7 @@ def sendmail(step=None, err_msg=None):
finally:
server.close()
+
def _get_dir(subdir=None):
import getpass
USERNAME = getpass.getuser()
@@ -190,6 +205,7 @@ def _get_dir(subdir=None):
conf_dir = '%s/%s' % (HOME, subdir)
return conf_dir
+
def _get_credentials():
tmp_dir = _get_dir()
cred = '%s/credentials' % tmp_dir
@@ -204,6 +220,7 @@ def _get_credentials():
return server, port, login, pwd
+
def _get_config():
tmp_dir = _get_dir()
with open('%s/addresses' % tmp_dir, 'r') as fh:
@@ -211,17 +228,17 @@ def _get_config():
return from_name, to_name
funcd = {
- 'html' : html,
- 'upload_dev' : upload_dev,
- 'upload_stable' : upload_stable,
- 'upload_dev_pdf' : upload_dev_pdf,
- 'upload_stable_pdf' : upload_stable_pdf,
- 'latex' : latex,
- 'clean' : clean,
- 'auto_dev' : auto_dev_build,
- 'auto_debug' : lambda: auto_dev_build(True),
- 'all' : all,
- }
+ 'html': html,
+ 'upload_dev': upload_dev,
+ 'upload_stable': upload_stable,
+ 'upload_dev_pdf': upload_dev_pdf,
+ 'upload_stable_pdf': upload_stable_pdf,
+ 'latex': latex,
+ 'clean': clean,
+ 'auto_dev': auto_dev_build,
+ 'auto_debug': lambda: auto_dev_build(True),
+ 'all': all,
+}
small_docs = False
@@ -240,10 +257,10 @@ def _get_config():
for arg in sys.argv[1:]:
func = funcd.get(arg)
if func is None:
- raise SystemExit('Do not know how to handle %s; valid args are %s'%(
- arg, funcd.keys()))
+ raise SystemExit('Do not know how to handle %s; valid args are %s' % (
+ arg, funcd.keys()))
func()
else:
small_docs = False
all()
-#os.chdir(current_dir)
+# os.chdir(current_dir)
diff --git a/doc/plots/stats/moment_plots.py b/doc/plots/stats/moment_plots.py
index 7c8b6fb56bf38..9e3a902592c6b 100644
--- a/doc/plots/stats/moment_plots.py
+++ b/doc/plots/stats/moment_plots.py
@@ -4,11 +4,13 @@
import pandas.util.testing as t
import pandas.stats.moments as m
+
def test_series(n=1000):
t.N = n
s = t.makeTimeSeries()
return s
+
def plot_timeseries(*args, **kwds):
n = len(args)
@@ -17,13 +19,12 @@ def plot_timeseries(*args, **kwds):
titles = kwds.get('titles', None)
for k in range(1, n + 1):
- ax = axes[k-1]
- ts = args[k-1]
+ ax = axes[k - 1]
+ ts = args[k - 1]
ax.plot(ts.index, ts.values)
if titles:
- ax.set_title(titles[k-1])
+ ax.set_title(titles[k - 1])
fig.autofmt_xdate()
fig.subplots_adjust(bottom=0.10, top=0.95)
-
diff --git a/doc/plots/stats/moments_expw.py b/doc/plots/stats/moments_expw.py
index 699b6cce7ca9f..5fff419b3a940 100644
--- a/doc/plots/stats/moments_expw.py
+++ b/doc/plots/stats/moments_expw.py
@@ -19,8 +19,10 @@
ax1.plot(s.index, m.rolling_mean(s, 50, min_periods=1).values, color='r')
ax1.set_title('rolling_mean vs. ewma')
-line1 = ax2.plot(s.index, m.ewmstd(s, span=50, min_periods=1).values, color='b')
-line2 = ax2.plot(s.index, m.rolling_std(s, 50, min_periods=1).values, color='r')
+line1 = ax2.plot(
+ s.index, m.ewmstd(s, span=50, min_periods=1).values, color='b')
+line2 = ax2.plot(
+ s.index, m.rolling_std(s, 50, min_periods=1).values, color='r')
ax2.set_title('rolling_std vs. ewmstd')
fig.legend((line1, line2),
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 692c7757ee17c..76093d83b32e7 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -10,12 +10,13 @@
# All configuration values have a default; values that are commented out
# serve to show the default.
-import sys, os
+import sys
+import os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
-#sys.path.append(os.path.abspath('.'))
+# sys.path.append(os.path.abspath('.'))
sys.path.insert(0, os.path.abspath('../sphinxext'))
sys.path.extend([
@@ -27,7 +28,7 @@
])
-# -- General configuration -----------------------------------------------------
+# -- General configuration -----------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones. sphinxext.
@@ -56,7 +57,7 @@
source_suffix = '.rst'
# The encoding of source files.
-#source_encoding = 'utf-8'
+# source_encoding = 'utf-8'
# The master toctree document.
master_doc = 'index'
@@ -83,43 +84,43 @@
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
-#language = None
+# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
-#today = ''
+# today = ''
# Else, today_fmt is used as the format for a strftime call.
-#today_fmt = '%B %d, %Y'
+# today_fmt = '%B %d, %Y'
# List of documents that shouldn't be included in the build.
-#unused_docs = []
+# unused_docs = []
# List of directories, relative to source directory, that shouldn't be searched
# for source files.
exclude_trees = []
# The reST default role (used for this markup: `text`) to use for all documents.
-#default_role = None
+# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
-#add_function_parentheses = True
+# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
-#add_module_names = True
+# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
-#show_authors = False
+# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
-#modindex_common_prefix = []
+# modindex_common_prefix = []
-# -- Options for HTML output ---------------------------------------------------
+# -- Options for HTML output ---------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
@@ -128,31 +129,31 @@
# The style sheet to use for HTML and HTML Help pages. A file of that name
# must exist either in Sphinx' static/ path, or in one of the custom paths
# given in html_static_path.
-#html_style = 'statsmodels.css'
+# html_style = 'statsmodels.css'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
-#html_theme_options = {}
+# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = ['themes']
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
-#html_title = None
+# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
-#html_short_title = None
+# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
-#html_logo = None
+# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
-#html_favicon = None
+# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
@@ -161,82 +162,82 @@
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
-#html_last_updated_fmt = '%b %d, %Y'
+# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
-#html_use_smartypants = True
+# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
-#html_sidebars = {}
+# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
-#html_additional_pages = {}
+# html_additional_pages = {}
# If false, no module index is generated.
html_use_modindex = True
# If false, no index is generated.
-#html_use_index = True
+# html_use_index = True
# If true, the index is split into individual pages for each letter.
-#html_split_index = False
+# html_split_index = False
# If true, links to the reST sources are added to the pages.
-#html_show_sourcelink = True
+# html_show_sourcelink = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
-#html_use_opensearch = ''
+# html_use_opensearch = ''
# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
-#html_file_suffix = ''
+# html_file_suffix = ''
# Output file base name for HTML help builder.
htmlhelp_basename = 'pandas'
-# -- Options for LaTeX output --------------------------------------------------
+# -- Options for LaTeX output --------------------------------------------
# The paper size ('letter' or 'a4').
-#latex_paper_size = 'letter'
+# latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
-#latex_font_size = '10pt'
+# latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
- ('index', 'pandas.tex',
- u'pandas: powerful Python data analysis toolkit',
- u'Wes McKinney\n\& PyData Development Team', 'manual'),
+ ('index', 'pandas.tex',
+ u'pandas: powerful Python data analysis toolkit',
+ u'Wes McKinney\n\& PyData Development Team', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
-#latex_logo = None
+# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
-#latex_use_parts = False
+# latex_use_parts = False
# Additional stuff for the LaTeX preamble.
-#latex_preamble = ''
+# latex_preamble = ''
# Documents to append as an appendix to all manuals.
-#latex_appendices = []
+# latex_appendices = []
# If false, no module index is generated.
-#latex_use_modindex = True
+# latex_use_modindex = True
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {
- 'statsmodels' : ('http://statsmodels.sourceforge.net/devel/', None),
- 'python': ('http://docs.python.org/', None)
- }
+ 'statsmodels': ('http://statsmodels.sourceforge.net/devel/', None),
+ 'python': ('http://docs.python.org/', None)
+}
import glob
autosummary_generate = glob.glob("*.rst")
@@ -244,6 +245,6 @@
extlinks = {'issue': ('https://github.com/pydata/pandas/issues/%s',
'issue '),
'pull request': ('https://github.com/pydata/pandas/pulls/%s',
- 'pull request '),
+ 'pull request '),
'wiki': ('https://github.com/pydata/pandas/pulls/%s',
- 'wiki ')}
+ 'wiki ')}
diff --git a/examples/finance.py b/examples/finance.py
index 639a80afbef8e..24aa337a84024 100644
--- a/examples/finance.py
+++ b/examples/finance.py
@@ -16,16 +16,17 @@
startDate = datetime(2008, 1, 1)
endDate = datetime(2009, 9, 1)
+
def getQuotes(symbol, start, end):
quotes = fin.quotes_historical_yahoo(symbol, start, end)
dates, open, close, high, low, volume = zip(*quotes)
data = {
- 'open' : open,
- 'close' : close,
- 'high' : high,
- 'low' : low,
- 'volume' : volume
+ 'open': open,
+ 'close': close,
+ 'high': high,
+ 'low': low,
+ 'volume': volume
}
dates = Index([datetime.fromordinal(int(d)) for d in dates])
@@ -36,10 +37,10 @@ def getQuotes(symbol, start, end):
goog = getQuotes('GOOG', startDate, endDate)
ibm = getQuotes('IBM', startDate, endDate)
-px = DataFrame({'MSFT' : msft['close'],
- 'IBM' : ibm['close'],
- 'GOOG' : goog['close'],
- 'AAPL' : aapl['close']})
+px = DataFrame({'MSFT': msft['close'],
+ 'IBM': ibm['close'],
+ 'GOOG': goog['close'],
+ 'AAPL': aapl['close']})
returns = px / px.shift(1) - 1
# Select dates
@@ -54,6 +55,7 @@ def getQuotes(symbol, start, end):
# Aggregate monthly
+
def toMonthly(frame, how):
offset = BMonthEnd()
@@ -65,8 +67,8 @@ def toMonthly(frame, how):
# Statistics
stdev = DataFrame({
- 'MSFT' : msft.std(),
- 'IBM' : ibm.std()
+ 'MSFT': msft.std(),
+ 'IBM': ibm.std()
})
# Arithmetic
diff --git a/examples/regressions.py b/examples/regressions.py
index e78ff90a22687..2d21a0ece58c3 100644
--- a/examples/regressions.py
+++ b/examples/regressions.py
@@ -11,13 +11,15 @@
start = datetime(2009, 9, 2)
dateRange = DateRange(start, periods=N)
+
def makeDataFrame():
data = DataFrame(np.random.randn(N, 7),
- columns=list(string.ascii_uppercase[:7]),
- index=dateRange)
+ columns=list(string.ascii_uppercase[:7]),
+ index=dateRange)
return data
+
def makeSeries():
return Series(np.random.randn(N), index=dateRange)
@@ -25,7 +27,7 @@ def makeSeries():
# Standard rolling linear regression
X = makeDataFrame()
-Y = makeSeries()
+Y = makeSeries()
model = ols(y=Y, x=X)
@@ -35,9 +37,9 @@ def makeSeries():
# Panel regression
data = {
- 'A' : makeDataFrame(),
- 'B' : makeDataFrame(),
- 'C' : makeDataFrame()
+ 'A': makeDataFrame(),
+ 'B': makeDataFrame(),
+ 'C': makeDataFrame()
}
Y = makeDataFrame()
diff --git a/ez_setup.py b/ez_setup.py
index 1ff1d3e7a6839..de65d3c1f0375 100644
--- a/ez_setup.py
+++ b/ez_setup.py
@@ -15,7 +15,8 @@
"""
import sys
DEFAULT_VERSION = "0.6c11"
-DEFAULT_URL = "http://pypi.python.org/packages/%s/s/setuptools/" % sys.version[:3]
+DEFAULT_URL = "http://pypi.python.org/packages/%s/s/setuptools/" % sys.version[
+ :3]
md5_data = {
'setuptools-0.6b1-py2.3.egg': '8822caf901250d848b996b7f25c6e6ca',
@@ -62,9 +63,13 @@
'setuptools-0.6c9-py2.6.egg': 'ca37b1ff16fa2ede6e19383e7b59245a',
}
-import sys, os
-try: from hashlib import md5
-except ImportError: from md5 import md5
+import sys
+import os
+try:
+ from hashlib import md5
+except ImportError:
+ from md5 import md5
+
def _validate_md5(egg_name, data):
if egg_name in md5_data:
@@ -77,6 +82,7 @@ def _validate_md5(egg_name, data):
sys.exit(2)
return data
+
def use_setuptools(
version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
download_delay=15
@@ -93,23 +99,27 @@ def use_setuptools(
an attempt to abort the calling script.
"""
was_imported = 'pkg_resources' in sys.modules or 'setuptools' in sys.modules
+
def do_download():
- egg = download_setuptools(version, download_base, to_dir, download_delay)
+ egg = download_setuptools(
+ version, download_base, to_dir, download_delay)
sys.path.insert(0, egg)
- import setuptools; setuptools.bootstrap_install_from = egg
+ import setuptools
+ setuptools.bootstrap_install_from = egg
try:
import pkg_resources
except ImportError:
- return do_download()
+ return do_download()
try:
- pkg_resources.require("setuptools>="+version); return
+ pkg_resources.require("setuptools>=" + version)
+ return
except pkg_resources.VersionConflict, e:
if was_imported:
print >>sys.stderr, (
- "The required version of setuptools (>=%s) is not available, and\n"
- "can't be installed while this script is running. Please install\n"
- " a more recent version first, using 'easy_install -U setuptools'."
- "\n\n(Currently using %r)"
+ "The required version of setuptools (>=%s) is not available, and\n"
+ "can't be installed while this script is running. Please install\n"
+ " a more recent version first, using 'easy_install -U setuptools'."
+ "\n\n(Currently using %r)"
) % (version, e.args[0])
sys.exit(2)
else:
@@ -118,9 +128,10 @@ def do_download():
except pkg_resources.DistributionNotFound:
return do_download()
+
def download_setuptools(
version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
- delay = 15
+ delay=15
):
"""Download setuptools from a specified location and return its filename
@@ -129,8 +140,9 @@ def download_setuptools(
with a '/'). `to_dir` is the directory where the egg will be downloaded.
`delay` is the number of seconds to pause before an actual download attempt.
"""
- import urllib2, shutil
- egg_name = "setuptools-%s-py%s.egg" % (version,sys.version[:3])
+ import urllib2
+ import shutil
+ egg_name = "setuptools-%s-py%s.egg" % (version, sys.version[:3])
url = download_base + egg_name
saveto = os.path.join(to_dir, egg_name)
src = dst = None
@@ -152,54 +164,25 @@ def download_setuptools(
and place it in this directory before rerunning this script.)
---------------------------------------------------------------------------""",
- version, download_base, delay, url
- ); from time import sleep; sleep(delay)
+ version, download_base, delay, url
+ )
+ from time import sleep
+ sleep(delay)
log.warn("Downloading %s", url)
src = urllib2.urlopen(url)
# Read/write all in one block, so we don't create a corrupt file
# if the download is interrupted.
data = _validate_md5(egg_name, src.read())
- dst = open(saveto,"wb"); dst.write(data)
+ dst = open(saveto, "wb")
+ dst.write(data)
finally:
- if src: src.close()
- if dst: dst.close()
+ if src:
+ src.close()
+ if dst:
+ dst.close()
return os.path.realpath(saveto)
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
def main(argv, version=DEFAULT_VERSION):
"""Install or upgrade setuptools and EasyInstall"""
try:
@@ -208,21 +191,21 @@ def main(argv, version=DEFAULT_VERSION):
egg = None
try:
egg = download_setuptools(version, delay=0)
- sys.path.insert(0,egg)
+ sys.path.insert(0, egg)
from setuptools.command.easy_install import main
- return main(list(argv)+[egg]) # we're done here
+ return main(list(argv) + [egg]) # we're done here
finally:
if egg and os.path.exists(egg):
os.unlink(egg)
else:
if setuptools.__version__ == '0.0.1':
print >>sys.stderr, (
- "You have an obsolete version of setuptools installed. Please\n"
- "remove it from your system entirely before rerunning this script."
+ "You have an obsolete version of setuptools installed. Please\n"
+ "remove it from your system entirely before rerunning this script."
)
sys.exit(2)
- req = "setuptools>="+version
+ req = "setuptools>=" + version
import pkg_resources
try:
pkg_resources.require(req)
@@ -231,16 +214,17 @@ def main(argv, version=DEFAULT_VERSION):
from setuptools.command.easy_install import main
except ImportError:
from easy_install import main
- main(list(argv)+[download_setuptools(delay=0)])
- sys.exit(0) # try to force an exit
+ main(list(argv) + [download_setuptools(delay=0)])
+ sys.exit(0) # try to force an exit
else:
if argv:
from setuptools.command.easy_install import main
main(argv)
else:
- print "Setuptools version",version,"or greater has been installed."
+ print "Setuptools version", version, "or greater has been installed."
print '(Run "ez_setup.py -U setuptools" to reinstall or upgrade.)'
+
def update_md5(filenames):
"""Update our built-in md5 registry"""
@@ -248,7 +232,7 @@ def update_md5(filenames):
for name in filenames:
base = os.path.basename(name)
- f = open(name,'rb')
+ f = open(name, 'rb')
md5_data[base] = md5(f.read()).hexdigest()
f.close()
@@ -258,7 +242,9 @@ def update_md5(filenames):
import inspect
srcfile = inspect.getsourcefile(sys.modules[__name__])
- f = open(srcfile, 'rb'); src = f.read(); f.close()
+ f = open(srcfile, 'rb')
+ src = f.read()
+ f.close()
match = re.search("\nmd5_data = {\n([^}]+)}", src)
if not match:
@@ -266,19 +252,13 @@ def update_md5(filenames):
sys.exit(2)
src = src[:match.start(1)] + repl + src[match.end(1):]
- f = open(srcfile,'w')
+ f = open(srcfile, 'w')
f.write(src)
f.close()
-if __name__=='__main__':
- if len(sys.argv)>2 and sys.argv[1]=='--md5update':
+if __name__ == '__main__':
+ if len(sys.argv) > 2 and sys.argv[1] == '--md5update':
update_md5(sys.argv[2:])
else:
main(sys.argv[1:])
-
-
-
-
-
-
diff --git a/pandas/__init__.py b/pandas/__init__.py
index 6c58c708b8306..5780ddfbe1fdc 100644
--- a/pandas/__init__.py
+++ b/pandas/__init__.py
@@ -6,7 +6,7 @@
from . import hashtable, tslib, lib
except Exception: # pragma: no cover
import sys
- e = sys.exc_info()[1] # Py25 and Py3 current exception syntax conflict
+ e = sys.exc_info()[1] # Py25 and Py3 current exception syntax conflict
print e
if 'No module named lib' in str(e):
raise ImportError('C extensions not built: if you installed already '
diff --git a/pandas/compat/scipy.py b/pandas/compat/scipy.py
index 9f021a01ebce3..aab8bd89c5af4 100644
--- a/pandas/compat/scipy.py
+++ b/pandas/compat/scipy.py
@@ -63,7 +63,7 @@ def scoreatpercentile(a, per, limit=(), interpolation_method='fraction'):
if limit:
values = values[(limit[0] <= values) & (values <= limit[1])]
- idx = per /100. * (values.shape[0] - 1)
+ idx = per / 100. * (values.shape[0] - 1)
if (idx % 1 == 0):
score = values[idx]
else:
@@ -75,7 +75,7 @@ def scoreatpercentile(a, per, limit=(), interpolation_method='fraction'):
elif interpolation_method == 'higher':
score = values[np.ceil(idx)]
else:
- raise ValueError("interpolation_method can only be 'fraction', " \
+ raise ValueError("interpolation_method can only be 'fraction', "
"'lower' or 'higher'")
return score
@@ -85,7 +85,7 @@ def _interpolate(a, b, fraction):
"""Returns the point at the given fraction between a and b, where
'fraction' must be between 0 and 1.
"""
- return a + (b - a)*fraction
+ return a + (b - a) * fraction
def rankdata(a):
@@ -121,9 +121,9 @@ def rankdata(a):
for i in xrange(n):
sumranks += i
dupcount += 1
- if i==n-1 or svec[i] != svec[i+1]:
+ if i == n - 1 or svec[i] != svec[i + 1]:
averank = sumranks / float(dupcount) + 1
- for j in xrange(i-dupcount+1,i+1):
+ for j in xrange(i - dupcount + 1, i + 1):
newarray[ivec[j]] = averank
sumranks = 0
dupcount = 0
diff --git a/pandas/core/api.py b/pandas/core/api.py
index 7820a94fdf4b1..306f9aff8f4d3 100644
--- a/pandas/core/api.py
+++ b/pandas/core/api.py
@@ -27,8 +27,8 @@
from pandas.tseries.period import Period, PeriodIndex
# legacy
-from pandas.core.daterange import DateRange # deprecated
+from pandas.core.daterange import DateRange # deprecated
import pandas.core.datetools as datetools
-from pandas.core.config import get_option,set_option,reset_option,\
- describe_option, options
+from pandas.core.config import get_option, set_option, reset_option,\
+ describe_option, options
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 869cc513d0aaf..6fc490e133905 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -53,6 +53,7 @@ def isnull(obj):
'''
return _isnull(obj)
+
def _isnull_new(obj):
if lib.isscalar(obj):
return lib.checknull(obj)
@@ -68,6 +69,7 @@ def _isnull_new(obj):
else:
return obj is None
+
def _isnull_old(obj):
'''
Detect missing values. Treat None, NaN, INF, -INF as null.
@@ -96,6 +98,7 @@ def _isnull_old(obj):
_isnull = _isnull_new
+
def _use_inf_as_null(key):
'''Option change callback for null/inf behaviour
Choose which replacement for numpy.isnan / -numpy.isfinite is used.
@@ -116,13 +119,12 @@ def _use_inf_as_null(key):
programmatically-creating-variables-in-python/4859312#4859312
'''
flag = get_option(key)
- if flag == True:
+ if flag:
globals()['_isnull'] = _isnull_old
else:
globals()['_isnull'] = _isnull_new
-
def _isnull_ndarraylike(obj):
from pandas import Series
values = np.asarray(obj)
@@ -176,6 +178,7 @@ def _isnull_ndarraylike_old(obj):
result = -np.isfinite(obj)
return result
+
def notnull(obj):
'''
Replacement for numpy.isfinite / -numpy.isnan which is suitable
@@ -278,7 +281,7 @@ def wrapper(arr, indexer, out, fill_value=np.nan):
'object': algos.take_2d_axis1_object,
'bool': _view_wrapper(algos.take_2d_axis1_bool, np.uint8),
'datetime64[ns]': _view_wrapper(algos.take_2d_axis1_int64, np.int64,
- na_override=tslib.iNaT),
+ na_override=tslib.iNaT),
}
_take2d_multi_dict = {
@@ -458,6 +461,7 @@ def mask_out_axis(arr, mask, axis, fill_value=np.nan):
'int32': algos.diff_2d_int32
}
+
def diff(arr, n, axis=0):
n = int(n)
dtype = arr.dtype
@@ -628,6 +632,7 @@ def _consensus_name_attr(objs):
#----------------------------------------------------------------------
# Lots of little utilities
+
def _possibly_cast_to_datetime(value, dtype):
""" try to cast the array/value to a datetimelike dtype, converting float nan to iNaT """
@@ -648,9 +653,10 @@ def _possibly_cast_to_datetime(value, dtype):
value = tslib.array_to_datetime(value)
except:
pass
-
+
return value
+
def _infer_dtype(value):
if isinstance(value, (float, np.floating)):
return np.float_
@@ -671,6 +677,7 @@ def _possibly_cast_item(obj, item, dtype):
elif not issubclass(dtype, (np.integer, np.bool_)): # pragma: no cover
raise ValueError("Unexpected dtype encountered: %s" % dtype)
+
def _is_bool_indexer(key):
if isinstance(key, np.ndarray) and key.dtype == np.object_:
key = np.asarray(key)
@@ -691,6 +698,7 @@ def _is_bool_indexer(key):
return False
+
def _default_index(n):
from pandas.core.index import Int64Index
values = np.arange(n, dtype=np.int64)
@@ -805,24 +813,26 @@ def iterpairs(seq):
return itertools.izip(seq_it, seq_it_next)
+
def split_ranges(mask):
""" Generates tuples of ranges which cover all True value in mask
>>> list(split_ranges([1,0,0,1,0]))
[(0, 1), (3, 4)]
"""
- ranges = [(0,len(mask))]
+ ranges = [(0, len(mask))]
- for pos,val in enumerate(mask):
- if not val: # this pos should be ommited, split off the prefix range
+ for pos, val in enumerate(mask):
+        if not val:  # this pos should be omitted, split off the prefix range
r = ranges.pop()
- if pos > r[0]: # yield non-zero range
- yield (r[0],pos)
- if pos+1 < len(mask): # save the rest for processing
- ranges.append((pos+1,len(mask)))
+ if pos > r[0]: # yield non-zero range
+ yield (r[0], pos)
+ if pos + 1 < len(mask): # save the rest for processing
+ ranges.append((pos + 1, len(mask)))
if ranges:
yield ranges[-1]
+
def indent(string, spaces=4):
dent = ' ' * spaces
return '\n'.join([dent + x for x in string.split('\n')])
@@ -972,6 +982,7 @@ def is_integer_dtype(arr_or_dtype):
(issubclass(tipo, np.datetime64) or
issubclass(tipo, np.timedelta64)))
+
def _is_int_or_datetime_dtype(arr_or_dtype):
# also timedelta64
if isinstance(arr_or_dtype, np.dtype):
@@ -980,6 +991,7 @@ def _is_int_or_datetime_dtype(arr_or_dtype):
tipo = arr_or_dtype.dtype.type
return issubclass(tipo, np.integer)
+
def is_datetime64_dtype(arr_or_dtype):
if isinstance(arr_or_dtype, np.dtype):
tipo = arr_or_dtype.type
@@ -1022,7 +1034,7 @@ def _astype_nansafe(arr, dtype):
if dtype == object:
return tslib.ints_to_pydatetime(arr.view(np.int64))
elif (np.issubdtype(arr.dtype, np.floating) and
- np.issubdtype(dtype, np.integer)):
+ np.issubdtype(dtype, np.integer)):
if np.isnan(arr).any():
raise ValueError('Cannot convert NA to integer')
@@ -1091,8 +1103,6 @@ def load(path):
f.close()
-
-
class UTF8Recoder:
"""
Iterator that reads an encoded stream and reencodes the input to UTF-8
@@ -1214,6 +1224,7 @@ def _concat_compat(to_concat, axis=0):
else:
return np.concatenate(to_concat, axis=axis)
+
def in_interactive_session():
""" check if we're running in an interactive shell
@@ -1229,6 +1240,7 @@ def check_main():
except:
return check_main()
+
def in_qtconsole():
"""
check if we're inside an IPython qtconsole
@@ -1277,6 +1289,7 @@ def _pprint_seq(seq, _nest_lvl=0, **kwds):
fmt = u"[%s]" if hasattr(seq, '__setitem__') else u"(%s)"
return fmt % ", ".join(pprint_thing(e, _nest_lvl + 1, **kwds) for e in seq)
+
def _pprint_dict(seq, _nest_lvl=0):
"""
internal. pprinter for iterables. you should probably use pprint_thing()
@@ -1314,14 +1327,14 @@ def pprint_thing(thing, _nest_lvl=0, escape_chars=None):
if thing is None:
result = ''
- elif (py3compat.PY3 and hasattr(thing,'__next__')) or \
- hasattr(thing,'next'):
+ elif (py3compat.PY3 and hasattr(thing, '__next__')) or \
+ hasattr(thing, 'next'):
return unicode(thing)
elif (isinstance(thing, dict) and
_nest_lvl < get_option("display.pprint_nest_depth")):
result = _pprint_dict(thing, _nest_lvl)
elif _is_sequence(thing) and _nest_lvl < \
- get_option("display.pprint_nest_depth"):
+ get_option("display.pprint_nest_depth"):
result = _pprint_seq(thing, _nest_lvl, escape_chars=escape_chars)
else:
# when used internally in the package, everything
@@ -1344,14 +1357,14 @@ def pprint_thing(thing, _nest_lvl=0, escape_chars=None):
}
escape_chars = escape_chars or tuple()
for c in escape_chars:
- result=result.replace(c,translate[c])
+ result = result.replace(c, translate[c])
return unicode(result) # always unicode
def pprint_thing_encoded(object, encoding='utf-8', errors='replace', **kwds):
value = pprint_thing(object) # get unicode representation of object
- return value.encode(encoding, errors,**kwds)
+ return value.encode(encoding, errors, **kwds)
def console_encode(object, **kwds):
@@ -1363,4 +1376,4 @@ def console_encode(object, **kwds):
where you output to the console.
"""
return pprint_thing_encoded(object,
- get_option("display.encoding"))
+ get_option("display.encoding"))
diff --git a/pandas/core/config.py b/pandas/core/config.py
index 2d5652690388b..aca01adb3c58d 100644
--- a/pandas/core/config.py
+++ b/pandas/core/config.py
@@ -56,7 +56,8 @@
import warnings
DeprecatedOption = namedtuple('DeprecatedOption', 'key msg rkey removal_ver')
-RegisteredOption = namedtuple('RegisteredOption', 'key defval doc validator cb')
+RegisteredOption = namedtuple(
+ 'RegisteredOption', 'key defval doc validator cb')
_deprecated_options = {} # holds deprecated option metdata
_registered_options = {} # holds registered option metdata
@@ -84,8 +85,9 @@ def _get_single_key(pat, silent):
return key
+
def _get_option(pat, silent=False):
- key = _get_single_key(pat,silent)
+ key = _get_single_key(pat, silent)
# walk the nested dict
root, k = _get_root(key)
@@ -93,7 +95,7 @@ def _get_option(pat, silent=False):
def _set_option(pat, value, silent=False):
- key = _get_single_key(pat,silent)
+ key = _get_single_key(pat, silent)
o = _get_registered_option(key)
if o and o.validator:
@@ -138,33 +140,34 @@ def _reset_option(pat):
for k in keys:
_set_option(k, _registered_options[k].defval)
+
class DictWrapper(object):
""" provide attribute-style access to a nested dict
"""
- def __init__(self,d,prefix=""):
- object.__setattr__(self,"d",d)
- object.__setattr__(self,"prefix",prefix)
+ def __init__(self, d, prefix=""):
+ object.__setattr__(self, "d", d)
+ object.__setattr__(self, "prefix", prefix)
- def __setattr__(self,key,val):
- prefix = object.__getattribute__(self,"prefix")
+ def __setattr__(self, key, val):
+ prefix = object.__getattribute__(self, "prefix")
if prefix:
prefix += "."
prefix += key
# you can't set new keys
        # and you can't overwrite subtrees
- if key in self.d and not isinstance(self.d[key],dict):
- _set_option(prefix,val)
+ if key in self.d and not isinstance(self.d[key], dict):
+ _set_option(prefix, val)
else:
raise KeyError("You can only set the value of existing options")
- def __getattr__(self,key):
- prefix = object.__getattribute__(self,"prefix")
+ def __getattr__(self, key):
+ prefix = object.__getattribute__(self, "prefix")
if prefix:
prefix += "."
prefix += key
- v=object.__getattribute__(self,"d")[key]
- if isinstance(v,dict):
- return DictWrapper(v,prefix)
+ v = object.__getattribute__(self, "d")[key]
+ if isinstance(v, dict):
+ return DictWrapper(v, prefix)
else:
return _get_option(prefix)
@@ -179,6 +182,7 @@ def __dir__(self):
# using the py2.6+ advanced formatting syntax to plug in a concise list
# of options, and option descriptions.
+
class CallableDyanmicDoc(object):
def __init__(self, func, doc_tmpl):
@@ -193,7 +197,7 @@ def __doc__(self):
opts_desc = _describe_option('all', _print_desc=False)
opts_list = pp_options_list(_registered_options.keys())
return self.__doc_tmpl__.format(opts_desc=opts_desc,
- opts_list=opts_list)
+ opts_list=opts_list)
_get_option_tmpl = """"get_option(pat) - Retrieves the value of the specified option
@@ -219,7 +223,7 @@ def __doc__(self):
{opts_desc}
"""
-_set_option_tmpl="""set_option(pat,value) - Sets the value of the specified option
+_set_option_tmpl = """set_option(pat,value) - Sets the value of the specified option
Available options:
{opts_list}
@@ -245,7 +249,7 @@ def __doc__(self):
{opts_desc}
"""
-_describe_option_tmpl="""describe_option(pat,_print_desc=False) Prints the description
+_describe_option_tmpl = """describe_option(pat,_print_desc=False) Prints the description
for one or more registered options.
Call with no arguments to get a listing for all registered options.
@@ -270,7 +274,7 @@ def __doc__(self):
{opts_desc}
"""
-_reset_option_tmpl="""reset_option(pat) - Reset one or more options to their default value.
+_reset_option_tmpl = """reset_option(pat) - Reset one or more options to their default value.
Pass "all" as argument to reset all options.
@@ -303,19 +307,20 @@ def __doc__(self):
######################################################
# Functions for use by pandas developers, in addition to User - api
+
class option_context(object):
- def __init__(self,*args):
- assert len(args) % 2 == 0 and len(args)>=2, \
- "Need to invoke as option_context(pat,val,[(pat,val),..))."
- ops = zip(args[::2],args[1::2])
- undo=[]
- for pat,val in ops:
- undo.append((pat,_get_option(pat,silent=True)))
+ def __init__(self, *args):
+ assert len(args) % 2 == 0 and len(args) >= 2, \
+ "Need to invoke as option_context(pat,val,[(pat,val),..))."
+ ops = zip(args[::2], args[1::2])
+ undo = []
+ for pat, val in ops:
+ undo.append((pat, _get_option(pat, silent=True)))
self.undo = undo
- for pat,val in ops:
- _set_option(pat,val,silent=True)
+ for pat, val in ops:
+ _set_option(pat, val, silent=True)
def __enter__(self):
pass
@@ -325,6 +330,7 @@ def __exit__(self, *args):
for pat, val in self.undo:
_set_option(pat, val)
+
def register_option(key, defval, doc='', validator=None, cb=None):
"""Register an option in the package-wide pandas config object
@@ -348,7 +354,8 @@ def register_option(key, defval, doc='', validator=None, cb=None):
ValueError if `validator` is specified and `defval` is not a valid value.
"""
- import tokenize, keyword
+ import tokenize
+ import keyword
key = key.lower()
if key in _registered_options:
@@ -364,7 +371,7 @@ def register_option(key, defval, doc='', validator=None, cb=None):
path = key.split('.')
for k in path:
- if not bool(re.match('^'+tokenize.Name+'$', k)):
+ if not bool(re.match('^' + tokenize.Name + '$', k)):
raise ValueError("%s is not a valid identifier" % k)
if keyword.iskeyword(key):
raise ValueError("%s is a python keyword" % k)
@@ -374,7 +381,7 @@ def register_option(key, defval, doc='', validator=None, cb=None):
if not isinstance(cursor, dict):
raise KeyError("Path prefix to option '%s' is already an option"
% '.'.join(path[:i]))
- if not cursor.has_key(p):
+ if p not in cursor:
cursor[p] = {}
cursor = cursor[p]
@@ -382,12 +389,11 @@ def register_option(key, defval, doc='', validator=None, cb=None):
raise KeyError("Path prefix to option '%s' is already an option"
% '.'.join(path[:-1]))
-
cursor[path[-1]] = defval # initialize
# save the option metadata
_registered_options[key] = RegisteredOption(key=key, defval=defval,
- doc=doc, validator=validator,cb=cb)
+ doc=doc, validator=validator, cb=cb)
def deprecate_option(key, msg=None, rkey=None, removal_ver=None):
@@ -466,7 +472,7 @@ def _is_deprecated(key):
""" Returns True if the given option has been deprecated """
key = key.lower()
- return _deprecated_options.has_key(key)
+ return key in _deprecated_options
def _get_deprecated_option(key):
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index aaf0584174f7e..11b5afcf39a52 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -1,5 +1,5 @@
import pandas.core.config as cf
-from pandas.core.config import is_int,is_bool,is_text,is_float
+from pandas.core.config import is_int, is_bool, is_text, is_float
from pandas.core.format import detect_console_encoding
"""
@@ -18,25 +18,25 @@
###########################################
# options from the "display" namespace
-pc_precision_doc="""
+pc_precision_doc = """
: int
Floating point output precision (number of significant digits). This is
only a suggestion
"""
-pc_colspace_doc="""
+pc_colspace_doc = """
: int
Default space for DataFrame columns, defaults to 12
"""
-pc_max_rows_doc="""
+pc_max_rows_doc = """
: int
This sets the maximum number of rows pandas should output when printing
out various output. For example, this value determines whether the repr()
for a dataframe prints out fully or just an summary repr.
"""
-pc_max_cols_doc="""
+pc_max_cols_doc = """
: int
max_rows and max_columns are used in __repr__() methods to decide if
to_string() or info() is used to render an object to a string.
@@ -45,48 +45,48 @@
columns that can fit on it.
"""
-pc_max_info_cols_doc="""
+pc_max_info_cols_doc = """
: int
max_info_columns is used in DataFrame.info method to decide if
per column information will be printed.
"""
-pc_nb_repr_h_doc="""
+pc_nb_repr_h_doc = """
: boolean
When True (default), IPython notebook will use html representation for
pandas objects (if it is available).
"""
-pc_date_dayfirst_doc="""
+pc_date_dayfirst_doc = """
: boolean
When True, prints and parses dates with the day first, eg 20/01/2005
"""
-pc_date_yearfirst_doc="""
+pc_date_yearfirst_doc = """
: boolean
When True, prints and parses dates with the year first, eg 2005/01/20
"""
-pc_pprint_nest_depth="""
+pc_pprint_nest_depth = """
: int
Defaults to 3.
Controls the number of nested levels to process when pretty-printing
"""
-pc_multi_sparse_doc="""
+pc_multi_sparse_doc = """
: boolean
Default True, "sparsify" MultiIndex display (don't display repeated
elements in outer levels within groups)
"""
-pc_encoding_doc="""
+pc_encoding_doc = """
: str/unicode
Defaults to the detected encoding of the console.
Specifies the encoding to be used for strings returned by to_string,
these are generally strings meant to be displayed on the console.
"""
-float_format_doc="""
+float_format_doc = """
: callable
The callable should accept a floating point number and return
a string with the desired format of the number. This is used
@@ -95,19 +95,19 @@
"""
-max_colwidth_doc="""
+max_colwidth_doc = """
: int
The maximum width in characters of a column in the repr of
a pandas data structure. When the column overflows, a "..."
placeholder is embedded in the output.
"""
-colheader_justify_doc="""
+colheader_justify_doc = """
: 'left'/'right'
Controls the justification of column headers. used by DataFrameFormatter.
"""
-pc_expand_repr_doc="""
+pc_expand_repr_doc = """
: boolean
Default False
Whether to print out the full DataFrame repr for wide DataFrames
@@ -115,7 +115,7 @@
If False, the summary representation is shown.
"""
-pc_line_width_doc="""
+pc_line_width_doc = """
: int
Default 80
When printing wide DataFrames, this is the width of each line.
@@ -143,11 +143,11 @@
cf.register_option('multi_sparse', True, pc_multi_sparse_doc,
validator=is_bool)
cf.register_option('encoding', detect_console_encoding(), pc_encoding_doc,
- validator=is_text)
+ validator=is_text)
cf.register_option('expand_frame_repr', True, pc_expand_repr_doc)
cf.register_option('line_width', 80, pc_line_width_doc)
-tc_sim_interactive_doc="""
+tc_sim_interactive_doc = """
: boolean
Default False
Whether to simulate interactive mode for purposes of testing
@@ -155,7 +155,7 @@
with cf.config_prefix('mode'):
cf.register_option('sim_interactive', False, tc_sim_interactive_doc)
-use_inf_as_null_doc="""
+use_inf_as_null_doc = """
: boolean
True means treat None, NaN, INF, -INF as null (old way),
False means None and NaN are null, but INF, -INF are not null
@@ -164,6 +164,8 @@
# we don't want to start importing evrything at the global context level
# or we'll hit circular deps.
+
+
def use_inf_as_null_cb(key):
from pandas.core.common import _use_inf_as_null
_use_inf_as_null(key)
diff --git a/pandas/core/daterange.py b/pandas/core/daterange.py
index bfed7fcc6a734..954d72defdbbb 100644
--- a/pandas/core/daterange.py
+++ b/pandas/core/daterange.py
@@ -18,7 +18,7 @@ def __new__(cls, start=None, end=None, periods=None,
import warnings
warnings.warn("DateRange is deprecated, use DatetimeIndex instead",
- FutureWarning)
+ FutureWarning)
if time_rule is None:
time_rule = kwds.get('timeRule')
diff --git a/pandas/core/format.py b/pandas/core/format.py
index 862dea1ba5ed4..cc931545e6bd0 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -89,8 +89,9 @@ def _get_footer(self):
footer += ', '
series_name = com.pprint_thing(self.series.name,
- escape_chars=('\t','\r','\n'))
- footer += ("Name: %s" % series_name) if self.series.name is not None else ""
+ escape_chars=('\t', '\r', '\n'))
+ footer += ("Name: %s" %
+ series_name) if self.series.name is not None else ""
if self.length:
if footer:
@@ -153,6 +154,7 @@ def _encode_diff_func():
_encode_diff = lambda x: 0
else:
encoding = get_option("display.encoding")
+
def _encode_diff(x):
return len(x) - len(x.decode(encoding))
@@ -164,6 +166,7 @@ def _strlen_func():
_strlen = len
else:
encoding = get_option("display.encoding")
+
def _strlen(x):
try:
return len(x.decode(encoding))
@@ -172,8 +175,8 @@ def _strlen(x):
return _strlen
-class TableFormatter(object):
+class TableFormatter(object):
def _get_formatter(self, i):
if isinstance(self.formatters, (list, tuple)):
@@ -283,8 +286,9 @@ def to_string(self, force_unicode=None):
"""
import warnings
if force_unicode is not None: # pragma: no cover
- warnings.warn("force_unicode is deprecated, it will have no effect",
- FutureWarning)
+ warnings.warn(
+ "force_unicode is deprecated, it will have no effect",
+ FutureWarning)
frame = self.frame
@@ -337,8 +341,9 @@ def to_latex(self, force_unicode=None, column_format=None):
"""
import warnings
if force_unicode is not None: # pragma: no cover
- warnings.warn("force_unicode is deprecated, it will have no effect",
- FutureWarning)
+ warnings.warn(
+ "force_unicode is deprecated, it will have no effect",
+ FutureWarning)
frame = self.frame
@@ -400,7 +405,7 @@ def is_numeric_dtype(dtype):
str_columns = zip(*[[' ' + y
if y not in self.formatters and need_leadsp[x]
else y for y in x]
- for x in fmt_columns])
+ for x in fmt_columns])
if self.sparsify:
str_columns = _sparsify(str_columns)
@@ -498,8 +503,8 @@ def write(self, s, indent=0):
def write_th(self, s, indent=0, tags=None):
if (self.fmt.col_space is not None
- and self.fmt.col_space > 0 ):
- tags = (tags or "" )
+ and self.fmt.col_space > 0):
+ tags = (tags or "")
tags += 'style="min-width: %s;"' % self.fmt.col_space
return self._write_cell(s, kind='th', indent=indent, tags=tags)
@@ -512,7 +517,8 @@ def _write_cell(self, s, kind='td', indent=0, tags=None):
start_tag = '<%s %s>' % (kind, tags)
else:
start_tag = '<%s>' % kind
- self.write('%s%s</%s>' % (start_tag, com.pprint_thing(s), kind), indent)
+ self.write(
+ '%s%s</%s>' % (start_tag, com.pprint_thing(s), kind), indent)
def write_tr(self, line, indent=0, indent_delta=4, header=False,
align=None, tags=None):
@@ -585,7 +591,7 @@ def _column_header():
row.append('')
style = "text-align: %s;" % self.fmt.justify
row.extend([single_column_table(c, self.fmt.justify, style) for
- c in self.columns])
+ c in self.columns])
else:
if self.fmt.index:
row.append(self.columns.name or '')
@@ -629,7 +635,7 @@ def _column_header():
align = self.fmt.justify
self.write_tr(col_row, indent, self.indent_delta, header=True,
- align=align)
+ align=align)
if self.fmt.has_index_names:
row = [x if x is not None else ''
@@ -750,7 +756,7 @@ def grouper(x):
return result
-#from collections import namedtuple
+# from collections import namedtuple
# ExcelCell = namedtuple("ExcelCell",
# 'row, col, val, style, mergestart, mergeend')
@@ -759,7 +765,7 @@ class ExcelCell(object):
__slots__ = __fields__
def __init__(self, row, col, val,
- style=None, mergestart=None, mergeend=None):
+ style=None, mergestart=None, mergeend=None):
self.row = row
self.col = col
self.val = val
@@ -769,11 +775,11 @@ def __init__(self, row, col, val,
header_style = {"font": {"bold": True},
- "borders": {"top": "thin",
- "right": "thin",
- "bottom": "thin",
- "left": "thin"},
- "alignment": {"horizontal": "center"}}
+ "borders": {"top": "thin",
+ "right": "thin",
+ "bottom": "thin",
+ "left": "thin"},
+ "alignment": {"horizontal": "center"}}
class ExcelFormatter(object):
@@ -832,7 +838,7 @@ def _format_header_mi(self):
return
levels = self.columns.format(sparsify=True, adjoin=False,
- names=False)
+ names=False)
# level_lenghts = _get_level_lengths(levels)
coloffset = 1
if isinstance(self.df.index, MultiIndex):
@@ -847,12 +853,12 @@ def _format_header_mi(self):
# yield ExcelCell(lnum,coloffset + i + 1, values[i],
# header_style, lnum, coloffset + i + records[i])
# else:
- # yield ExcelCell(lnum, coloffset + i + 1, values[i], header_style)
+ # yield ExcelCell(lnum, coloffset + i + 1, values[i], header_style)
# self.rowcounter = lnum
- lnum=0
- for i, values in enumerate(zip(*levels)):
- v = ".".join(map(com.pprint_thing,values))
+ lnum = 0
+ for i, values in enumerate(zip(*levels)):
+ v = ".".join(map(com.pprint_thing, values))
yield ExcelCell(lnum, coloffset + i, v, header_style)
self.rowcounter = lnum
@@ -870,7 +876,7 @@ def _format_header_regular(self):
if has_aliases:
if len(self.header) != len(self.columns):
raise ValueError(('Writing %d cols but got %d aliases'
- % (len(self.columns), len(self.header))))
+ % (len(self.columns), len(self.header))))
else:
colnames = self.header
@@ -907,20 +913,20 @@ def _format_regular_rows(self):
self.rowcounter += 1
coloffset = 0
- #output index and index_label?
+ # output index and index_label?
if self.index:
- #chek aliases
- #if list only take first as this is not a MultiIndex
+            # check aliases
+ # if list only take first as this is not a MultiIndex
if self.index_label and isinstance(self.index_label,
(list, tuple, np.ndarray)):
index_label = self.index_label[0]
- #if string good to go
+ # if string good to go
elif self.index_label and isinstance(self.index_label, str):
index_label = self.index_label
else:
index_label = self.df.index.names[0]
- if index_label and self.header != False:
+ if index_label and self.header is not False:
# add to same level as column names
# if isinstance(self.df.columns, MultiIndex):
# yield ExcelCell(self.rowcounter, 0,
@@ -928,9 +934,9 @@ def _format_regular_rows(self):
# self.rowcounter += 1
# else:
yield ExcelCell(self.rowcounter - 1, 0,
- index_label, header_style)
+ index_label, header_style)
- #write index_values
+ # write index_values
index_values = self.df.index
if isinstance(self.df.index, PeriodIndex):
index_values = self.df.index.to_timestamp()
@@ -950,16 +956,17 @@ def _format_hierarchical_rows(self):
self.rowcounter += 1
gcolidx = 0
- #output index and index_label?
+ # output index and index_label?
if self.index:
index_labels = self.df.index.names
- #check for aliases
+ # check for aliases
if self.index_label and isinstance(self.index_label,
(list, tuple, np.ndarray)):
index_labels = self.index_label
- #if index labels are not empty go ahead and dump
- if filter(lambda x: x is not None, index_labels) and self.header != False:
+ # if index labels are not empty go ahead and dump
+ if (filter(lambda x: x is not None, index_labels)
+ and self.header is not False):
# if isinstance(self.df.columns, MultiIndex):
# self.rowcounter += 1
# else:
@@ -981,7 +988,7 @@ def _format_hierarchical_rows(self):
yield ExcelCell(self.rowcounter + i, gcolidx + colidx, val)
def get_formatted_cells(self):
- for cell in itertools.chain(self._format_header(),self._format_body()
+ for cell in itertools.chain(self._format_header(), self._format_body()
):
cell.val = self._format_value(cell.val)
yield cell
@@ -1043,8 +1050,8 @@ def _format_strings(self):
else:
float_format = self.float_format
- formatter = (lambda x: com.pprint_thing(x,escape_chars=('\t','\r','\n'))) \
- if self.formatter is None else self.formatter
+ formatter = (lambda x: com.pprint_thing(x, escape_chars=('\t', '\r', '\n'))) \
+ if self.formatter is None else self.formatter
def _format(x):
if self.na_rep is not None and lib.checknull(x):
@@ -1197,7 +1204,7 @@ def _trim_zeros(str_floats, na_rep='NaN'):
def _cond(values):
non_na = [x for x in values if x != na_rep]
return (len(non_na) > 0 and all([x.endswith('0') for x in non_na]) and
- not(any([('e' in x) or ('E' in x) for x in non_na])))
+ not(any([('e' in x) or ('E' in x) for x in non_na])))
while _cond(trimmed):
trimmed = [x[:-1] if x != na_rep else x for x in trimmed]
@@ -1242,7 +1249,7 @@ def set_printoptions(precision=None, column_space=None, max_rows=None,
max_columns=None, colheader_justify=None,
max_colwidth=None, notebook_repr_html=None,
date_dayfirst=None, date_yearfirst=None,
- pprint_nest_depth=None,multi_sparse=None, encoding=None):
+ pprint_nest_depth=None, multi_sparse=None, encoding=None):
"""
Alter default behavior of DataFrame.toString
@@ -1276,7 +1283,7 @@ def set_printoptions(precision=None, column_space=None, max_rows=None,
"""
import warnings
warnings.warn("set_printoptions is deprecated, use set_option instead",
- FutureWarning)
+ FutureWarning)
if precision is not None:
set_option("display.precision", precision)
if column_space is not None:
@@ -1302,12 +1309,14 @@ def set_printoptions(precision=None, column_space=None, max_rows=None,
if encoding is not None:
set_option("display.encoding", encoding)
+
def reset_printoptions():
import warnings
warnings.warn("reset_printoptions is deprecated, use reset_option instead",
- FutureWarning)
+ FutureWarning)
reset_option("^display\.")
+
def detect_console_encoding():
"""
Try to find the most capable encoding supported by the console.
@@ -1321,17 +1330,18 @@ def detect_console_encoding():
except AttributeError:
pass
- if not encoding or encoding == 'ascii': # try again for something better
+ if not encoding or encoding == 'ascii': # try again for something better
try:
encoding = locale.getpreferredencoding()
except Exception:
pass
- if not encoding: # when all else fails. this will usually be "ascii"
+ if not encoding: # when all else fails. this will usually be "ascii"
encoding = sys.getdefaultencoding()
return encoding
+
class EngFormatter(object):
"""
Formats float values according to engineering format.
@@ -1346,19 +1356,19 @@ class EngFormatter(object):
-18: "a",
-15: "f",
-12: "p",
- -9: "n",
- -6: "u",
- -3: "m",
- 0: "",
- 3: "k",
- 6: "M",
- 9: "G",
- 12: "T",
- 15: "P",
- 18: "E",
- 21: "Z",
- 24: "Y"
- }
+ -9: "n",
+ -6: "u",
+ -3: "m",
+ 0: "",
+ 3: "k",
+ 6: "M",
+ 9: "G",
+ 12: "T",
+ 15: "P",
+ 18: "E",
+ 21: "Z",
+ 24: "Y"
+ }
def __init__(self, accuracy=None, use_eng_prefix=False):
self.accuracy = accuracy
@@ -1421,7 +1431,7 @@ def __call__(self, num):
formatted = format_str % (mant, prefix)
- return formatted #.strip()
+ return formatted # .strip()
def set_eng_float_format(precision=None, accuracy=3, use_eng_prefix=False):
@@ -1441,11 +1451,13 @@ def set_eng_float_format(precision=None, accuracy=3, use_eng_prefix=False):
set_option("display.float_format", EngFormatter(accuracy, use_eng_prefix))
set_option("display.column_space", max(12, accuracy + 9))
+
def _put_lines(buf, lines):
if any(isinstance(x, unicode) for x in lines):
lines = [unicode(x) for x in lines]
buf.write('\n'.join(lines))
+
def _binify(cols, width):
bins = []
curr_width = 0
@@ -1462,12 +1474,12 @@ def _binify(cols, width):
arr = np.array([746.03, 0.00, 5620.00, 1592.36])
# arr = np.array([11111111.1, 1.55])
# arr = [314200.0034, 1.4125678]
- arr = np.array([ 327763.3119, 345040.9076, 364460.9915, 398226.8688,
- 383800.5172, 433442.9262, 539415.0568, 568590.4108,
- 599502.4276, 620921.8593, 620898.5294, 552427.1093,
- 555221.2193, 519639.7059, 388175.7 , 379199.5854,
- 614898.25 , 504833.3333, 560600. , 941214.2857,
- 1134250. , 1219550. , 855736.85 , 1042615.4286,
- 722621.3043, 698167.1818, 803750. ])
+ arr = np.array([327763.3119, 345040.9076, 364460.9915, 398226.8688,
+ 383800.5172, 433442.9262, 539415.0568, 568590.4108,
+ 599502.4276, 620921.8593, 620898.5294, 552427.1093,
+ 555221.2193, 519639.7059, 388175.7, 379199.5854,
+ 614898.25, 504833.3333, 560600., 941214.2857,
+ 1134250., 1219550., 855736.85, 1042615.4286,
+ 722621.3043, 698167.1818, 803750.])
fmt = FloatArrayFormatter(arr, digits=7)
print fmt.get_result()
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 1eaa009275808..d81ceaeec8a7a 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -181,6 +181,8 @@ class DataConflictError(Exception):
#----------------------------------------------------------------------
# Factory helper methods
+
+
def _arith_method(op, name, default_axis='columns'):
def na_op(x, y):
try:
@@ -601,7 +603,7 @@ def _need_info_repr_(self):
if max_columns > 0:
if (len(self.index) <= max_rows and
- (len(self.columns) <= max_columns and expand_repr)):
+ (len(self.columns) <= max_columns and expand_repr)):
return False
else:
return True
@@ -618,7 +620,7 @@ def _need_info_repr_(self):
value = buf.getvalue()
if (max([len(l) for l in value.split('\n')]) > terminal_width
and com.in_interactive_session()
- and not expand_repr):
+ and not expand_repr):
return True
else:
return False
@@ -643,7 +645,7 @@ def __bytes__(self):
Yields a bytestring in both py2/py3.
"""
encoding = com.get_option("display.encoding")
- return self.__unicode__().encode(encoding , 'replace')
+ return self.__unicode__().encode(encoding, 'replace')
def __unicode__(self):
"""
@@ -768,18 +770,18 @@ def __contains__(self, key):
__sub__ = _arith_method(operator.sub, '__sub__', default_axis=None)
__mul__ = _arith_method(operator.mul, '__mul__', default_axis=None)
__truediv__ = _arith_method(operator.truediv, '__truediv__',
- default_axis=None)
+ default_axis=None)
__floordiv__ = _arith_method(operator.floordiv, '__floordiv__',
- default_axis=None)
+ default_axis=None)
__pow__ = _arith_method(operator.pow, '__pow__', default_axis=None)
__radd__ = _arith_method(_radd_compat, '__radd__', default_axis=None)
__rmul__ = _arith_method(operator.mul, '__rmul__', default_axis=None)
__rsub__ = _arith_method(lambda x, y: y - x, '__rsub__', default_axis=None)
__rtruediv__ = _arith_method(lambda x, y: y / x, '__rtruediv__',
- default_axis=None)
+ default_axis=None)
__rfloordiv__ = _arith_method(lambda x, y: y // x, '__rfloordiv__',
- default_axis=None)
+ default_axis=None)
__rpow__ = _arith_method(lambda x, y: y ** x, '__rpow__',
default_axis=None)
@@ -832,7 +834,7 @@ def dot(self, other):
if isinstance(other, (Series, DataFrame)):
common = self.columns.union(other.index)
if (len(common) > len(self.columns) or
- len(common) > len(other.index)):
+ len(common) > len(other.index)):
raise ValueError('matrices are not aligned')
left = self.reindex(columns=common, copy=False)
@@ -887,7 +889,7 @@ def from_dict(cls, data, orient='columns', dtype=None):
orient = orient.lower()
if orient == 'index':
if len(data) > 0:
- #TODO speed up Series case
+ # TODO speed up Series case
if isinstance(data.values()[0], (Series, dict)):
data = _from_nested_dict(data)
else:
@@ -918,7 +920,7 @@ def to_dict(self, outtype='dict'):
import warnings
if not self.columns.is_unique:
warnings.warn("DataFrame columns are not unique, some "
- "columns will be omitted.",UserWarning)
+ "columns will be omitted.", UserWarning)
if outtype.lower().startswith('d'):
return dict((k, v.to_dict()) for k, v in self.iteritems())
elif outtype.lower().startswith('l'):
@@ -1025,7 +1027,7 @@ def from_records(cls, data, index=None, exclude=None, columns=None,
result_index = None
if index is not None:
if (isinstance(index, basestring) or
- not hasattr(index, "__iter__")):
+ not hasattr(index, "__iter__")):
i = columns.get_loc(index)
exclude.add(index)
result_index = Index(arrays[i], name=index)
@@ -1221,7 +1223,7 @@ def to_panel(self):
# only support this kind for now
if (not isinstance(self.index, MultiIndex) or
- len(self.index.levels) != 2):
+ len(self.index.levels) != 2):
raise AssertionError('Must have 2-level MultiIndex')
if not self.index.is_unique:
@@ -1264,8 +1266,8 @@ def to_panel(self):
to_wide = deprecate('to_wide', to_panel)
def _helper_csv(self, writer, na_rep=None, cols=None,
- header=True, index=True,
- index_label=None, float_format=None):
+ header=True, index=True,
+ index_label=None, float_format=None):
if cols is None:
cols = self.columns
@@ -1406,9 +1408,9 @@ def to_csv(self, path_or_buf, sep=",", na_rep='', float_format=None,
csvout = csv.writer(f, lineterminator=line_terminator,
delimiter=sep, quoting=quoting)
self._helper_csv(csvout, na_rep=na_rep,
- float_format=float_format, cols=cols,
- header=header, index=index,
- index_label=index_label)
+ float_format=float_format, cols=cols,
+ header=header, index=index,
+ index_label=index_label)
finally:
if close:
@@ -2014,7 +2016,7 @@ def _getitem_multilevel(self, key):
if len(result.columns) == 1:
top = result.columns[0]
if ((type(top) == str and top == '') or
- (type(top) == tuple and top[0] == '')):
+ (type(top) == tuple and top[0] == '')):
result = result['']
if isinstance(result, Series):
result = Series(result, index=self.index, name=key)
@@ -2163,7 +2165,7 @@ def _sanitize_column(self, key, value):
# special case for now
if (com.is_float_dtype(existing_piece) and
- com.is_integer_dtype(value)):
+ com.is_integer_dtype(value)):
value = value.astype(np.float64)
else:
@@ -2523,7 +2525,7 @@ def reindex(self, index=None, columns=None, method=None, level=None,
if (index is not None and columns is not None
and method is None and level is None
- and not self._is_mixed_type):
+ and not self._is_mixed_type):
return self._reindex_multi(index, columns, copy, fill_value)
if columns is not None:
@@ -3154,8 +3156,9 @@ def sort_index(self, axis=0, by=None, ascending=True, inplace=False):
if inplace:
if axis == 1:
- self._data = self._data.reindex_items(self._data.items[indexer],
- copy=False)
+ self._data = self._data.reindex_items(
+ self._data.items[indexer],
+ copy=False)
elif axis == 0:
self._data = self._data.take(indexer)
@@ -3195,8 +3198,9 @@ def sortlevel(self, level=0, axis=0, ascending=True, inplace=False):
if inplace:
if axis == 1:
- self._data = self._data.reindex_items(self._data.items[indexer],
- copy=False)
+ self._data = self._data.reindex_items(
+ self._data.items[indexer],
+ copy=False)
elif axis == 0:
self._data = self._data.take(indexer)
@@ -3599,7 +3603,7 @@ def _combine_series_infer(self, other, func, fill_value=None):
"by default is deprecated. Please use "
"DataFrame.<op> to explicitly broadcast arithmetic "
"operations along the index"),
- FutureWarning)
+ FutureWarning)
return self._combine_match_index(other, func, fill_value)
else:
return self._combine_match_columns(other, func, fill_value)
@@ -3674,7 +3678,7 @@ def combine(self, other, func, fill_value=None):
result : DataFrame
"""
- other_idxlen = len(other.index) # save for compare
+ other_idxlen = len(other.index) # save for compare
this, other = self.align(other, copy=False)
new_index = this.index
@@ -3735,7 +3739,7 @@ def combine_first(self, other):
return self.combine(other, combiner)
def update(self, other, join='left', overwrite=True, filter_func=None,
- raise_conflict=False):
+ raise_conflict=False):
"""
Modify DataFrame in place using non-NA values from passed
DataFrame. Aligns on indices
@@ -4187,7 +4191,7 @@ def _apply_standard(self, func, axis, ignore_failures=False):
result = self._constructor(data=results, index=index)
result.rename(columns=dict(zip(range(len(res_index)), res_index)),
- inplace=True)
+ inplace=True)
if axis == 1:
result = result.T
@@ -4235,7 +4239,7 @@ def applymap(self, func):
-------
applied : DataFrame
"""
-
+
# if we have a dtype == 'M8[ns]', provide boxed values
def infer(x):
if x.dtype == 'M8[ns]':
@@ -4547,7 +4551,7 @@ def describe(self, percentile_width=50):
if len(numdata.columns) == 0:
return DataFrame(dict((k, v.describe())
for k, v in self.iteritems()),
- columns=self.columns)
+ columns=self.columns)
lb = .5 * (1. - percentile_width / 100.)
ub = 1. - lb
@@ -4781,7 +4785,7 @@ def mad(self, axis=0, skipna=True, level=None):
@Substitution(name='variance', shortname='var',
na_action=_doc_exclude_na, extras='')
@Appender(_stat_doc +
- """
+ """
Normalized by N-1 (unbiased estimator).
""")
def var(self, axis=0, skipna=True, level=None, ddof=1):
@@ -4794,7 +4798,7 @@ def var(self, axis=0, skipna=True, level=None, ddof=1):
@Substitution(name='standard deviation', shortname='std',
na_action=_doc_exclude_na, extras='')
@Appender(_stat_doc +
- """
+ """
Normalized by N-1 (unbiased estimator).
""")
def std(self, axis=0, skipna=True, level=None, ddof=1):
@@ -4932,7 +4936,7 @@ def _get_numeric_data(self):
return DataFrame(num_data, copy=False)
else:
if (self.values.dtype != np.object_ and
- not issubclass(self.values.dtype.type, np.datetime64)):
+ not issubclass(self.values.dtype.type, np.datetime64)):
return self
else:
return self.ix[:, []]
@@ -5351,7 +5355,7 @@ def extract_index(data):
if have_series:
if lengths[0] != len(index):
msg = ('array length %d does not match index length %d'
- % (lengths[0], len(index)))
+ % (lengths[0], len(index)))
raise ValueError(msg)
else:
index = Index(np.arange(lengths[0]))
@@ -5414,7 +5418,7 @@ def _to_arrays(data, columns, coerce_float=False, dtype=None):
return arrays, columns
if len(data) == 0:
- return [], [] # columns if columns is not None else []
+ return [], [] # columns if columns is not None else []
if isinstance(data[0], (list, tuple)):
return _list_to_arrays(data, columns, coerce_float=coerce_float,
dtype=dtype)
@@ -5557,15 +5561,17 @@ def _homogenize(data, index, dtype=None):
return homogenized
+
def _from_nested_dict(data):
# TODO: this should be seriously cythonized
new_data = OrderedDict()
for index, s in data.iteritems():
for col, v in s.iteritems():
- new_data[col]= new_data.get(col,OrderedDict())
+ new_data[col] = new_data.get(col, OrderedDict())
new_data[col][index] = v
return new_data
+
def _put_str(s, space):
return ('%s' % s)[:space].ljust(space)
@@ -5577,8 +5583,8 @@ def install_ipython_completers(): # pragma: no cover
@complete_object.when_type(DataFrame)
def complete_dataframe(obj, prev_completions):
- return prev_completions + [c for c in obj.columns \
- if isinstance(c, basestring) and py3compat.isidentifier(c)]
+ return prev_completions + [c for c in obj.columns
+ if isinstance(c, basestring) and py3compat.isidentifier(c)]
# Importing IPython brings in about 200 modules, so we want to avoid it unless
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index 23b65d4d674a4..54d6c24051707 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -39,6 +39,7 @@
aggregated : DataFrame
"""
+
class GroupByError(Exception):
pass
@@ -563,7 +564,6 @@ def _get_splitter(self, data, axis=0, keep_internal=True):
return get_splitter(data, comp_ids, ngroups, axis=axis,
keep_internal=keep_internal)
-
def _get_group_keys(self):
if len(self.groupings) == 1:
return self.levels[0]
@@ -777,7 +777,7 @@ def aggregate(self, values, how, axis=0):
(counts > 0).view(np.uint8))
else:
result = lib.row_bool_subset_object(result,
- (counts > 0).view(np.uint8))
+ (counts > 0).view(np.uint8))
else:
result = result[counts > 0]
@@ -962,7 +962,6 @@ def get_iterator(self, data, axis=0):
inds = range(edge, n)
yield self.binlabels[-1], data.take(inds, axis=axis)
-
def apply(self, f, data, axis=0, keep_internal=False):
result_keys = []
result_values = []
@@ -1228,7 +1227,7 @@ def _get_grouper(obj, key=None, axis=0, level=None, sort=True):
if (not any_callable and not all_in_columns
and not any_arraylike and match_axis_length
- and level is None):
+ and level is None):
keys = [com._asarray_tuplesafe(keys)]
if isinstance(level, (tuple, list)):
@@ -1759,9 +1758,9 @@ def _wrap_applied_output(self, keys, values, not_indexed_same=False):
if isinstance(values[0], np.ndarray):
if (isinstance(values[0], Series) and
- not _all_indexes_same([x.index for x in values])):
+ not _all_indexes_same([x.index for x in values])):
return self._concat_objects(keys, values,
- not_indexed_same=not_indexed_same)
+ not_indexed_same=not_indexed_same)
if self.axis == 0:
stacked_values = np.vstack([np.asarray(x)
@@ -2118,6 +2117,7 @@ class SeriesSplitter(DataSplitter):
def _chop(self, sdata, slice_obj):
return sdata._get_values(slice_obj)
+
class FrameSplitter(DataSplitter):
def __init__(self, data, labels, ngroups, axis=0, keep_internal=False):
@@ -2141,7 +2141,8 @@ def _chop(self, sdata, slice_obj):
if self.axis == 0:
return sdata[slice_obj]
else:
- return sdata._slice(slice_obj, axis=1) # ix[:, slice_obj]
+ return sdata._slice(slice_obj, axis=1) # ix[:, slice_obj]
+
class NDFrameSplitter(DataSplitter):
@@ -2201,6 +2202,8 @@ def get_group_index(label_list, shape):
return group_index
_INT64_MAX = np.iinfo(np.int64).max
+
+
def _int64_overflow_possible(shape):
the_prod = 1L
for x in shape:
@@ -2294,7 +2297,6 @@ def get_key(self, comp_id):
for table, level in izip(self.tables, self.levels))
-
def _get_indices_dict(label_list, keys):
shape = [len(x) for x in keys]
group_index = get_group_index(label_list, shape)
@@ -2369,6 +2371,7 @@ def _reorder_by_uniques(uniques, labels):
np.median: 'median'
}
+
def _is_numeric_dtype(dt):
typ = dt.type
return (issubclass(typ, (np.number, np.bool_))
@@ -2413,7 +2416,7 @@ def install_ipython_completers(): # pragma: no cover
@complete_object.when_type(DataFrameGroupBy)
def complete_dataframe(obj, prev_completions):
return prev_completions + [c for c in obj.obj.columns
- if isinstance(c, basestring) and py3compat.isidentifier(c)]
+ if isinstance(c, basestring) and py3compat.isidentifier(c)]
# Importing IPython brings in about 200 modules, so we want to avoid it unless
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 8ac5f6feb9d82..7d9bc7eccff2d 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -121,7 +121,7 @@ def __new__(cls, data, dtype=None, copy=False, name=None):
return Int64Index(subarr.astype('i8'), name=name)
elif inferred != 'string':
if (inferred.startswith('datetime') or
- tslib.is_timestamp_array(subarr)):
+ tslib.is_timestamp_array(subarr)):
from pandas.tseries.index import DatetimeIndex
return DatetimeIndex(subarr, copy=copy, name=name)
@@ -162,7 +162,7 @@ def __bytes__(self):
Yields a bytestring in both py2/py3.
"""
encoding = com.get_option("display.encoding")
- return self.__unicode__().encode(encoding , 'replace')
+ return self.__unicode__().encode(encoding, 'replace')
def __unicode__(self):
"""
@@ -175,7 +175,7 @@ def __unicode__(self):
else:
data = self
- prepr = com.pprint_thing(data, escape_chars=('\t','\r','\n'))
+ prepr = com.pprint_thing(data, escape_chars=('\t', '\r', '\n'))
return '%s(%s, dtype=%s)' % (type(self).__name__, prepr, self.dtype)
def __repr__(self):
@@ -426,7 +426,7 @@ def format(self, name=False, formatter=None):
header = []
if name:
header.append(com.pprint_thing(self.name,
- escape_chars=('\t','\r','\n'))
+ escape_chars=('\t', '\r', '\n'))
if self.name is not None else '')
if formatter is not None:
@@ -447,7 +447,7 @@ def format(self, name=False, formatter=None):
values = lib.maybe_convert_objects(values, safe=1)
if values.dtype == np.object_:
- result = [com.pprint_thing(x,escape_chars=('\t','\r','\n'))
+ result = [com.pprint_thing(x, escape_chars=('\t', '\r', '\n'))
for x in values]
else:
result = _trim_front(format_array(values, None, justify='left'))
@@ -1300,7 +1300,8 @@ class MultiIndex(Index):
def __new__(cls, levels=None, labels=None, sortorder=None, names=None):
if len(levels) != len(labels):
- raise AssertionError('Length of levels and labels must be the same')
+ raise AssertionError(
+ 'Length of levels and labels must be the same')
if len(levels) == 0:
raise Exception('Must pass non-zero number of levels/labels')
@@ -1383,7 +1384,7 @@ def __bytes__(self):
Yields a bytestring in both py2/py3.
"""
encoding = com.get_option("display.encoding")
- return self.__unicode__().encode(encoding , 'replace')
+ return self.__unicode__().encode(encoding, 'replace')
def __unicode__(self):
"""
@@ -1402,7 +1403,7 @@ def __unicode__(self):
else:
values = self.values
- summary = com.pprint_thing(values, escape_chars=('\t','\r','\n'))
+ summary = com.pprint_thing(values, escape_chars=('\t', '\r', '\n'))
np.set_printoptions(threshold=options['threshold'])
@@ -1557,7 +1558,7 @@ def get_level_values(self, level):
values : ndarray
"""
num = self._get_level_number(level)
- unique_vals = self.levels[num] # .values
+ unique_vals = self.levels[num] # .values
labels = self.labels[num]
return unique_vals.take(labels)
@@ -1572,7 +1573,7 @@ def format(self, space=2, sparsify=None, adjoin=True, names=False,
formatted = lev.take(lab).format(formatter=formatter)
else:
# weird all NA case
- formatted = [com.pprint_thing(x,escape_chars=('\t','\r','\n'))
+ formatted = [com.pprint_thing(x, escape_chars=('\t', '\r', '\n'))
for x in com.take_1d(lev.values, lab)]
stringified_levels.append(formatted)
@@ -1581,7 +1582,7 @@ def format(self, space=2, sparsify=None, adjoin=True, names=False,
level = []
if names:
- level.append(com.pprint_thing(name,escape_chars=('\t','\r','\n'))
+ level.append(com.pprint_thing(name, escape_chars=('\t', '\r', '\n'))
if name is not None else '')
level.extend(np.array(lev, dtype=object))
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 53eb18c12f172..a2aca3be39811 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -10,9 +10,11 @@
# "null slice"
_NS = slice(None, None)
+
class IndexingError(Exception):
pass
+
class _NDFrameIndexer(object):
def __init__(self, obj):
@@ -36,7 +38,7 @@ def __getitem__(self, key):
def _get_label(self, label, axis=0):
# ueber-hack
if (isinstance(label, tuple) and
- isinstance(label[axis], slice)):
+ isinstance(label[axis], slice)):
raise IndexingError('no slices here')
@@ -111,7 +113,8 @@ def _setitem_with_indexer(self, indexer, value):
data = self.obj[item]
values = data.values
if np.prod(values.shape):
- value = com._possibly_cast_to_datetime(value,getattr(data,'dtype',None))
+ value = com._possibly_cast_to_datetime(
+ value, getattr(data, 'dtype', None))
values[plane_indexer] = value
except ValueError:
for item, v in zip(item_labels[het_idx], value):
@@ -251,7 +254,7 @@ def _multi_take(self, tup):
elif isinstance(self.obj, Panel4D):
conv = [self._convert_for_reindex(x, axis=i)
for i, x in enumerate(tup)]
- return self.obj.reindex(labels=tup[0],items=tup[1], major=tup[2], minor=tup[3])
+ return self.obj.reindex(labels=tup[0], items=tup[1], major=tup[2], minor=tup[3])
elif isinstance(self.obj, Panel):
conv = [self._convert_for_reindex(x, axis=i)
for i, x in enumerate(tup)]
@@ -312,7 +315,7 @@ def _getitem_lowerdim(self, tup):
# unfortunately need an odious kludge here because of
# DataFrame transposing convention
if (isinstance(section, DataFrame) and i > 0
- and len(new_key) == 2):
+ and len(new_key) == 2):
a, b = new_key
new_key = b, a
@@ -399,7 +402,7 @@ def _reindex(keys, level=None):
# this is not the most robust, but...
if (isinstance(labels, MultiIndex) and
- not isinstance(keyarr[0], tuple)):
+ not isinstance(keyarr[0], tuple)):
level = 0
else:
level = None
@@ -500,7 +503,7 @@ def _convert_to_indexer(self, obj, axis=0):
# this is not the most robust, but...
if (isinstance(labels, MultiIndex) and
- not isinstance(objarr[0], tuple)):
+ not isinstance(objarr[0], tuple)):
level = 0
_, indexer = labels.reindex(objarr, level=level)
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 57844656bf113..e3031b58ff286 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -11,6 +11,7 @@
from pandas.util import py3compat
+
class Block(object):
"""
Canonical n-dimensional unit of homogeneous dtype contained in a pandas
@@ -64,8 +65,8 @@ def set_ref_items(self, ref_items, maybe_rename=True):
def __repr__(self):
shape = ' x '.join([com.pprint_thing(s) for s in self.shape])
name = type(self).__name__
- result = '%s: %s, %s, dtype %s' % (name, com.pprint_thing(self.items)
- , shape, self.dtype)
+ result = '%s: %s, %s, dtype %s' % (
+ name, com.pprint_thing(self.items), shape, self.dtype)
if py3compat.PY3:
return unicode(result)
return com.console_encode(result)
@@ -194,12 +195,12 @@ def split_block_at(self, item):
loc = self.items.get_loc(item)
if type(loc) == slice or type(loc) == int:
- mask = [True]*len(self)
+ mask = [True] * len(self)
mask[loc] = False
- else: # already a mask, inverted
+ else: # already a mask, inverted
mask = -loc
- for s,e in com.split_ranges(mask):
+ for s, e in com.split_ranges(mask):
yield make_block(self.values[s:e],
self.items[s:e].copy(),
self.ref_items)
@@ -270,7 +271,7 @@ def interpolate(self, method='pad', axis=0, inplace=False,
if missing is None:
mask = None
- else: # todo create faster fill func without masking
+ else: # todo create faster fill func without masking
mask = _mask_missing(transf(values), missing)
if method == 'pad':
@@ -323,7 +324,7 @@ def _can_hold_element(self, element):
def _try_cast(self, element):
try:
return float(element)
- except: # pragma: no cover
+ except: # pragma: no cover
return element
def should_store(self, value):
@@ -341,7 +342,7 @@ def _can_hold_element(self, element):
def _try_cast(self, element):
try:
return complex(element)
- except: # pragma: no cover
+ except: # pragma: no cover
return element
def should_store(self, value):
@@ -357,7 +358,7 @@ def _can_hold_element(self, element):
def _try_cast(self, element):
try:
return int(element)
- except: # pragma: no cover
+ except: # pragma: no cover
return element
def should_store(self, value):
@@ -373,7 +374,7 @@ def _can_hold_element(self, element):
def _try_cast(self, element):
try:
return bool(element)
- except: # pragma: no cover
+ except: # pragma: no cover
return element
def should_store(self, value):
@@ -396,6 +397,7 @@ def should_store(self, value):
_NS_DTYPE = np.dtype('M8[ns]')
+
class DatetimeBlock(Block):
_can_hold_na = True
@@ -467,11 +469,12 @@ def make_block(values, items, ref_items):
inferred_type = lib.infer_dtype(flat)
if inferred_type == 'datetime':
- # we have an object array that has been inferred as datetime, so convert it
+ # we have an object array that has been inferred as datetime, so
+ # convert it
try:
values = tslib.array_to_datetime(flat).reshape(values.shape)
klass = DatetimeBlock
- except: # it already object, so leave it
+ except: # it already object, so leave it
pass
if klass is None:
@@ -511,7 +514,6 @@ def __init__(self, blocks, axes, do_integrity_check=True):
'equal number of axes (%d)')
% (block.values.ndim, ndim))
-
if do_integrity_check:
self._verify_integrity()
@@ -674,12 +676,12 @@ def _get_clean_block_types(self, type_list):
except TypeError:
type_list = (type_list,)
- type_map = {int : IntBlock, float : FloatBlock,
- complex : ComplexBlock,
- np.datetime64 : DatetimeBlock,
- datetime : DatetimeBlock,
- bool : BoolBlock,
- object : ObjectBlock}
+ type_map = {int: IntBlock, float: FloatBlock,
+ complex: ComplexBlock,
+ np.datetime64: DatetimeBlock,
+ datetime: DatetimeBlock,
+ bool: BoolBlock,
+ object: ObjectBlock}
type_list = tuple([type_map.get(t, t) for t in type_list])
return type_list
@@ -890,14 +892,14 @@ def iget(self, i):
# ugh
try:
inds, = (self.items == item).nonzero()
- except AttributeError: #MultiIndex
+ except AttributeError: # MultiIndex
inds, = self.items.map(lambda x: x == item).nonzero()
_, block = self._find_block(item)
try:
binds, = (block.items == item).nonzero()
- except AttributeError: #MultiIndex
+ except AttributeError: # MultiIndex
binds, = block.items.map(lambda x: x == item).nonzero()
for j, (k, b) in enumerate(zip(inds, binds)):
@@ -925,8 +927,8 @@ def delete(self, item):
loc = self.items.get_loc(item)
self._delete_from_block(i, item)
- if com._is_bool_indexer(loc): # dupe keys may return mask
- loc = [i for i,v in enumerate(loc) if v]
+ if com._is_bool_indexer(loc): # dupe keys may return mask
+ loc = [i for i, v in enumerate(loc) if v]
new_items = self.items.delete(loc)
@@ -960,7 +962,8 @@ def _set_item(item, arr):
else:
subset = self.items[loc]
if len(value) != len(subset):
- raise AssertionError('Number of items to set did not match')
+ raise AssertionError(
+ 'Number of items to set did not match')
for i, (item, arr) in enumerate(zip(subset, value)):
_set_item(item, arr[None, :])
except KeyError:
@@ -1005,7 +1008,7 @@ def _add_new_block(self, item, value, loc=None):
# hm, elaborate hack?
if loc is None:
loc = self.items.get_loc(item)
- new_block = make_block(value, self.items[loc:loc+1].copy(),
+ new_block = make_block(value, self.items[loc:loc + 1].copy(),
self.items)
self.blocks.append(new_block)
@@ -1311,6 +1314,7 @@ def item_dtypes(self):
raise AssertionError('Some items were not in any block')
return result
+
def form_blocks(arrays, names, axes):
# pre-filter out items if we passed it
items = axes[0]
@@ -1344,7 +1348,7 @@ def form_blocks(arrays, names, axes):
elif issubclass(v.dtype.type, np.integer):
if v.dtype == np.uint64:
# HACK #2355 definite overflow
- if (v > 2**63 - 1).any():
+ if (v > 2 ** 63 - 1).any():
object_items.append((k, v))
continue
int_items.append((k, v))
@@ -1392,14 +1396,16 @@ def form_blocks(arrays, names, axes):
return blocks
+
def _simple_blockify(tuples, ref_items, dtype):
block_items, values = _stack_arrays(tuples, ref_items, dtype)
# CHECK DTYPE?
- if values.dtype != dtype: # pragma: no cover
+ if values.dtype != dtype: # pragma: no cover
values = values.astype(dtype)
return make_block(values, block_items, ref_items)
+
def _stack_arrays(tuples, ref_items, dtype):
from pandas.core.series import Series
@@ -1432,6 +1438,7 @@ def _shape_compat(x):
return items, stacked
+
def _blocks_to_series_dict(blocks, index=None):
from pandas.core.series import Series
@@ -1442,6 +1449,7 @@ def _blocks_to_series_dict(blocks, index=None):
series_dict[item] = Series(vec, index=index, name=item)
return series_dict
+
def _interleaved_dtype(blocks):
from collections import defaultdict
counts = defaultdict(lambda: 0)
@@ -1458,7 +1466,7 @@ def _interleaved_dtype(blocks):
if (have_object or
(have_bool and have_numeric) or
- (have_numeric and have_dt64)):
+ (have_numeric and have_dt64)):
return np.dtype(object)
elif have_bool:
return np.dtype(bool)
@@ -1471,6 +1479,7 @@ def _interleaved_dtype(blocks):
else:
return np.dtype('f8')
+
def _consolidate(blocks, items):
"""
Merge blocks having same dtype
@@ -1499,6 +1508,7 @@ def _merge_blocks(blocks, items):
new_block = make_block(new_values, new_items, items)
return new_block.reindex_items_from(items)
+
def _vstack(to_stack):
if all(x.dtype == _NS_DTYPE for x in to_stack):
# work around NumPy 1.6 bug
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index ba0dc91f4aa6f..1315fc3ce2b76 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -32,7 +32,8 @@ def f(values, axis=None, skipna=True, **kwds):
if values.ndim == 1:
return 0
else:
- result_shape = values.shape[:axis] + values.shape[axis + 1:]
+                result_shape = (values.shape[:axis] +
+                                values.shape[axis + 1:])
result = np.empty(result_shape)
result.fill(0)
return result
@@ -51,6 +52,7 @@ def f(values, axis=None, skipna=True, **kwds):
return f
+
def _bn_ok_dtype(dt):
# Bottleneck chokes on datetime64
return dt != np.object_ and not issubclass(dt.type, np.datetime64)
@@ -167,7 +169,7 @@ def _nanmin(values, axis=None, skipna=True):
# numpy 1.6.1 workaround in Python 3.x
if (values.dtype == np.object_
- and sys.version_info[0] >= 3): # pragma: no cover
+ and sys.version_info[0] >= 3): # pragma: no cover
import __builtin__
if values.ndim > 1:
apply_ax = axis if axis is not None else 0
@@ -176,7 +178,7 @@ def _nanmin(values, axis=None, skipna=True):
result = __builtin__.min(values)
else:
if ((axis is not None and values.shape[axis] == 0)
- or values.size == 0):
+ or values.size == 0):
result = com.ensure_float(values.sum(axis))
result.fill(np.nan)
else:
@@ -205,7 +207,7 @@ def _nanmax(values, axis=None, skipna=True):
# numpy 1.6.1 workaround in Python 3.x
if (values.dtype == np.object_
- and sys.version_info[0] >= 3): # pragma: no cover
+ and sys.version_info[0] >= 3): # pragma: no cover
import __builtin__
if values.ndim > 1:
@@ -215,7 +217,7 @@ def _nanmax(values, axis=None, skipna=True):
result = __builtin__.max(values)
else:
if ((axis is not None and values.shape[axis] == 0)
- or values.size == 0):
+ or values.size == 0):
result = com.ensure_float(values.sum(axis))
result.fill(np.nan)
else:
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 15bdc56690039..6b867f9a643db 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -97,7 +97,7 @@ def _arith_method(func, name):
def f(self, other):
if not np.isscalar(other):
raise ValueError('Simple arithmetic with %s can only be '
- 'done with scalar values' % self._constructor.__name__)
+ 'done with scalar values' % self._constructor.__name__)
return self._combine(other, func)
f.__name__ = name
@@ -116,7 +116,7 @@ def na_op(x, y):
yrav = y.ravel()
mask = notnull(xrav) & notnull(yrav)
result[mask] = func(np.array(list(xrav[mask])),
- np.array(list(yrav[mask])))
+ np.array(list(yrav[mask])))
else:
mask = notnull(xrav)
result[mask] = func(np.array(list(xrav[mask])), y)
@@ -134,34 +134,35 @@ def f(self, other):
if isinstance(other, self._constructor):
return self._compare_constructor(other, func)
elif isinstance(other, (self._constructor_sliced, DataFrame, Series)):
- raise Exception("input needs alignment for this object [%s]" % self._constructor)
+ raise Exception("input needs alignment for this object [%s]" %
+ self._constructor)
else:
return self._combine_const(other, na_op)
-
f.__name__ = name
return f
+
class Panel(NDFrame):
- _AXIS_ORDERS = ['items','major_axis','minor_axis']
- _AXIS_NUMBERS = dict([ (a,i) for i, a in enumerate(_AXIS_ORDERS) ])
- _AXIS_ALIASES = {
- 'major' : 'major_axis',
- 'minor' : 'minor_axis'
+ _AXIS_ORDERS = ['items', 'major_axis', 'minor_axis']
+ _AXIS_NUMBERS = dict([(a, i) for i, a in enumerate(_AXIS_ORDERS)])
+ _AXIS_ALIASES = {
+ 'major': 'major_axis',
+ 'minor': 'minor_axis'
}
- _AXIS_NAMES = dict([ (i,a) for i, a in enumerate(_AXIS_ORDERS) ])
+ _AXIS_NAMES = dict([(i, a) for i, a in enumerate(_AXIS_ORDERS)])
_AXIS_SLICEMAP = {
- 'major_axis' : 'index',
- 'minor_axis' : 'columns'
- }
- _AXIS_LEN = len(_AXIS_ORDERS)
+ 'major_axis': 'index',
+ 'minor_axis': 'columns'
+ }
+ _AXIS_LEN = len(_AXIS_ORDERS)
# major
_default_stat_axis = 1
# info axis
- _het_axis = 0
+ _het_axis = 0
_info_axis = _AXIS_ORDERS[_het_axis]
items = lib.AxisProperty(0)
@@ -175,22 +176,23 @@ def _constructor(self):
# return the type of the slice constructor
_constructor_sliced = DataFrame
- def _construct_axes_dict(self, axes = None, **kwargs):
+ def _construct_axes_dict(self, axes=None, **kwargs):
""" return an axes dictionary for myself """
- d = dict([ (a,getattr(self,a)) for a in (axes or self._AXIS_ORDERS) ])
+ d = dict([(a, getattr(self, a)) for a in (axes or self._AXIS_ORDERS)])
d.update(kwargs)
return d
@staticmethod
def _construct_axes_dict_from(self, axes, **kwargs):
""" return an axes dictionary for the passed axes """
- d = dict([ (a,ax) for a,ax in zip(self._AXIS_ORDERS,axes) ])
+ d = dict([(a, ax) for a, ax in zip(self._AXIS_ORDERS, axes)])
d.update(kwargs)
return d
- def _construct_axes_dict_for_slice(self, axes = None, **kwargs):
+ def _construct_axes_dict_for_slice(self, axes=None, **kwargs):
""" return an axes dictionary for myself """
- d = dict([ (self._AXIS_SLICEMAP[a],getattr(self,a)) for a in (axes or self._AXIS_ORDERS) ])
+ d = dict([(self._AXIS_SLICEMAP[a], getattr(self, a))
+ for a in (axes or self._AXIS_ORDERS)])
d.update(kwargs)
return d
@@ -231,15 +233,16 @@ def __init__(self, data=None, items=None, major_axis=None, minor_axis=None,
copy : boolean, default False
Copy data from inputs. Only affects DataFrame / 2d ndarray input
"""
- self._init_data( data=data, items=items, major_axis=major_axis, minor_axis=minor_axis,
- copy=copy, dtype=dtype)
+        self._init_data(data=data, items=items, major_axis=major_axis,
+                        minor_axis=minor_axis, copy=copy,
+                        dtype=dtype)
def _init_data(self, data, copy, dtype, **kwargs):
""" generate ND initialization; axes are passed as required objects to __init__ """
if data is None:
data = {}
- passed_axes = [ kwargs.get(a) for a in self._AXIS_ORDERS ]
+ passed_axes = [kwargs.get(a) for a in self._AXIS_ORDERS]
axes = None
if isinstance(data, BlockManager):
if any(x is not None for x in passed_axes):
@@ -265,7 +268,7 @@ def _from_axes(cls, data, axes):
if isinstance(data, BlockManager):
return cls(data)
else:
- d = cls._construct_axes_dict_from(cls, axes, copy = False)
+ d = cls._construct_axes_dict_from(cls, axes, copy=False)
return cls(data, **d)
def _init_dict(self, data, axes, dtype=None):
@@ -289,7 +292,7 @@ def _init_dict(self, data, axes, dtype=None):
# shallow copy
arrays = []
- haxis_shape = [ len(a) for a in raxes ]
+ haxis_shape = [len(a) for a in raxes]
for h in haxis:
v = values = data.get(h)
if v is None:
@@ -304,7 +307,7 @@ def _init_dict(self, data, axes, dtype=None):
values = v.values
arrays.append(values)
- return self._init_arrays(arrays, haxis, [ haxis ] + raxes)
+ return self._init_arrays(arrays, haxis, [haxis] + raxes)
def _init_arrays(self, arrays, arr_names, axes):
# segregates dtypes and forms blocks matching to columns
@@ -314,7 +317,7 @@ def _init_arrays(self, arrays, arr_names, axes):
@property
def shape(self):
- return [ len(getattr(self,a)) for a in self._AXIS_ORDERS ]
+ return [len(getattr(self, a)) for a in self._AXIS_ORDERS]
@classmethod
def from_dict(cls, data, intersect=False, orient='items', dtype=None):
@@ -356,18 +359,19 @@ def from_dict(cls, data, intersect=False, orient='items', dtype=None):
return cls(**d)
def __getitem__(self, key):
- if isinstance(getattr(self,self._info_axis), MultiIndex):
+ if isinstance(getattr(self, self._info_axis), MultiIndex):
return self._getitem_multilevel(key)
return super(Panel, self).__getitem__(key)
def _getitem_multilevel(self, key):
- info = getattr(self,self._info_axis)
- loc = info.get_loc(key)
+ info = getattr(self, self._info_axis)
+ loc = info.get_loc(key)
if isinstance(loc, (slice, np.ndarray)):
- new_index = info[loc]
+ new_index = info[loc]
result_index = _maybe_droplevels(new_index, key)
- slices = [loc] + [slice(None) for x in range(self._AXIS_LEN-1)]
- new_values = self.values[slices]
+            slices = ([loc] +
+                      [slice(None) for x in range(self._AXIS_LEN - 1)])
+            new_values = self.values[slices]
d = self._construct_axes_dict(self._AXIS_ORDERS[1:])
d[self._info_axis] = result_index
@@ -405,14 +409,14 @@ def __array__(self, dtype=None):
return self.values
def __array_wrap__(self, result):
- d = self._construct_axes_dict(self._AXIS_ORDERS, copy = False)
+ d = self._construct_axes_dict(self._AXIS_ORDERS, copy=False)
return self._constructor(result, **d)
#----------------------------------------------------------------------
# Comparison methods
def _indexed_same(self, other):
- return all([ getattr(self,a).equals(getattr(other,a)) for a in self._AXIS_ORDERS ])
+ return all([getattr(self, a).equals(getattr(other, a)) for a in self._AXIS_ORDERS])
def _compare_constructor(self, other, func):
if not self._indexed_same(other):
@@ -420,10 +424,10 @@ def _compare_constructor(self, other, func):
'same type objects')
new_data = {}
- for col in getattr(self,self._info_axis):
+ for col in getattr(self, self._info_axis):
new_data[col] = func(self[col], other[col])
- d = self._construct_axes_dict(copy = False)
+ d = self._construct_axes_dict(copy=False)
return self._constructor(data=new_data, **d)
# boolean operators
@@ -475,7 +479,7 @@ def __bytes__(self):
Yields a bytestring in both py2/py3.
"""
encoding = com.get_option("display.encoding")
- return self.__unicode__().encode(encoding , 'replace')
+ return self.__unicode__().encode(encoding, 'replace')
def __unicode__(self):
"""
@@ -487,17 +491,18 @@ def __unicode__(self):
class_name = str(self.__class__)
shape = self.shape
- dims = u'Dimensions: %s' % ' x '.join([ "%d (%s)" % (s, a) for a,s in zip(self._AXIS_ORDERS,shape) ])
+ dims = u'Dimensions: %s' % ' x '.join(
+ ["%d (%s)" % (s, a) for a, s in zip(self._AXIS_ORDERS, shape)])
def axis_pretty(a):
- v = getattr(self,a)
+ v = getattr(self, a)
if len(v) > 0:
- return u'%s axis: %s to %s' % (a.capitalize(),com.pprint_thing(v[0]),com.pprint_thing(v[-1]))
+ return u'%s axis: %s to %s' % (a.capitalize(), com.pprint_thing(v[0]), com.pprint_thing(v[-1]))
else:
return u'%s axis: None' % a.capitalize()
-
- output = '\n'.join([class_name, dims] + [axis_pretty(a) for a in self._AXIS_ORDERS])
+ output = '\n'.join(
+ [class_name, dims] + [axis_pretty(a) for a in self._AXIS_ORDERS])
return output
def __repr__(self):
@@ -509,10 +514,10 @@ def __repr__(self):
return str(self)
def __iter__(self):
- return iter(getattr(self,self._info_axis))
+ return iter(getattr(self, self._info_axis))
def iteritems(self):
- for h in getattr(self,self._info_axis):
+ for h in getattr(self, self._info_axis):
yield h, self[h]
# Name that won't get automatically converted to items by 2to3. items is
@@ -546,7 +551,7 @@ def ix(self):
return self._ix
def _wrap_array(self, arr, axes, copy=False):
- d = self._construct_axes_dict_from(self, axes, copy = copy)
+ d = self._construct_axes_dict_from(self, axes, copy=copy)
return self._constructor(arr, **d)
fromDict = from_dict
@@ -592,7 +597,7 @@ def to_excel(self, path, na_rep=''):
# TODO: needed?
def keys(self):
- return list(getattr(self,self._info_axis))
+ return list(getattr(self, self._info_axis))
def _get_values(self):
self._consolidate_inplace()
@@ -642,19 +647,20 @@ def set_value(self, *args):
otherwise a new object
"""
# require an arg for each axis and the value
- assert(len(args) == self._AXIS_LEN+1)
+ assert(len(args) == self._AXIS_LEN + 1)
try:
frame = self._get_item_cache(args[0])
frame.set_value(*args[1:])
return self
except KeyError:
- axes = self._expand_axes(args)
- d = self._construct_axes_dict_from(self, axes, copy = False)
+ axes = self._expand_axes(args)
+ d = self._construct_axes_dict_from(self, axes, copy=False)
result = self.reindex(**d)
likely_dtype = com._infer_dtype(args[-1])
- made_bigger = not np.array_equal(axes[0], getattr(self,self._info_axis))
+ made_bigger = not np.array_equal(
+ axes[0], getattr(self, self._info_axis))
# how to make this logic simpler?
if made_bigger:
com._possibly_cast_item(result, args[0], likely_dtype)
@@ -668,7 +674,7 @@ def _box_item_values(self, key, values):
def __getattr__(self, name):
"""After regular attribute access, try looking up the name of an item.
This allows simpler access to items for interactive use."""
- if name in getattr(self,self._info_axis):
+ if name in getattr(self, self._info_axis):
return self[name]
raise AttributeError("'%s' object has no attribute '%s'" %
(type(self).__name__, name))
@@ -680,7 +686,8 @@ def _slice(self, slobj, axis=0):
def __setitem__(self, key, value):
shape = tuple(self.shape)
if isinstance(value, self._constructor_sliced):
- value = value.reindex(**self._construct_axes_dict_for_slice(self._AXIS_ORDERS[1:]))
+ value = value.reindex(
+ **self._construct_axes_dict_for_slice(self._AXIS_ORDERS[1:]))
mat = value.values
elif isinstance(value, np.ndarray):
assert(value.shape == shape[1:])
@@ -784,7 +791,7 @@ def reindex(self, major=None, minor=None, method=None,
major = _mut_exclusive(major, major_axis)
minor = _mut_exclusive(minor, minor_axis)
- al = self._AXIS_LEN
+ al = self._AXIS_LEN
# only allowing multi-index on Panel (and not > dims)
if (method is None and not self._is_mixed_type and al <= 3):
@@ -796,12 +803,12 @@ def reindex(self, major=None, minor=None, method=None,
pass
if major is not None:
- result = result._reindex_axis(major, method, al-2, copy)
+ result = result._reindex_axis(major, method, al - 2, copy)
if minor is not None:
- result = result._reindex_axis(minor, method, al-1, copy)
+ result = result._reindex_axis(minor, method, al - 1, copy)
- for i, a in enumerate(self._AXIS_ORDERS[0:al-2]):
+ for i, a in enumerate(self._AXIS_ORDERS[0:al - 2]):
a = kwargs.get(a)
if a is not None:
result = result._reindex_axis(a, method, i, copy)
@@ -880,7 +887,7 @@ def reindex_like(self, other, method=None):
-------
reindexed : Panel
"""
- d = other._construct_axes_dict(method = method)
+ d = other._construct_axes_dict(method=method)
return self.reindex(**d)
def dropna(self, axis=0, how='any'):
@@ -947,7 +954,7 @@ def _combine_frame(self, other, func, axis=0):
new_values = new_values.swapaxes(0, 2)
return self._constructor(new_values, self.items, self.major_axis,
- self.minor_axis)
+ self.minor_axis)
def _combine_panel(self, other, func):
items = self.items + other.items
@@ -1001,14 +1008,12 @@ def fillna(self, value=None, method=None):
new_data = self._data.fillna(value)
return self._constructor(new_data)
-
def ffill(self):
return self.fillna(method='ffill')
def bfill(self):
return self.fillna(method='bfill')
-
def major_xs(self, key, copy=True):
"""
Return slice of panel along major axis
@@ -1025,7 +1030,7 @@ def major_xs(self, key, copy=True):
y : DataFrame
index -> minor axis, columns -> items
"""
- return self.xs(key, axis=self._AXIS_LEN-2, copy=copy)
+ return self.xs(key, axis=self._AXIS_LEN - 2, copy=copy)
def minor_xs(self, key, copy=True):
"""
@@ -1043,7 +1048,7 @@ def minor_xs(self, key, copy=True):
y : DataFrame
index -> major axis, columns -> items
"""
- return self.xs(key, axis=self._AXIS_LEN-1, copy=copy)
+ return self.xs(key, axis=self._AXIS_LEN - 1, copy=copy)
def xs(self, key, axis=1, copy=True):
"""
@@ -1148,7 +1153,8 @@ def transpose(self, *args, **kwargs):
try:
kwargs[a] = args.pop(0)
except (IndexError):
- raise ValueError("not enough arguments specified to transpose!")
+ raise ValueError(
+ "not enough arguments specified to transpose!")
axes = [self._get_axis_number(kwargs[a]) for a in self._AXIS_ORDERS]
@@ -1156,7 +1162,8 @@ def transpose(self, *args, **kwargs):
if len(axes) != len(set(axes)):
raise ValueError('Must specify %s unique axes' % self._AXIS_LEN)
- new_axes = self._construct_axes_dict_from(self, [ self._get_axis(x) for x in axes])
+ new_axes = self._construct_axes_dict_from(
+ self, [self._get_axis(x) for x in axes])
new_values = self.values.transpose(tuple(axes))
if kwargs.get('copy') or (len(args) and args[-1]):
new_values = new_values.copy()
@@ -1266,9 +1273,10 @@ def _wrap_result(self, result, axis):
# do we have reduced dimensionalility?
if self.ndim == result.ndim:
return self._constructor(result, **self._construct_axes_dict())
- elif self.ndim == result.ndim+1:
+ elif self.ndim == result.ndim + 1:
return self._constructor_sliced(result, **self._extract_axes_for_slice(self, axes))
- raise PandasError("invalid _wrap_result [self->%s] [result->%s]" % (self.ndim,result.ndim))
+ raise PandasError("invalid _wrap_result [self->%s] [result->%s]" %
+ (self.ndim, result.ndim))
def count(self, axis='major'):
"""
@@ -1313,7 +1321,7 @@ def shift(self, lags, axis='major'):
vslicer = slice(None, -lags)
islicer = slice(lags, None)
elif lags == 0:
- vslicer = islicer =slice(None)
+ vslicer = islicer = slice(None)
else:
vslicer = slice(-lags, None)
islicer = slice(None, lags)
@@ -1428,8 +1436,8 @@ def update(self, other, join='left', overwrite=True, filter_func=None,
other = self._constructor(other)
axis = self._info_axis
- axis_values = getattr(self,axis)
- other = other.reindex(**{ axis : axis_values })
+ axis_values = getattr(self, axis)
+ other = other.reindex(**{axis: axis_values})
for frame in axis_values:
self[frame].update(other[frame], join, overwrite, filter_func,
@@ -1452,12 +1460,12 @@ def _get_join_index(self, other, how):
@staticmethod
def _extract_axes(self, data, axes, **kwargs):
""" return a list of the axis indicies """
- return [ self._extract_axis(self, data, axis=i, **kwargs) for i, a in enumerate(axes) ]
+ return [self._extract_axis(self, data, axis=i, **kwargs) for i, a in enumerate(axes)]
@staticmethod
def _extract_axes_for_slice(self, axes):
""" return the slice dictionary for these axes """
- return dict([ (self._AXIS_SLICEMAP[i], a) for i, a in zip(self._AXIS_ORDERS[self._AXIS_LEN-len(axes):],axes) ])
+ return dict([(self._AXIS_SLICEMAP[i], a) for i, a in zip(self._AXIS_ORDERS[self._AXIS_LEN - len(axes):], axes)])
@staticmethod
def _prep_ndarray(self, values, copy=True):
@@ -1496,10 +1504,12 @@ def _homogenize_dict(self, frames, intersect=True, dtype=None):
else:
adj_frames[k] = v
- axes = self._AXIS_ORDERS[1:]
- axes_dict = dict([ (a,ax) for a,ax in zip(axes,self._extract_axes(self, adj_frames, axes, intersect=intersect)) ])
+ axes = self._AXIS_ORDERS[1:]
+ axes_dict = dict([(a, ax) for a, ax in zip(axes, self._extract_axes(
+ self, adj_frames, axes, intersect=intersect))])
- reindex_dict = dict([ (self._AXIS_SLICEMAP[a],axes_dict[a]) for a in axes ])
+ reindex_dict = dict(
+ [(self._AXIS_SLICEMAP[a], axes_dict[a]) for a in axes])
reindex_dict['copy'] = False
for key, frame in adj_frames.iteritems():
if frame is not None:
@@ -1560,7 +1570,7 @@ def _add_aggregate_operations(cls):
Parameters
----------
-other : """ + "%s or %s" % (cls._constructor_sliced.__name__,cls.__name__) + """
+other : """ + "%s or %s" % (cls._constructor_sliced.__name__, cls.__name__) + """
axis : {""" + ', '.join(cls._AXIS_ORDERS) + "}" + """
Axis to broadcast over
@@ -1571,7 +1581,7 @@ def _add_aggregate_operations(cls):
def _panel_arith_method(op, name):
@Substitution(op)
@Appender(_agg_doc)
- def f(self, other, axis = 0):
+ def f(self, other, axis=0):
return self._combine(other, op, axis=axis)
f.__name__ = name
return f
@@ -1584,15 +1594,15 @@ def f(self, other, axis = 0):
cls.divide = cls.div = _panel_arith_method(operator.div, 'divide')
except AttributeError: # pragma: no cover
# Python 3
- cls.divide = cls.div = _panel_arith_method(operator.truediv, 'divide')
-
+ cls.divide = cls.div = _panel_arith_method(
+ operator.truediv, 'divide')
_agg_doc = """
Return %(desc)s over requested axis
Parameters
----------
-axis : {""" + ', '.join(cls._AXIS_ORDERS) + "} or {" + ', '.join([ str(i) for i in range(cls._AXIS_LEN) ]) + """}
+axis : {""" + ', '.join(cls._AXIS_ORDERS) + "} or {" + ', '.join([str(i) for i in range(cls._AXIS_LEN)]) + """}
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result
will be NA
@@ -1683,8 +1693,8 @@ def install_ipython_completers(): # pragma: no cover
@complete_object.when_type(Panel)
def complete_dataframe(obj, prev_completions):
- return prev_completions + [c for c in obj.keys() \
- if isinstance(c, basestring) and py3compat.isidentifier(c)]
+ return prev_completions + [c for c in obj.keys()
+ if isinstance(c, basestring) and py3compat.isidentifier(c)]
# Importing IPython brings in about 200 modules, so we want to avoid it unless
# we're in IPython (when those modules are loaded anyway).
diff --git a/pandas/core/panel4d.py b/pandas/core/panel4d.py
index 71e3e7d68e252..94c6941cb24f1 100644
--- a/pandas/core/panel4d.py
+++ b/pandas/core/panel4d.py
@@ -4,15 +4,14 @@
from pandas.core.panel import Panel
Panel4D = create_nd_panel_factory(
- klass_name = 'Panel4D',
- axis_orders = [ 'labels','items','major_axis','minor_axis'],
- axis_slices = { 'labels' : 'labels', 'items' : 'items',
- 'major_axis' : 'major_axis',
- 'minor_axis' : 'minor_axis' },
- slicer = Panel,
- axis_aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
- stat_axis = 2)
-
+ klass_name='Panel4D',
+ axis_orders=['labels', 'items', 'major_axis', 'minor_axis'],
+ axis_slices={'labels': 'labels', 'items': 'items',
+ 'major_axis': 'major_axis',
+ 'minor_axis': 'minor_axis'},
+ slicer=Panel,
+ axis_aliases={'major': 'major_axis', 'minor': 'minor_axis'},
+ stat_axis=2)
def panel4d_init(self, data=None, labels=None, items=None, major_axis=None,
@@ -39,5 +38,3 @@ def panel4d_init(self, data=None, labels=None, items=None, major_axis=None,
copy=copy, dtype=dtype)
Panel4D.__init__ = panel4d_init
-
-
diff --git a/pandas/core/panelnd.py b/pandas/core/panelnd.py
index b7d2e29a3c79d..7c816b7beeea6 100644
--- a/pandas/core/panelnd.py
+++ b/pandas/core/panelnd.py
@@ -2,7 +2,8 @@
import pandas.lib as lib
-def create_nd_panel_factory(klass_name, axis_orders, axis_slices, slicer, axis_aliases = None, stat_axis = 2):
+
+def create_nd_panel_factory(klass_name, axis_orders, axis_slices, slicer, axis_aliases=None, stat_axis=2):
""" manufacture a n-d class:
parameters
@@ -11,7 +12,7 @@ def create_nd_panel_factory(klass_name, axis_orders, axis_slices, slicer, axis_a
axis_orders : the names of the axes in order (highest to lowest)
axis_slices : a dictionary that defines how the axes map to the sliced axis
slicer : the class representing a slice of this panel
- axis_aliases: a dictionary defining aliases for various axes
+ axis_aliases: a dictionary defining aliases for various axes
default = { major : major_axis, minor : minor_axis }
stat_axis : the default statistic axis
default = 2
@@ -26,56 +27,57 @@ def create_nd_panel_factory(klass_name, axis_orders, axis_slices, slicer, axis_a
"""
# if slicer is a name, get the object
- if isinstance(slicer,basestring):
+ if isinstance(slicer, basestring):
import pandas
try:
- slicer = getattr(pandas,slicer)
+ slicer = getattr(pandas, slicer)
except:
raise Exception("cannot create this slicer [%s]" % slicer)
# build the klass
- klass = type(klass_name, (slicer,),{})
+ klass = type(klass_name, (slicer,), {})
# add the class variables
- klass._AXIS_ORDERS = axis_orders
- klass._AXIS_NUMBERS = dict([ (a,i) for i, a in enumerate(axis_orders) ])
- klass._AXIS_ALIASES = axis_aliases or dict()
- klass._AXIS_NAMES = dict([ (i,a) for i, a in enumerate(axis_orders) ])
+ klass._AXIS_ORDERS = axis_orders
+ klass._AXIS_NUMBERS = dict([(a, i) for i, a in enumerate(axis_orders)])
+ klass._AXIS_ALIASES = axis_aliases or dict()
+ klass._AXIS_NAMES = dict([(i, a) for i, a in enumerate(axis_orders)])
klass._AXIS_SLICEMAP = axis_slices
- klass._AXIS_LEN = len(axis_orders)
+ klass._AXIS_LEN = len(axis_orders)
klass._default_stat_axis = stat_axis
- klass._het_axis = 0
- klass._info_axis = axis_orders[klass._het_axis]
+ klass._het_axis = 0
+ klass._info_axis = axis_orders[klass._het_axis]
klass._constructor_sliced = slicer
# add the axes
for i, a in enumerate(axis_orders):
- setattr(klass,a,lib.AxisProperty(i))
+ setattr(klass, a, lib.AxisProperty(i))
#### define the methods ####
def __init__(self, *args, **kwargs):
if not (kwargs.get('data') or len(args)):
- raise Exception("must supply at least a data argument to [%s]" % klass_name)
+ raise Exception(
+ "must supply at least a data argument to [%s]" % klass_name)
if 'copy' not in kwargs:
kwargs['copy'] = False
if 'dtype' not in kwargs:
kwargs['dtype'] = None
- self._init_data( *args, **kwargs)
+ self._init_data(*args, **kwargs)
klass.__init__ = __init__
def _get_plane_axes(self, axis):
- axis = self._get_axis_name(axis)
- index = self._AXIS_ORDERS.index(axis)
+ axis = self._get_axis_name(axis)
+ index = self._AXIS_ORDERS.index(axis)
planes = []
if index:
planes.extend(self._AXIS_ORDERS[0:index])
if index != self._AXIS_LEN:
- planes.extend(self._AXIS_ORDERS[index+1:])
+ planes.extend(self._AXIS_ORDERS[index + 1:])
- return [ getattr(self,p) for p in planes ]
+ return [getattr(self, p) for p in planes]
klass._get_plane_axes = _get_plane_axes
def _combine(self, other, func, axis=0):
@@ -89,27 +91,26 @@ def _combine_with_constructor(self, other, func):
# combine labels to form new axes
new_axes = []
for a in self._AXIS_ORDERS:
- new_axes.append(getattr(self,a) + getattr(other,a))
+ new_axes.append(getattr(self, a) + getattr(other, a))
# reindex: could check that everything's the same size, but forget it
- d = dict([ (a,ax) for a,ax in zip(self._AXIS_ORDERS,new_axes) ])
+ d = dict([(a, ax) for a, ax in zip(self._AXIS_ORDERS, new_axes)])
d['copy'] = False
this = self.reindex(**d)
other = other.reindex(**d)
-
+
result_values = func(this.values, other.values)
return self._constructor(result_values, **d)
klass._combine_with_constructor = _combine_with_constructor
# set as NonImplemented operations which we don't support
- for f in ['to_frame','to_excel','to_sparse','groupby','join','filter','dropna','shift','take']:
+ for f in ['to_frame', 'to_excel', 'to_sparse', 'groupby', 'join', 'filter', 'dropna', 'shift', 'take']:
def func(self, *args, **kwargs):
raise NotImplementedError
- setattr(klass,f,func)
+ setattr(klass, f, func)
# add the aggregate operations
klass._add_aggregate_operations()
return klass
-
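The `panelnd.py` changes above are purely stylistic, but the factory they touch is worth a note: `create_nd_panel_factory` manufactures the panel class at runtime with `type()`, then attaches class-level axis tables and binds methods after the fact. A minimal standalone sketch of that dynamic-class pattern (the names `make_labeled_class` and `Panel4DLike` are illustrative, not pandas API):

```python
# Sketch of the dynamic-class pattern used by create_nd_panel_factory:
# build a class with type(), attach class attributes, then bind methods.

def make_labeled_class(klass_name, axis_orders):
    # type(name, bases, dict) manufactures a new class object
    klass = type(klass_name, (object,), {})

    # class-level lookup tables, mirroring _AXIS_NUMBERS / _AXIS_NAMES
    klass._AXIS_ORDERS = axis_orders
    klass._AXIS_NUMBERS = dict((a, i) for i, a in enumerate(axis_orders))
    klass._AXIS_NAMES = dict((i, a) for i, a in enumerate(axis_orders))

    # bind a method after the class exists, as panelnd.py does with __init__
    def axis_number(self, name):
        return self._AXIS_NUMBERS[name]
    klass.axis_number = axis_number

    return klass


Panel4DLike = make_labeled_class(
    'Panel4DLike', ['labels', 'items', 'major_axis', 'minor_axis'])
obj = Panel4DLike()
```

This is why the diff can define `__init__`, `_get_plane_axes`, etc. as plain functions and assign them onto `klass` afterwards.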
diff --git a/pandas/core/reshape.py b/pandas/core/reshape.py
index 86923df06c376..f01ea567850e4 100644
--- a/pandas/core/reshape.py
+++ b/pandas/core/reshape.py
@@ -156,8 +156,8 @@ def get_new_values(self):
# is there a simpler / faster way of doing this?
for i in xrange(values.shape[1]):
- chunk = new_values[:, i * width : (i + 1) * width]
- mask_chunk = new_mask[:, i * width : (i + 1) * width]
+ chunk = new_values[:, i * width: (i + 1) * width]
+ mask_chunk = new_mask[:, i * width: (i + 1) * width]
chunk.flat[self.mask] = self.sorted_values[:, i]
mask_chunk.flat[self.mask] = True
@@ -394,9 +394,10 @@ def _unstack_frame(obj, level):
value_columns=obj.columns)
return unstacker.get_result()
+
def get_compressed_ids(labels, sizes):
# no overflow
- if _long_prod(sizes) < 2**63:
+ if _long_prod(sizes) < 2 ** 63:
group_index = get_group_index(labels, sizes)
comp_index, obs_ids = _compress_group_index(group_index)
else:
@@ -405,9 +406,9 @@ def get_compressed_ids(labels, sizes):
for v in labels:
mask |= v < 0
- while _long_prod(sizes) >= 2**63:
+ while _long_prod(sizes) >= 2 ** 63:
i = len(sizes)
- while _long_prod(sizes[:i]) >= 2**63:
+ while _long_prod(sizes[:i]) >= 2 ** 63:
i -= 1
rem_index, rem_ids = get_compressed_ids(labels[:i],
@@ -419,12 +420,14 @@ def get_compressed_ids(labels, sizes):
return comp_index, obs_ids
+
def _long_prod(vals):
result = 1L
for x in vals:
result *= x
return result
+
def stack(frame, level=-1, dropna=True):
"""
Convert DataFrame to Series with multi-level Index. Columns become the
@@ -793,6 +796,7 @@ def block2d_to_block3d(values, items, shape, major_labels, minor_labels,
return make_block(pvalues, items, ref_items)
+
def block2d_to_blocknd(values, items, shape, labels, ref_items=None):
""" pivot to the labels shape """
from pandas.core.internals import make_block
@@ -802,7 +806,7 @@ def block2d_to_blocknd(values, items, shape, labels, ref_items=None):
# Create observation selection vector using major and minor
# labels, for converting to panel format.
- selector = factor_indexer(shape[1:],labels)
+ selector = factor_indexer(shape[1:], labels)
mask = np.zeros(np.prod(shape), dtype=bool)
mask.put(selector, True)
@@ -822,7 +826,8 @@ def block2d_to_blocknd(values, items, shape, labels, ref_items=None):
return make_block(pvalues, items, ref_items)
+
def factor_indexer(shape, labels):
""" given a tuple of shape and a list of Factor lables, return the expanded label indexer """
- mult = np.array(shape)[::-1].cumprod()[::-1]
- return np.sum(np.array(labels).T * np.append(mult,[1]), axis=1).T
+ mult = np.array(shape)[::-1].cumprod()[::-1]
+ return np.sum(np.array(labels).T * np.append(mult, [1]), axis=1).T
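The `factor_indexer` helper being re-spaced here is dense: a reverse cumulative product of the trailing shape yields per-axis strides, and the dot product with the label arrays gives row-major flat offsets. A hedged standalone equivalent (the shape and labels below are made up for illustration):

```python
import numpy as np

def factor_indexer_sketch(shape, labels):
    # strides for a row-major layout of the trailing dimensions:
    # e.g. shape (4,) -> multipliers [4, 1]
    mult = np.array(shape)[::-1].cumprod()[::-1]
    # each observation's flat offset is the stride-weighted sum of its labels
    return np.sum(np.array(labels).T * np.append(mult, [1]), axis=1).T

major = np.array([0, 1, 2])
minor = np.array([1, 2, 3])
# with 4 minor positions, the flat offset is major * 4 + minor
flat = factor_indexer_sketch((4,), [major, minor])
```

For inputs like these it agrees with `np.ravel_multi_index`, which is the library shorthand for the same stride arithmetic.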
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 9e0e0571239c3..56b6e06844b86 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -172,7 +172,7 @@ def na_op(x, y):
if isinstance(y, np.ndarray):
if (x.dtype == np.bool_ and
- y.dtype == np.bool_): # pragma: no cover
+ y.dtype == np.bool_): # pragma: no cover
result = op(x, y) # when would this be hit?
else:
x = com._ensure_object(x)
@@ -644,7 +644,7 @@ def __setitem__(self, key, value):
return
except KeyError:
if (com.is_integer(key)
- and not self.index.inferred_type == 'integer'):
+ and not self.index.inferred_type == 'integer'):
values[key] = value
return
@@ -768,7 +768,7 @@ def convert_objects(self, convert_dates=True):
"""
if self.dtype == np.object_:
return Series(lib.maybe_convert_objects(
- self, convert_datetime=convert_dates), self.index)
+ self, convert_datetime=convert_dates), self.index)
return self
def repeat(self, reps):
@@ -932,7 +932,6 @@ def reset_index(self, level=None, drop=False, name=None, inplace=False):
return df.reset_index(level=level, drop=drop)
-
def __str__(self):
"""
Return a string representation for a particular DataFrame
@@ -953,7 +952,7 @@ def __bytes__(self):
Yields a bytestring in both py2/py3.
"""
encoding = com.get_option("display.encoding")
- return self.__unicode__().encode(encoding , 'replace')
+ return self.__unicode__().encode(encoding, 'replace')
def __unicode__(self):
"""
@@ -1001,7 +1000,8 @@ def _tidy_repr(self, max_vals=20):
return unicode(result)
def _repr_footer(self):
- namestr = u"Name: %s, " % com.pprint_thing(self.name) if self.name is not None else ""
+ namestr = u"Name: %s, " % com.pprint_thing(
+ self.name) if self.name is not None else ""
return u'%sLength: %d' % (namestr, len(self))
def to_string(self, buf=None, na_rep='NaN', float_format=None,
@@ -1341,7 +1341,7 @@ def max(self, axis=None, out=None, skipna=True, level=None):
@Substitution(name='standard deviation', shortname='stdev',
na_action=_doc_exclude_na, extras='')
@Appender(_stat_doc +
- """
+ """
Normalized by N-1 (unbiased estimator).
""")
def std(self, axis=None, dtype=None, out=None, ddof=1, skipna=True,
@@ -1354,7 +1354,7 @@ def std(self, axis=None, dtype=None, out=None, ddof=1, skipna=True,
@Substitution(name='variance', shortname='var',
na_action=_doc_exclude_na, extras='')
@Appender(_stat_doc +
- """
+ """
Normalized by N-1 (unbiased estimator).
""")
def var(self, axis=None, dtype=None, out=None, ddof=1, skipna=True,
@@ -1632,10 +1632,11 @@ def pretty_name(x):
names = ['count']
data = [self.count()]
names += ['mean', 'std', 'min', pretty_name(lb), '50%',
- pretty_name(ub), 'max']
+ pretty_name(ub), 'max']
data += [self.mean(), self.std(), self.min(),
- self.quantile(lb), self.median(), self.quantile(ub),
- self.max()]
+ self.quantile(
+ lb), self.median(), self.quantile(ub),
+ self.max()]
return Series(data, index=names)
@@ -1932,7 +1933,7 @@ def sort(self, axis=0, kind='quicksort', order=None):
true_base = true_base.base
if (true_base is not None and
- (true_base.ndim != 1 or true_base.shape != self.shape)):
+ (true_base.ndim != 1 or true_base.shape != self.shape)):
raise Exception('This Series is a view of some other array, to '
'sort in-place you must create a copy')
@@ -2357,7 +2358,7 @@ def reindex(self, index=None, method=None, level=None, fill_value=np.nan,
return Series(nan, index=index, name=self.name)
new_index, indexer = self.index.reindex(index, method=method,
- level=level, limit=limit)
+ level=level, limit=limit)
new_values = com.take_1d(self.values, indexer, fill_value=fill_value)
return Series(new_values, index=new_index, name=self.name)
@@ -3014,7 +3015,7 @@ def _try_cast(arr):
subarr = data.copy()
else:
if (com.is_datetime64_dtype(data.dtype) and
- not com.is_datetime64_dtype(dtype)):
+ not com.is_datetime64_dtype(dtype)):
if dtype == object:
ints = np.asarray(data).view('i8')
subarr = tslib.ints_to_pydatetime(ints)
@@ -3058,7 +3059,7 @@ def _try_cast(arr):
subarr = np.empty(len(index), dtype=dtype)
else:
# need to possibly convert the value here
- value = com._possibly_cast_to_datetime(value, dtype)
+ value = com._possibly_cast_to_datetime(value, dtype)
subarr = np.empty(len(index), dtype=dtype)
subarr.fill(value)
else:
@@ -3145,7 +3146,8 @@ def _repr_footer(self):
else:
freqstr = ''
- namestr = "Name: %s, " % str(self.name) if self.name is not None else ""
+ namestr = "Name: %s, " % str(
+ self.name) if self.name is not None else ""
return '%s%sLength: %d' % (freqstr, namestr, len(self))
def to_timestamp(self, freq=None, how='start', copy=True):
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 29553afc65d28..cac9e84412cfa 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -247,6 +247,7 @@ def str_replace(arr, pat, repl, n=-1, case=True, flags=0):
flags |= re.IGNORECASE
regex = re.compile(pat, flags=flags)
n = n if n >= 0 else 0
+
def f(x):
return regex.sub(repl, x, count=n)
else:
diff --git a/pandas/io/auth.py b/pandas/io/auth.py
index 471436cb1b6bf..a44618eba8921 100644
--- a/pandas/io/auth.py
+++ b/pandas/io/auth.py
@@ -24,12 +24,14 @@
import oauth2client.tools as tools
OOB_CALLBACK_URN = oauth.OOB_CALLBACK_URN
+
class AuthenticationConfigError(ValueError):
pass
FLOWS = {}
FLAGS = gflags.FLAGS
-DEFAULT_SECRETS = os.path.join(os.path.dirname(__file__), 'client_secrets.json')
+DEFAULT_SECRETS = os.path.join(
+ os.path.dirname(__file__), 'client_secrets.json')
DEFAULT_SCOPE = 'https://www.googleapis.com/auth/analytics.readonly'
DEFAULT_TOKEN_FILE = os.path.join(os.path.dirname(__file__), 'analytics.dat')
MISSING_CLIENT_MSG = """
@@ -53,6 +55,7 @@ class AuthenticationConfigError(ValueError):
# the API without having to login each time. Make sure this file is in
# a secure place.
+
def process_flags(flags=[]):
"""Uses the command-line flags to set the logging level.
@@ -62,14 +65,15 @@ def process_flags(flags=[]):
# Let the gflags module process the command-line arguments.
try:
- FLAGS(flags)
+ FLAGS(flags)
except gflags.FlagsError, e:
- print '%s\nUsage: %s ARGS\n%s' % (e, str(flags), FLAGS)
- sys.exit(1)
+ print '%s\nUsage: %s ARGS\n%s' % (e, str(flags), FLAGS)
+ sys.exit(1)
# Set the logging according to the command-line flag.
logging.getLogger().setLevel(getattr(logging, FLAGS.logging_level))
+
def get_flow(secret, scope, redirect):
"""
Retrieve an authentication flow object based on the given
@@ -88,12 +92,14 @@ def get_flow(secret, scope, redirect):
FLOWS[key] = flow
return flow
+
def make_token_store(fpath=None):
"""create token storage from give file name"""
if fpath is None:
fpath = DEFAULT_TOKEN_FILE
return auth_file.Storage(fpath)
+
def authenticate(flow, storage=None):
"""
Try to retrieve a valid set of credentials from the token store if possible
@@ -115,6 +121,7 @@ def authenticate(flow, storage=None):
http = credentials.authorize(http)
return http
+
def init_service(http):
"""
Use the given http object to build the analytics service object
diff --git a/pandas/io/data.py b/pandas/io/data.py
index 7fca9a6d7867c..e4457d141e92c 100644
--- a/pandas/io/data.py
+++ b/pandas/io/data.py
@@ -18,7 +18,7 @@
def DataReader(name, data_source=None, start=None, end=None,
- retry_count=3, pause=0):
+ retry_count=3, pause=0):
"""
Imports data from a number of online sources.
@@ -55,7 +55,7 @@ def DataReader(name, data_source=None, start=None, end=None,
if(data_source == "yahoo"):
return get_data_yahoo(name=name, start=start, end=end,
- retry_count=retry_count, pause=pause)
+ retry_count=retry_count, pause=pause)
elif(data_source == "fred"):
return get_data_fred(name=name, start=start, end=end)
elif(data_source == "famafrench"):
@@ -80,16 +80,17 @@ def get_quote_yahoo(symbols):
Returns a DataFrame
"""
if not isinstance(symbols, list):
- raise TypeError, "symbols must be a list"
+ raise TypeError("symbols must be a list")
# for codes see: http://www.gummy-stuff.org/Yahoo-data.htm
codes = {'symbol': 's', 'last': 'l1', 'change_pct': 'p2', 'PE': 'r',
'time': 't1', 'short_ratio': 's7'}
- request = str.join('',codes.values()) # code request string
+ request = str.join('', codes.values()) # code request string
header = codes.keys()
data = dict(zip(codes.keys(), [[] for i in range(len(codes))]))
- urlStr = 'http://finance.yahoo.com/d/quotes.csv?s=%s&f=%s' % (str.join('+', symbols), request)
+ urlStr = 'http://finance.yahoo.com/d/quotes.csv?s=%s&f=%s' % (
+ str.join('+', symbols), request)
try:
lines = urllib2.urlopen(urlStr).readlines()
@@ -132,14 +133,14 @@ def get_data_yahoo(name=None, start=None, end=None, retry_count=3, pause=0):
yahoo_URL = 'http://ichart.yahoo.com/table.csv?'
url = yahoo_URL + 's=%s' % name + \
- '&a=%s' % (start.month - 1) + \
- '&b=%s' % start.day + \
- '&c=%s' % start.year + \
- '&d=%s' % (end.month - 1) + \
- '&e=%s' % end.day + \
- '&f=%s' % end.year + \
- '&g=d' + \
- '&ignore=.csv'
+ '&a=%s' % (start.month - 1) + \
+ '&b=%s' % start.day + \
+ '&c=%s' % start.year + \
+ '&d=%s' % (end.month - 1) + \
+ '&e=%s' % end.day + \
+ '&f=%s' % end.year + \
+ '&g=d' + \
+ '&ignore=.csv'
for _ in range(retry_count):
resp = urllib2.urlopen(url)
@@ -178,7 +179,7 @@ def get_data_fred(name=None, start=dt.datetime(2010, 1, 1),
fred_URL = "http://research.stlouisfed.org/fred2/series/"
url = fred_URL + '%s' % name + \
- '/downloaddata/%s' % name + '.csv'
+ '/downloaddata/%s' % name + '.csv'
data = read_csv(urllib.urlopen(url), index_col=0, parse_dates=True,
header=None, skiprows=1, names=["DATE", name])
return data.truncate(start, end)
@@ -198,15 +199,19 @@ def get_data_famafrench(name, start=None, end=None):
datasets = {}
for i in range(len(file_edges) - 1):
- dataset = [d.split() for d in data[(file_edges[i] + 1):file_edges[i + 1]]]
+ dataset = [d.split() for d in data[(file_edges[i] + 1):
+ file_edges[i + 1]]]
if(len(dataset) > 10):
ncol = np.median(np.array([len(d) for d in dataset]))
- header_index = np.where(np.array([len(d) for d in dataset]) == (ncol - 1))[0][-1]
+ header_index = np.where(
+ np.array([len(d) for d in dataset]) == (ncol - 1))[0][-1]
header = dataset[header_index]
# to ensure the header is unique
header = [str(j + 1) + " " + header[j] for j in range(len(header))]
- index = np.array([d[0] for d in dataset[(header_index + 1):]], dtype=int)
- dataset = np.array([d[1:] for d in dataset[(header_index + 1):]], dtype=float)
+ index = np.array(
+ [d[0] for d in dataset[(header_index + 1):]], dtype=int)
+ dataset = np.array(
+ [d[1:] for d in dataset[(header_index + 1):]], dtype=float)
datasets[i] = DataFrame(dataset, index, columns=header)
return datasets
diff --git a/pandas/io/ga.py b/pandas/io/ga.py
index a433a4add7478..bcaf6bd6ec758 100644
--- a/pandas/io/ga.py
+++ b/pandas/io/ga.py
@@ -86,6 +86,7 @@
Local host redirect if unspecified
"""
+
@Substitution(extras=_AUTH_PARAMS)
@Appender(_GA_READER_DOC)
def read_ga(metrics, dimensions, start_date, **kwargs):
@@ -95,6 +96,7 @@ def read_ga(metrics, dimensions, start_date, **kwargs):
return reader.get_data(metrics=metrics, start_date=start_date,
dimensions=dimensions, **kwargs)
+
class OAuthDataReader(object):
"""
Abstract class for handling OAuth2 authentication using the Google
@@ -280,7 +282,6 @@ def _read(start, result_size):
raise ValueError('Google API error %s: %s' % (inst.resp.status,
inst._get_reason()))
-
if chunksize is None:
return _read(start_index, max_results)
@@ -333,7 +334,7 @@ def format_query(ids, metrics, start_date, end_date=None, dimensions=None,
max_results=10000, **kwargs):
if isinstance(metrics, basestring):
metrics = [metrics]
- met =','.join(['ga:%s' % x for x in metrics])
+ met = ','.join(['ga:%s' % x for x in metrics])
start_date = pd.to_datetime(start_date).strftime('%Y-%m-%d')
if end_date is None:
@@ -358,6 +359,7 @@ def format_query(ids, metrics, start_date, end_date=None, dimensions=None,
return qry
+
def _maybe_add_arg(query, field, data):
if data is not None:
if isinstance(data, basestring):
@@ -365,6 +367,7 @@ def _maybe_add_arg(query, field, data):
data = ','.join(['ga:%s' % x for x in data])
query[field] = data
+
def _get_match(obj_store, name, id, **kwargs):
key, val = None, None
if len(kwargs) > 0:
@@ -385,6 +388,7 @@ def _get_match(obj_store, name, id, **kwargs):
if name_ok(item) or id_ok(item) or key_ok(item):
return item
+
def _clean_index(index_dims, parse_dates):
_should_add = lambda lst: pd.Index(lst).isin(index_dims).all()
to_remove = []
@@ -413,17 +417,21 @@ def _clean_index(index_dims, parse_dates):
def _get_col_names(header_info):
return [x['name'][3:] for x in header_info]
+
def _get_column_types(header_info):
return [(x['name'][3:], x['columnType']) for x in header_info]
+
def _get_dim_names(header_info):
return [x['name'][3:] for x in header_info
if x['columnType'] == u'DIMENSION']
+
def _get_met_names(header_info):
return [x['name'][3:] for x in header_info
if x['columnType'] == u'METRIC']
+
def _get_data_types(header_info):
return [(x['name'][3:], TYPE_MAP.get(x['dataType'], object))
for x in header_info]
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index c24289fe2d063..c6a904b931c98 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -168,6 +168,7 @@ def _is_url(url):
else:
return False
+
def _read(filepath_or_buffer, kwds):
"Generic reader of line files."
encoding = kwds.get('encoding', None)
@@ -412,7 +413,7 @@ def read_fwf(filepath_or_buffer, colspecs=None, widths=None, **kwds):
if widths is not None:
colspecs, col = [], 0
for w in widths:
- colspecs.append((col, col+w))
+ colspecs.append((col, col + w))
col += w
kwds['colspecs'] = colspecs
@@ -420,7 +421,6 @@ def read_fwf(filepath_or_buffer, colspecs=None, widths=None, **kwds):
return _read(filepath_or_buffer, kwds)
-
def read_clipboard(**kwargs): # pragma: no cover
"""
Read text from clipboard and pass to read_table. See read_table for the
@@ -647,6 +647,7 @@ def _create_index(self, col_dict, columns):
def _is_index_col(col):
return col is not None and col is not False
+
class ParserBase(object):
def __init__(self, kwds):
@@ -786,7 +787,6 @@ def _agg_index(self, index, try_parse_dates=True):
return index
-
def _convert_to_ndarrays(self, dct, na_values, verbose=False,
converters=None):
result = {}
@@ -904,7 +904,7 @@ def __init__(self, src, **kwds):
if not self._has_complex_date_col:
if (self._reader.leading_cols == 0 and
- _is_index_col(self.index_col)):
+ _is_index_col(self.index_col)):
self._name_processed = True
(self.index_names, self.names,
@@ -1022,7 +1022,6 @@ def _maybe_parse_dates(self, values, index, try_parse_dates=True):
return values
-
def TextParser(*args, **kwds):
"""
Converts lists of lists/tuples into DataFrames with proper type inference
@@ -1085,6 +1084,7 @@ def TextParser(*args, **kwds):
def count_empty_vals(vals):
return sum([1 for v in vals if v == '' or v is None])
+
def _wrap_compressed(f, compression):
compression = compression.lower()
if compression == 'gzip':
@@ -1096,6 +1096,7 @@ def _wrap_compressed(f, compression):
raise ValueError('do not recognize compression method %s'
% compression)
+
class PythonParser(ParserBase):
def __init__(self, f, **kwds):
@@ -1136,7 +1137,6 @@ def __init__(self, f, **kwds):
self.comment = kwds['comment']
self._comment_lines = []
-
if isinstance(f, basestring):
f = com._get_handle(f, 'r', encoding=self.encoding,
compression=self.compression)
@@ -1243,13 +1243,13 @@ def read(self, rows=None):
self._first_chunk = False
columns = list(self.orig_names)
- if len(content) == 0: # pragma: no cover
+ if len(content) == 0: # pragma: no cover
# DataFrame with the right metadata, even though it's length 0
return _get_empty_meta(self.orig_names,
self.index_col,
self.index_names)
- #handle new style for names in index
+ # handle new style for names in index
count_empty_content_vals = count_empty_vals(content[0])
indexnamerow = None
if self.has_index_names and count_empty_content_vals == len(columns):
@@ -1366,7 +1366,7 @@ def _check_comments(self, lines):
rl = []
for x in l:
if (not isinstance(x, basestring) or
- self.comment not in x):
+ self.comment not in x):
rl.append(x)
else:
x = x[:x.find(self.comment)]
@@ -1386,7 +1386,7 @@ def _check_thousands(self, lines):
for x in l:
if (not isinstance(x, basestring) or
self.thousands not in x or
- nonnum.search(x.strip())):
+ nonnum.search(x.strip())):
rl.append(x)
else:
rl.append(x.replace(',', ''))
@@ -1707,7 +1707,6 @@ def _get_na_values(col, na_values):
return na_values
-
def _get_col_names(colspec, columns):
colset = set(columns)
colnames = []
@@ -1859,7 +1858,7 @@ def parse(self, sheetname, header=0, skiprows=None, skip_footer=0,
# has_index_names: boolean, default False
# True if the cols defined in index_col have an index name and are
# not in the header
- has_index_names=False # removed as new argument of API function
+ has_index_names = False # removed as new argument of API function
skipfooter = kwds.pop('skipfooter', None)
if skipfooter is not None:
@@ -1892,13 +1891,13 @@ def _range2cols(areas):
"""
def _excel2num(x):
"Convert Excel column name like 'AB' to 0-based column index"
- return reduce(lambda s,a: s*26+ord(a)-ord('A')+1, x.upper().strip(), 0)-1
+ return reduce(lambda s, a: s * 26 + ord(a) - ord('A') + 1, x.upper().strip(), 0) - 1
cols = []
for rng in areas.split(','):
if ':' in rng:
rng = rng.split(':')
- cols += range(_excel2num(rng[0]), _excel2num(rng[1])+1)
+ cols += range(_excel2num(rng[0]), _excel2num(rng[1]) + 1)
else:
cols.append(_excel2num(rng))
return cols
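`_excel2num` in the hunk above is only being re-spaced, but its one-liner deserves unpacking: it reads an Excel column name as a bijective base-26 numeral, folding each letter in with `s * 26 + ord(a) - ord('A') + 1` and subtracting 1 at the end to get a 0-based index. A commented standalone version of the same logic:

```python
from functools import reduce  # builtin in the Python 2 code above

def excel2num(x):
    # 'A' -> 0, 'Z' -> 25, 'AA' -> 26, 'AB' -> 27
    # Each letter contributes 1..26 (bijective base 26); the trailing -1
    # converts the 1-based total to a 0-based column index.
    return reduce(lambda s, a: s * 26 + ord(a) - ord('A') + 1,
                  x.upper().strip(), 0) - 1
```

The `+1` per letter is what distinguishes this from ordinary base 26: there is no zero digit, so 'AA' follows 'Z' the way Excel columns do.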
@@ -1968,7 +1967,7 @@ def _parse_xls(self, sheetname, header=0, skiprows=None,
if typ == XL_CELL_DATE:
dt = xldate_as_tuple(value, datemode)
# how to produce this first case?
- if dt[0] < datetime.MINYEAR: # pragma: no cover
+ if dt[0] < datetime.MINYEAR: # pragma: no cover
value = datetime.time(*dt[3:])
else:
value = datetime.datetime(*dt)
@@ -2022,6 +2021,7 @@ def to_xls(style_dict, num_format_str=None):
style_dict: style dictionary to convert
"""
import xlwt
+
def style_to_xlwt(item, firstlevel=True, field_sep=',', line_sep=';'):
"""helper wich recursively generate an xlwt easy style string
for example:
@@ -2079,7 +2079,7 @@ def to_xlsx(style_dict):
for nk, nv in value.items():
if key == "borders":
(xls_style.borders.__getattribute__(nk)
- .__setattr__('border_style', nv))
+ .__setattr__('border_style', nv))
else:
xls_style.__getattribute__(key).__setattr__(nk, nv)
@@ -2087,7 +2087,7 @@ def to_xlsx(style_dict):
def _conv_value(val):
- #convert value for excel dump
+ # convert value for excel dump
if isinstance(val, np.int64):
val = int(val)
elif isinstance(val, np.bool8):
@@ -2115,12 +2115,12 @@ def __init__(self, path):
import xlwt
self.book = xlwt.Workbook()
self.fm_datetime = xlwt.easyxf(
- num_format_str='YYYY-MM-DD HH:MM:SS')
+ num_format_str='YYYY-MM-DD HH:MM:SS')
self.fm_date = xlwt.easyxf(num_format_str='YYYY-MM-DD')
else:
from openpyxl.workbook import Workbook
- self.book = Workbook()#optimized_write=True)
- #open pyxl 1.6.1 adds a dummy sheet remove it
+ self.book = Workbook() # optimized_write=True)
+ # open pyxl 1.6.1 adds a dummy sheet remove it
if self.book.worksheets:
self.book.remove_sheet(self.book.worksheets[0])
self.path = path
@@ -2175,15 +2175,15 @@ def _writecells_xlsx(self, cells, sheet_name, startrow, startcol):
style = CellStyleConverter.to_xlsx(cell.style)
for field in style.__fields__:
xcell.style.__setattr__(field,
- style.__getattribute__(field))
+ style.__getattribute__(field))
if isinstance(cell.val, datetime.datetime):
xcell.style.number_format.format_code = "YYYY-MM-DD HH:MM:SS"
elif isinstance(cell.val, datetime.date):
xcell.style.number_format.format_code = "YYYY-MM-DD"
- #merging requires openpyxl latest (works on 1.6.1)
- #todo add version check
+ # merging requires openpyxl latest (works on 1.6.1)
+ # todo add version check
if cell.mergestart is not None and cell.mergeend is not None:
cletterstart = get_column_letter(startcol + cell.col + 1)
cletterend = get_column_letter(startcol + cell.mergeend + 1)
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index a4cf413269c49..1469620ea01f2 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -37,7 +37,9 @@
# versioning attribute
_version = '0.10.1'
-class IncompatibilityWarning(Warning): pass
+
+class IncompatibilityWarning(Warning):
+ pass
# reading and writing the full object in one go
_TYPE_MAP = {
@@ -47,7 +49,7 @@ class IncompatibilityWarning(Warning): pass
DataFrame: 'frame',
SparseDataFrame: 'sparse_frame',
Panel: 'wide',
- Panel4D : 'ndim',
+ Panel4D: 'ndim',
SparsePanel: 'sparse_panel'
}
@@ -80,15 +82,16 @@ class IncompatibilityWarning(Warning): pass
# axes map
_AXES_MAP = {
- DataFrame : [0],
- Panel : [1,2],
- Panel4D : [1,2,3],
+ DataFrame: [0],
+ Panel: [1, 2],
+ Panel4D: [1, 2, 3],
}
# oh the troubles to reduce import time
_table_mod = None
_table_supports_index = False
+
def _tables():
global _table_mod
global _table_supports_index
@@ -106,6 +109,7 @@ def _tables():
return _table_mod
+
@contextmanager
def get_store(path, mode='a', complevel=None, complib=None,
fletcher32=False):
@@ -212,7 +216,7 @@ def __contains__(self, key):
node = self.get_node(key)
if node is not None:
name = node._v_pathname
- return re.search(key,name) is not None
+ return re.search(key, name) is not None
return False
def __len__(self):
@@ -223,10 +227,10 @@ def __repr__(self):
groups = self.groups()
if len(groups) > 0:
- keys = []
+ keys = []
values = []
- for n in sorted(groups, key = lambda x: x._v_name):
- kind = getattr(n._v_attrs,'pandas_type',None)
+ for n in sorted(groups, key=lambda x: x._v_name):
+ kind = getattr(n._v_attrs, 'pandas_type', None)
keys.append(str(n._v_pathname))
@@ -253,7 +257,7 @@ def keys(self):
Return a (potentially unordered) list of the keys corresponding to the
objects stored in the HDFStore. These are ABSOLUTE path-names (e.g. have the leading '/'
"""
- return [ n._v_pathname for n in self.groups() ]
+ return [n._v_pathname for n in self.groups()]
def open(self, mode='a', warn=True):
"""
@@ -355,7 +359,7 @@ def select_as_coordinates(self, key, where=None, **kwargs):
-------------------
where : list of Term (or convertable) objects, optional
"""
- return self.get_table(key).read_coordinates(where = where, **kwargs)
+ return self.get_table(key).read_coordinates(where=where, **kwargs)
def unique(self, key, column, **kwargs):
"""
@@ -372,7 +376,7 @@ def unique(self, key, column, **kwargs):
raises ValueError if the column can not be extracted indivually (it is part of a data block)
"""
- return self.get_table(key).read_column(column = column, **kwargs)
+ return self.get_table(key).read_column(column=column, **kwargs)
def select_as_multiple(self, keys, where=None, selector=None, columns=None, **kwargs):
""" Retrieve pandas objects from multiple tables
@@ -389,12 +393,12 @@ def select_as_multiple(self, keys, where=None, selector=None, columns=None, **kw
"""
# default to single select
- if isinstance(keys, (list,tuple)) and len(keys) == 1:
+ if isinstance(keys, (list, tuple)) and len(keys) == 1:
keys = keys[0]
- if isinstance(keys,basestring):
- return self.select(key = keys, where=where, columns = columns, **kwargs)
+ if isinstance(keys, basestring):
+ return self.select(key=keys, where=where, columns=columns, **kwargs)
- if not isinstance(keys, (list,tuple)):
+ if not isinstance(keys, (list, tuple)):
raise Exception("keys must be a list/tuple")
if len(keys) == 0:
@@ -404,7 +408,7 @@ def select_as_multiple(self, keys, where=None, selector=None, columns=None, **kw
selector = keys[0]
# collect the tables
- tbls = [ self.get_table(k) for k in keys ]
+ tbls = [self.get_table(k) for k in keys]
# validate rows
nrows = tbls[0].nrows
@@ -416,13 +420,13 @@ def select_as_multiple(self, keys, where=None, selector=None, columns=None, **kw
c = self.select_as_coordinates(selector, where)
# collect the returns objs
- objs = [ t.read(where = c, columns = columns) for t in tbls ]
+ objs = [t.read(where=c, columns=columns) for t in tbls]
# axis is the concentation axes
- axis = list(set([ t.non_index_axes[0][0] for t in tbls ]))[0]
+ axis = list(set([t.non_index_axes[0][0] for t in tbls]))[0]
# concat and return
- return concat(objs, axis = axis, verify_integrity = True)
+ return concat(objs, axis=axis, verify_integrity=True)
def put(self, key, value, table=False, append=False, **kwargs):
"""
@@ -475,11 +479,11 @@ def remove(self, key, where=None, start=None, stop=None):
if not _is_table_type(group):
raise Exception('can only remove with where on objects written as tables')
t = create_table(self, group)
- return t.delete(where = where, start=start, stop=stop)
+ return t.delete(where=where, start=start, stop=stop)
return None
- def append(self, key, value, columns = None, **kwargs):
+ def append(self, key, value, columns=None, **kwargs):
"""
Append to Table in file. Node must already exist and be Table
format.
@@ -507,16 +511,18 @@ def append(self, key, value, columns = None, **kwargs):
self._write_to_group(key, value, table=True, append=True, **kwargs)
- def append_to_multiple(self, d, value, selector, data_columns = None, axes = None, **kwargs):
+ def append_to_multiple(self, d, value, selector, data_columns=None, axes=None, **kwargs):
"""
Append to multiple tables
Parameters
----------
- d : a dict of table_name to table_columns, None is acceptable as the values of one node (this will get all the remaining columns)
+ d : a dict of table_name to table_columns, None is acceptable as the values of
+ one node (this will get all the remaining columns)
value : a pandas object
- selector : a string that designates the indexable table; all of its columns will be designed as data_columns, unless data_columns is passed,
- in which case these are used
+ selector : a string that designates the indexable table; all of its columns will
+            be designated as data_columns, unless data_columns is passed, in which
+ case these are used
Notes
-----
@@ -533,7 +539,7 @@ def append_to_multiple(self, d, value, selector, data_columns = None, axes = Non
            raise Exception("append_to_multiple requires a selector that is in the passed dict")
# figure out the splitting axis (the non_index_axis)
- axis = list(set(range(value.ndim))-set(_AXES_MAP[type(value)]))[0]
+ axis = list(set(range(value.ndim)) - set(_AXES_MAP[type(value)]))[0]
# figure out how to split the value
remain_key = None
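The splitting-axis computation in the hunk above takes the set difference of all axes against the indexable axes. A standalone sketch, where `ndim` and `index_axes` are stand-ins for the real `value.ndim` and `_AXES_MAP[type(value)]`:

```python
# stand-ins for value.ndim and _AXES_MAP[type(value)] for a DataFrame:
# axis 0 is indexable, so the non-index (splitting) axis must be 1
ndim = 2
index_axes = [0]

# same set-difference idiom as in append_to_multiple above
axis = list(set(range(ndim)) - set(index_axes))[0]
```

For a DataFrame this selects the columns axis, which is the one split across tables.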
@@ -547,7 +553,7 @@ def append_to_multiple(self, d, value, selector, data_columns = None, axes = Non
remain_values.extend(v)
if remain_key is not None:
ordered = value.axes[axis]
- ordd = ordered-Index(remain_values)
+ ordd = ordered - Index(remain_values)
ordd = sorted(ordered.get_indexer(ordd))
d[remain_key] = ordered.take(ordd)
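The `remain_key` handling above lets a `None` value in the dict collect every column not claimed by another table. A minimal sketch of that logic; `split_columns` is a hypothetical helper, not part of pandas:

```python
def split_columns(d, all_columns):
    # a None value marks the table that receives the leftover columns
    remain_key = None
    claimed = []
    for k, v in d.items():
        if v is None:
            remain_key = k
        else:
            claimed.extend(v)
    if remain_key is not None:
        # preserve the original column order for the remainder table
        d[remain_key] = [c for c in all_columns if c not in claimed]
    return d
```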
@@ -560,9 +566,9 @@ def append_to_multiple(self, d, value, selector, data_columns = None, axes = Non
dc = data_columns if k == selector else None
# compute the val
- val = value.reindex_axis(v, axis = axis, copy = False)
+ val = value.reindex_axis(v, axis=axis, copy=False)
- self.append(k, val, data_columns = dc, **kwargs)
+ self.append(k, val, data_columns=dc, **kwargs)
def create_table_index(self, key, **kwargs):
""" Create a pytables index on the table
@@ -582,7 +588,8 @@ def create_table_index(self, key, **kwargs):
raise Exception("PyTables >= 2.3 is required for table indexing")
group = self.get_node(key)
- if group is None: return
+ if group is None:
+ return
if not _is_table_type(group):
raise Exception("cannot create table index on a non-table")
@@ -590,14 +597,14 @@ def create_table_index(self, key, **kwargs):
def groups(self):
""" return a list of all the groups (that are not themselves a pandas storage object) """
- return [ g for g in self.handle.walkGroups() if getattr(g._v_attrs,'pandas_type',None) ]
+ return [g for g in self.handle.walkGroups() if getattr(g._v_attrs, 'pandas_type', None)]
def get_node(self, key):
""" return the node with the key or None if it does not exist """
try:
if not key.startswith('/'):
key = '/' + key
- return self.handle.getNode(self.root,key)
+ return self.handle.getNode(self.root, key)
except:
return None
@@ -635,7 +642,7 @@ def _write_to_group(self, key, value, table=False, append=False,
group = self.get_node(new_path)
if group is None:
group = self.handle.createGroup(path, p)
- path = new_path
+ path = new_path
kind = _TYPE_MAP[type(value)]
if table or (append and _is_table_type(group)):
@@ -784,10 +791,10 @@ def _read_wide(self, group, where=None, **kwargs):
def _write_ndim_table(self, group, obj, append=False, axes=None, index=True, **kwargs):
if axes is None:
axes = _AXES_MAP[type(obj)]
- t = create_table(self, group, typ = 'appendable_ndim')
+ t = create_table(self, group, typ='appendable_ndim')
t.write(axes=axes, obj=obj, append=append, **kwargs)
if index:
- t.create_index(columns = index)
+ t.create_index(columns=index)
def _read_ndim_table(self, group, where=None, **kwargs):
t = create_table(self, group, **kwargs)
@@ -797,20 +804,20 @@ def _write_frame_table(self, group, df, append=False, axes=None, index=True, **k
if axes is None:
axes = _AXES_MAP[type(df)]
- t = create_table(self, group, typ = 'appendable_frame' if df.index.nlevels == 1 else 'appendable_multiframe')
+ t = create_table(self, group, typ='appendable_frame' if df.index.nlevels == 1 else 'appendable_multiframe')
t.write(axes=axes, obj=df, append=append, **kwargs)
if index:
- t.create_index(columns = index)
+ t.create_index(columns=index)
_read_frame_table = _read_ndim_table
def _write_wide_table(self, group, panel, append=False, axes=None, index=True, **kwargs):
if axes is None:
axes = _AXES_MAP[type(panel)]
- t = create_table(self, group, typ = 'appendable_panel')
+ t = create_table(self, group, typ='appendable_panel')
t.write(axes=axes, obj=panel, append=append, **kwargs)
if index:
- t.create_index(columns = index)
+ t.create_index(columns=index)
_read_wide_table = _read_ndim_table
@@ -1028,6 +1035,7 @@ def _read_index_legacy(self, group, key):
kind = node._v_attrs.kind
return _unconvert_index_legacy(data, kind)
+
class IndexCol(object):
""" an index column description class
@@ -1041,30 +1049,31 @@ class IndexCol(object):
pos : the position in the pytables
"""
- is_an_indexable = True
+ is_an_indexable = True
is_data_indexable = True
- is_searchable = False
+ is_searchable = False
- def __init__(self, values = None, kind = None, typ = None, cname = None, itemsize = None, name = None, axis = None, kind_attr = None, pos = None, **kwargs):
+ def __init__(self, values=None, kind=None, typ=None, cname=None, itemsize=None,
+ name=None, axis=None, kind_attr=None, pos=None, **kwargs):
self.values = values
- self.kind = kind
- self.typ = typ
+ self.kind = kind
+ self.typ = typ
self.itemsize = itemsize
- self.name = name
- self.cname = cname
+ self.name = name
+ self.cname = cname
self.kind_attr = kind_attr
- self.axis = axis
- self.pos = pos
- self.table = None
+ self.axis = axis
+ self.pos = pos
+ self.table = None
if name is not None:
self.set_name(name, kind_attr)
if pos is not None:
self.set_pos(pos)
- def set_name(self, name, kind_attr = None):
+ def set_name(self, name, kind_attr=None):
""" set the name of this indexer """
- self.name = name
+ self.name = name
self.kind_attr = kind_attr or "%s_kind" % name
if self.cname is None:
self.cname = name
@@ -1073,13 +1082,13 @@ def set_name(self, name, kind_attr = None):
def set_axis(self, axis):
""" set the axis over which I index """
- self.axis = axis
+ self.axis = axis
return self
def set_pos(self, pos):
""" set the position of this column in the Table """
- self.pos = pos
+ self.pos = pos
if pos is not None and self.typ is not None:
self.typ._v_pos = pos
return self
@@ -1089,13 +1098,13 @@ def set_table(self, table):
return self
def __repr__(self):
- return "name->%s,cname->%s,axis->%s,pos->%s,kind->%s" % (self.name,self.cname,self.axis,self.pos,self.kind)
+ return "name->%s,cname->%s,axis->%s,pos->%s,kind->%s" % (self.name, self.cname, self.axis, self.pos, self.kind)
__str__ = __repr__
def __eq__(self, other):
""" compare 2 col items """
- return all([ getattr(self,a,None) == getattr(other,a,None) for a in ['name','cname','axis','pos'] ])
+ return all([getattr(self, a, None) == getattr(other, a, None) for a in ['name', 'cname', 'axis', 'pos']])
def __ne__(self, other):
return not self.__eq__(other)
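The `__eq__`/`__ne__` pair above compares a fixed list of attributes, treating missing ones as `None`. The pattern, reduced to a minimal hypothetical class:

```python
class Col(object):
    """minimal sketch of IndexCol-style attribute equality"""
    _compare = ['name', 'cname', 'axis', 'pos']

    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            setattr(self, k, v)

    def __eq__(self, other):
        # missing attributes compare as None on both sides
        return all(getattr(self, a, None) == getattr(other, a, None)
                   for a in self._compare)

    def __ne__(self, other):
        return not self.__eq__(other)
```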
@@ -1136,7 +1145,7 @@ def description(self):
@property
def col(self):
""" return my current col description """
- return getattr(self.description,self.cname,None)
+ return getattr(self.description, self.cname, None)
@property
def cvalues(self):
@@ -1146,7 +1155,7 @@ def cvalues(self):
def __iter__(self):
return iter(self.values)
- def maybe_set_size(self, min_itemsize = None, **kwargs):
+ def maybe_set_size(self, min_itemsize=None, **kwargs):
""" maybe set a string col itemsize:
            min_itemsize can be an integer or a dict with this column's name mapped to an integer size """
if self.kind == 'string':
@@ -1155,7 +1164,8 @@ def maybe_set_size(self, min_itemsize = None, **kwargs):
min_itemsize = min_itemsize.get(self.name)
if min_itemsize is not None and self.typ.itemsize < min_itemsize:
- self.typ = _tables().StringCol(itemsize = min_itemsize, pos = self.pos)
+ self.typ = _tables(
+ ).StringCol(itemsize=min_itemsize, pos=self.pos)
def validate_and_set(self, table, append, **kwargs):
self.set_table(table)
@@ -1163,11 +1173,11 @@ def validate_and_set(self, table, append, **kwargs):
self.validate_attr(append)
self.set_attr()
- def validate_col(self, itemsize = None):
+ def validate_col(self, itemsize=None):
""" validate this column: return the compared against itemsize """
# validate this column for string truncation (or reset to the max size)
- dtype = getattr(self,'dtype',None)
+ dtype = getattr(self, 'dtype', None)
if self.kind == 'string':
c = self.col
@@ -1175,27 +1185,28 @@ def validate_col(self, itemsize = None):
if itemsize is None:
itemsize = self.itemsize
if c.itemsize < itemsize:
- raise Exception("[%s] column has a min_itemsize of [%s] but itemsize [%s] is required!" % (self.cname,itemsize,c.itemsize))
+ raise Exception("[%s] column has a min_itemsize of [%s] but itemsize [%s] is required!"
+ % (self.cname, itemsize, c.itemsize))
return c.itemsize
return None
-
def validate_attr(self, append):
# check for backwards incompatibility
if append:
- existing_kind = getattr(self.attrs,self.kind_attr,None)
+ existing_kind = getattr(self.attrs, self.kind_attr, None)
if existing_kind is not None and existing_kind != self.kind:
raise TypeError("incompatible kind in col [%s - %s]" %
(existing_kind, self.kind))
def get_attr(self):
        """ get the kind for this column """
- self.kind = getattr(self.attrs,self.kind_attr,None)
+ self.kind = getattr(self.attrs, self.kind_attr, None)
def set_attr(self):
        """ set the kind for this column """
- setattr(self.attrs,self.kind_attr,self.kind)
+ setattr(self.attrs, self.kind_attr, self.kind)
+
class DataCol(IndexCol):
""" a data holding column, by definition this is not indexable
@@ -1206,44 +1217,46 @@ class DataCol(IndexCol):
data : the actual data
    cname : the column name in the table to hold the data (typically values)
"""
- is_an_indexable = False
+ is_an_indexable = False
is_data_indexable = False
- is_searchable = False
+ is_searchable = False
@classmethod
- def create_for_block(cls, i = None, name = None, cname = None, version = None, **kwargs):
+ def create_for_block(cls, i=None, name=None, cname=None, version=None, **kwargs):
""" return a new datacol with the block i """
if cname is None:
cname = name or 'values_block_%d' % i
if name is None:
- name = cname
+ name = cname
- # prior to 0.10.1, we named values blocks like: values_block_0 an the name values_0
+        # prior to 0.10.1, we named values blocks like: values_block_0 and the
+ # name values_0
try:
if version[0] == 0 and version[1] <= 10 and version[2] == 0:
- m = re.search("values_block_(\d+)",name)
+ m = re.search("values_block_(\d+)", name)
if m:
name = "values_%s" % m.groups()[0]
except:
pass
- return cls(name = name, cname = cname, **kwargs)
+ return cls(name=name, cname=cname, **kwargs)
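The 0.10.0 block-name migration in `create_for_block` above can be sketched in isolation; `legacy_name` is a hypothetical helper, and `version` is a `(major, minor, micro)` tuple:

```python
import re

def legacy_name(name, version):
    # files written by 0.10.0 used names like values_0 rather than
    # values_block_0; map the new-style name back for those files only
    if version[0] == 0 and version[1] <= 10 and version[2] == 0:
        m = re.search(r"values_block_(\d+)", name)
        if m:
            return "values_%s" % m.groups()[0]
    return name
```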
- def __init__(self, values = None, kind = None, typ = None, cname = None, data = None, block = None, **kwargs):
- super(DataCol, self).__init__(values = values, kind = kind, typ = typ, cname = cname, **kwargs)
+ def __init__(self, values=None, kind=None, typ=None, cname=None, data=None, block=None, **kwargs):
+ super(DataCol, self).__init__(
+ values=values, kind=kind, typ=typ, cname=cname, **kwargs)
self.dtype = None
self.dtype_attr = "%s_dtype" % self.name
self.set_data(data)
def __repr__(self):
- return "name->%s,cname->%s,dtype->%s,shape->%s" % (self.name,self.cname,self.dtype,self.shape)
+ return "name->%s,cname->%s,dtype->%s,shape->%s" % (self.name, self.cname, self.dtype, self.shape)
def __eq__(self, other):
""" compare 2 col items """
- return all([ getattr(self,a,None) == getattr(other,a,None) for a in ['name','cname','dtype','pos'] ])
+ return all([getattr(self, a, None) == getattr(other, a, None) for a in ['name', 'cname', 'dtype', 'pos']])
- def set_data(self, data, dtype = None):
+ def set_data(self, data, dtype=None):
self.data = data
if data is not None:
if dtype is not None:
@@ -1273,18 +1286,21 @@ def set_kind(self):
def set_atom(self, block, existing_col, min_itemsize, nan_rep, **kwargs):
""" create and setup my atom from the block b """
- self.values = list(block.items)
- dtype = block.dtype.name
+ self.values = list(block.items)
+ dtype = block.dtype.name
inferred_type = lib.infer_dtype(block.values.flatten())
if inferred_type == 'datetime64':
self.set_atom_datetime64(block)
elif inferred_type == 'date':
- raise NotImplementedError("date is not implemented as a table column")
+ raise NotImplementedError(
+ "date is not implemented as a table column")
elif inferred_type == 'unicode':
- raise NotImplementedError("unicode is not implemented as a table column")
+ raise NotImplementedError(
+ "unicode is not implemented as a table column")
- ### this is basically a catchall; if say a datetime64 has nans then will end up here ###
+        # this is basically a catchall; if, say, a datetime64 has nans then it
+        # will end up here
elif inferred_type == 'string' or dtype == 'object':
self.set_atom_string(block, existing_col, min_itemsize, nan_rep)
else:
@@ -1293,7 +1309,7 @@ def set_atom(self, block, existing_col, min_itemsize, nan_rep, **kwargs):
return self
def get_atom_string(self, block, itemsize):
- return _tables().StringCol(itemsize = itemsize, shape = block.shape[0])
+ return _tables().StringCol(itemsize=itemsize, shape=block.shape[0])
def set_atom_string(self, block, existing_col, min_itemsize, nan_rep):
# fill nan items with myself
@@ -1304,8 +1320,9 @@ def set_atom_string(self, block, existing_col, min_itemsize, nan_rep):
# specified min_itemsize?
if isinstance(min_itemsize, dict):
- min_itemsize = int(min_itemsize.get(self.name) or min_itemsize.get('values') or 0)
- itemsize = max(min_itemsize or 0,itemsize)
+ min_itemsize = int(min_itemsize.get(
+ self.name) or min_itemsize.get('values') or 0)
+ itemsize = max(min_itemsize or 0, itemsize)
# check for column in the values conflicts
if existing_col is not None:
@@ -1314,32 +1331,32 @@ def set_atom_string(self, block, existing_col, min_itemsize, nan_rep):
itemsize = eci
self.itemsize = itemsize
- self.kind = 'string'
- self.typ = self.get_atom_string(block, itemsize)
+ self.kind = 'string'
+ self.typ = self.get_atom_string(block, itemsize)
self.set_data(self.convert_string_data(data, itemsize))
def convert_string_data(self, data, itemsize):
return data.astype('S%s' % itemsize)
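`convert_string_data` above pads object-dtype strings into a fixed-width `S<n>` array, with the itemsize driven by the longest string (as in `set_atom_string`). A minimal sketch, assuming NumPy is available:

```python
import numpy as np

data = np.array(['a', 'bb', 'ccc'], dtype=object)

# itemsize is the length of the longest string in the block
itemsize = max(len(s) for s in data)
fixed = data.astype('S%d' % itemsize)
```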
def get_atom_data(self, block):
- return getattr(_tables(),"%sCol" % self.kind.capitalize())(shape = block.shape[0])
+ return getattr(_tables(), "%sCol" % self.kind.capitalize())(shape=block.shape[0])
def set_atom_data(self, block):
- self.kind = block.dtype.name
- self.typ = self.get_atom_data(block)
+ self.kind = block.dtype.name
+ self.typ = self.get_atom_data(block)
self.set_data(block.values.astype(self.typ._deftype))
def get_atom_datetime64(self, block):
- return _tables().Int64Col(shape = block.shape[0])
+ return _tables().Int64Col(shape=block.shape[0])
def set_atom_datetime64(self, block):
- self.kind = 'datetime64'
- self.typ = self.get_atom_datetime64(block)
- self.set_data(block.values.view('i8'),'datetime64')
+ self.kind = 'datetime64'
+ self.typ = self.get_atom_datetime64(block)
+ self.set_data(block.values.view('i8'), 'datetime64')
@property
def shape(self):
- return getattr(self.data,'shape',None)
+ return getattr(self.data, 'shape', None)
@property
def cvalues(self):
@@ -1351,13 +1368,13 @@ def validate_attr(self, append):
if append:
existing_fields = getattr(self.attrs, self.kind_attr, None)
if (existing_fields is not None and
- existing_fields != list(self.values)):
+ existing_fields != list(self.values)):
raise Exception("appended items do not match existing items"
" in table!")
existing_dtype = getattr(self.attrs, self.dtype_attr, None)
if (existing_dtype is not None and
- existing_dtype != self.dtype):
+ existing_dtype != self.dtype):
raise Exception("appended items dtype do not match existing items dtype"
" in table!")
@@ -1376,10 +1393,12 @@ def convert(self, values, nan_rep):
if self.dtype == 'datetime64':
self.data = np.asarray(self.data, dtype='M8[ns]')
elif self.dtype == 'date':
- self.data = np.array([date.fromtimestamp(v) for v in self.data], dtype=object)
+ self.data = np.array(
+ [date.fromtimestamp(v) for v in self.data], dtype=object)
elif self.dtype == 'datetime':
- self.data = np.array([datetime.fromtimestamp(v) for v in self.data],
- dtype=object)
+ self.data = np.array(
+ [datetime.fromtimestamp(v) for v in self.data],
+ dtype=object)
else:
try:
@@ -1389,20 +1408,22 @@ def convert(self, values, nan_rep):
# convert nans
if self.kind == 'string':
- self.data = lib.array_replace_from_nan_rep(self.data.flatten(), nan_rep).reshape(self.data.shape)
+ self.data = lib.array_replace_from_nan_rep(
+ self.data.flatten(), nan_rep).reshape(self.data.shape)
return self
def get_attr(self):
        """ get the data for this column """
- self.values = getattr(self.attrs,self.kind_attr,None)
- self.dtype = getattr(self.attrs,self.dtype_attr,None)
+ self.values = getattr(self.attrs, self.kind_attr, None)
+ self.dtype = getattr(self.attrs, self.dtype_attr, None)
self.set_kind()
def set_attr(self):
        """ set the data for this column """
- setattr(self.attrs,self.kind_attr,self.values)
+ setattr(self.attrs, self.kind_attr, self.values)
if self.dtype is not None:
- setattr(self.attrs,self.dtype_attr,self.dtype)
+ setattr(self.attrs, self.dtype_attr, self.dtype)
+
class DataIndexableCol(DataCol):
""" represent a data column that can be indexed """
@@ -1413,14 +1434,15 @@ def is_searchable(self):
return self.kind == 'string'
def get_atom_string(self, block, itemsize):
- return _tables().StringCol(itemsize = itemsize)
+ return _tables().StringCol(itemsize=itemsize)
def get_atom_data(self, block):
- return getattr(_tables(),"%sCol" % self.kind.capitalize())()
+ return getattr(_tables(), "%sCol" % self.kind.capitalize())()
def get_atom_datetime64(self, block):
return _tables().Int64Col()
+
class Table(object):
""" represent a table:
facilitate read/write of various types of tables
@@ -1446,29 +1468,29 @@ class Table(object):
"""
table_type = None
- obj_type = None
- ndim = None
- levels = 1
+ obj_type = None
+ ndim = None
+ levels = 1
def __init__(self, parent, group, **kwargs):
- self.parent = parent
- self.group = group
+ self.parent = parent
+ self.group = group
# compute our version
- version = getattr(group._v_attrs,'pandas_version',None)
+ version = getattr(group._v_attrs, 'pandas_version', None)
try:
- self.version = tuple([ int(x) for x in version.split('.') ])
+ self.version = tuple([int(x) for x in version.split('.')])
if len(self.version) == 2:
self.version = self.version + (0,)
except:
- self.version = (0,0,0)
+ self.version = (0, 0, 0)
- self.index_axes = []
+ self.index_axes = []
self.non_index_axes = []
- self.values_axes = []
- self.data_columns = []
- self.nan_rep = None
- self.selection = None
+ self.values_axes = []
+ self.data_columns = []
+ self.nan_rep = None
+ self.selection = None
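The version parsing in `Table.__init__` above pads two-part versions to three and falls back to `(0, 0, 0)` on anything unparseable. A standalone sketch as a hypothetical helper:

```python
def parse_version(version):
    # "0.10" -> (0, 10, 0); missing or malformed input -> (0, 0, 0)
    try:
        v = tuple(int(x) for x in version.split('.'))
        if len(v) == 2:
            v = v + (0,)
        return v
    except Exception:
        return (0, 0, 0)
```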
@property
def table_type_short(self):
@@ -1476,17 +1498,18 @@ def table_type_short(self):
@property
def pandas_type(self):
- return getattr(self.group._v_attrs,'pandas_type',None)
+ return getattr(self.group._v_attrs, 'pandas_type', None)
def __repr__(self):
        """ return a pretty representation of myself """
self.infer_axes()
- dc = ",dc->[%s]" % ','.join(self.data_columns) if len(self.data_columns) else ''
+ dc = ",dc->[%s]" % ','.join(
+ self.data_columns) if len(self.data_columns) else ''
return "%s (typ->%s,nrows->%s,indexers->[%s]%s)" % (self.pandas_type,
- self.table_type_short,
- self.nrows,
- ','.join([ a.name for a in self.index_axes ]),
- dc)
+ self.table_type_short,
+ self.nrows,
+ ','.join([a.name for a in self.index_axes]),
+ dc)
__str__ = __repr__
@@ -1496,24 +1519,26 @@ def copy(self):
def validate(self, other):
""" validate against an existing table """
- if other is None: return
+ if other is None:
+ return
if other.table_type != self.table_type:
raise TypeError("incompatible table_type with existing [%s - %s]" %
(other.table_type, self.table_type))
- for c in ['index_axes','non_index_axes','values_axes']:
- if getattr(self,c,None) != getattr(other,c,None):
- raise Exception("invalid combinate of [%s] on appending data [%s] vs current table [%s]" % (c,getattr(self,c,None),getattr(other,c,None)))
+ for c in ['index_axes', 'non_index_axes', 'values_axes']:
+ if getattr(self, c, None) != getattr(other, c, None):
+                raise Exception("invalid combination of [%s] on appending data [%s] vs current table [%s]"
+ % (c, getattr(self, c, None), getattr(other, c, None)))
@property
def nrows(self):
- return getattr(self.table,'nrows',None)
+ return getattr(self.table, 'nrows', None)
@property
def nrows_expected(self):
""" based on our axes, compute the expected nrows """
- return np.prod([ i.cvalues.shape[0] for i in self.index_axes ])
+ return np.prod([i.cvalues.shape[0] for i in self.index_axes])
@property
def table(self):
@@ -1563,66 +1588,69 @@ def is_transposed(self):
@property
def data_orientation(self):
        """ return a tuple of my permuted axes, non_indexable at the front """
- return tuple(itertools.chain([ a[0] for a in self.non_index_axes ], [ a.axis for a in self.index_axes ]))
+ return tuple(itertools.chain([a[0] for a in self.non_index_axes], [a.axis for a in self.index_axes]))
def queryables(self):
""" return a dict of the kinds allowable columns for this object """
# compute the values_axes queryables
- return dict([ (a.cname,a.kind) for a in self.index_axes ] +
- [ (self.obj_type._AXIS_NAMES[axis],None) for axis, values in self.non_index_axes ] +
- [ (v.cname,v.kind) for v in self.values_axes if v.name in set(self.data_columns) ]
+ return dict([(a.cname, a.kind) for a in self.index_axes] +
+ [(self.obj_type._AXIS_NAMES[axis], None) for axis, values in self.non_index_axes] +
+ [(v.cname, v.kind) for v in self.values_axes if v.name in set(self.data_columns)]
)
def index_cols(self):
""" return a list of my index cols """
- return [ (i.axis,i.cname) for i in self.index_axes ]
+ return [(i.axis, i.cname) for i in self.index_axes]
def values_cols(self):
""" return a list of my values cols """
- return [ i.cname for i in self.values_axes ]
+ return [i.cname for i in self.values_axes]
def set_attrs(self):
""" set our table type & indexables """
- self.attrs.table_type = self.table_type
- self.attrs.index_cols = self.index_cols()
+ self.attrs.table_type = self.table_type
+ self.attrs.index_cols = self.index_cols()
self.attrs.values_cols = self.values_cols()
self.attrs.non_index_axes = self.non_index_axes
- self.attrs.data_columns = self.data_columns
- self.attrs.nan_rep = self.nan_rep
- self.attrs.levels = self.levels
+ self.attrs.data_columns = self.data_columns
+ self.attrs.nan_rep = self.nan_rep
+ self.attrs.levels = self.levels
- def validate_version(self, where = None):
+ def validate_version(self, where=None):
""" are we trying to operate on an old version? """
if where is not None:
if self.version[0] <= 0 and self.version[1] <= 10 and self.version[2] < 1:
- warnings.warn("where criteria is being ignored as we this version is too old (or not-defined) [%s]" % '.'.join([ str(x) for x in self.version ]), IncompatibilityWarning)
+                warnings.warn("where criteria is being ignored as this version is too old (or not-defined) [%s]"
+ % '.'.join([str(x) for x in self.version]), IncompatibilityWarning)
@property
def indexables(self):
""" create/cache the indexables if they don't exist """
if self._indexables is None:
- d = self.description
+ d = self.description
self._indexables = []
# index columns
- self._indexables.extend([ IndexCol(name = name, axis = axis, pos = i) for i, (axis, name) in enumerate(self.attrs.index_cols) ])
+ self._indexables.extend([IndexCol(name=name, axis=axis, pos=i) for i, (axis, name) in enumerate(self.attrs.index_cols)])
# values columns
dc = set(self.data_columns)
base_pos = len(self._indexables)
+
def f(i, c):
klass = DataCol
if c in dc:
klass = DataIndexableCol
- return klass.create_for_block(i = i, name = c, pos = base_pos + i, version = self.version)
+ return klass.create_for_block(i=i, name=c, pos=base_pos + i, version=self.version)
- self._indexables.extend([ f(i,c) for i, c in enumerate(self.attrs.values_cols) ])
+ self._indexables.extend(
+ [f(i, c) for i, c in enumerate(self.attrs.values_cols)])
return self._indexables
- def create_index(self, columns = None, optlevel = None, kind = None):
+ def create_index(self, columns=None, optlevel=None, kind=None):
"""
Create a pytables index on the specified columns
note: cannot index Time64Col() currently; PyTables must be >= 2.3
@@ -1640,31 +1668,33 @@ def create_index(self, columns = None, optlevel = None, kind = None):
"""
- if not self.infer_axes(): return
- if columns is False: return
+ if not self.infer_axes():
+ return
+ if columns is False:
+ return
# index all indexables and data_columns
if columns is None or columns is True:
- columns = [ a.cname for a in self.axes if a.is_data_indexable ]
- if not isinstance(columns, (tuple,list)):
- columns = [ columns ]
+ columns = [a.cname for a in self.axes if a.is_data_indexable]
+ if not isinstance(columns, (tuple, list)):
+ columns = [columns]
kw = dict()
if optlevel is not None:
kw['optlevel'] = optlevel
if kind is not None:
- kw['kind'] = kind
+ kw['kind'] = kind
table = self.table
for c in columns:
- v = getattr(table.cols,c,None)
+ v = getattr(table.cols, c, None)
if v is not None:
# remove the index if the kind/optlevel have changed
if v.is_indexed:
index = v.index
cur_optlevel = index.optlevel
- cur_kind = index.kind
+ cur_kind = index.kind
if kind is not None and cur_kind != kind:
v.removeIndex()
@@ -1687,15 +1717,16 @@ def read_axes(self, where, **kwargs):
self.validate_version(where)
# infer the data kind
- if not self.infer_axes(): return False
+ if not self.infer_axes():
+ return False
# create the selection
- self.selection = Selection(self, where = where, **kwargs)
+ self.selection = Selection(self, where=where, **kwargs)
values = self.selection.select()
# convert the data
for a in self.axes:
- a.convert(values, nan_rep = self.nan_rep)
+ a.convert(values, nan_rep=self.nan_rep)
return True
@@ -1707,19 +1738,24 @@ def infer_axes(self):
if table is None:
return False
- self.non_index_axes = getattr(self.attrs,'non_index_axes',None) or []
- self.data_columns = getattr(self.attrs,'data_columns',None) or []
- self.nan_rep = getattr(self.attrs,'nan_rep',None)
- self.levels = getattr(self.attrs,'levels',None) or []
- self.index_axes = [ a.infer(self.table) for a in self.indexables if a.is_an_indexable ]
- self.values_axes = [ a.infer(self.table) for a in self.indexables if not a.is_an_indexable ]
+ self.non_index_axes = getattr(
+ self.attrs, 'non_index_axes', None) or []
+ self.data_columns = getattr(
+ self.attrs, 'data_columns', None) or []
+ self.nan_rep = getattr(self.attrs, 'nan_rep', None)
+ self.levels = getattr(
+ self.attrs, 'levels', None) or []
+ self.index_axes = [a.infer(
+ self.table) for a in self.indexables if a.is_an_indexable]
+ self.values_axes = [a.infer(
+ self.table) for a in self.indexables if not a.is_an_indexable]
return True
def get_object(self, obj):
""" return the data for this obj """
return obj
- def create_axes(self, axes, obj, validate = True, nan_rep = None, data_columns = None, min_itemsize = None, **kwargs):
+ def create_axes(self, axes, obj, validate=True, nan_rep=None, data_columns=None, min_itemsize=None, **kwargs):
""" create and return the axes
            legacy tables create an indexable column, indexable index, non-indexable fields
@@ -1735,24 +1771,24 @@ def create_axes(self, axes, obj, validate = True, nan_rep = None, data_columns =
"""
# map axes to numbers
- axes = [ obj._get_axis_number(a) for a in axes ]
+ axes = [obj._get_axis_number(a) for a in axes]
# do we have an existing table (if so, use its axes & data_columns)
if self.infer_axes():
existing_table = self.copy()
- axes = [ a.axis for a in existing_table.index_axes]
+ axes = [a.axis for a in existing_table.index_axes]
data_columns = existing_table.data_columns
- nan_rep = existing_table.nan_rep
+ nan_rep = existing_table.nan_rep
else:
existing_table = None
        # currently only support ndim-1 axes
- if len(axes) != self.ndim-1:
+ if len(axes) != self.ndim - 1:
            raise Exception("currently only support ndim-1 indexers in an AppendableTable")
# create according to the new data
- self.non_index_axes = []
- self.data_columns = []
+ self.non_index_axes = []
+ self.data_columns = []
# nan_representation
if nan_rep is None:
@@ -1771,10 +1807,12 @@ def create_axes(self, axes, obj, validate = True, nan_rep = None, data_columns =
if i in axes:
name = obj._AXIS_NAMES[i]
- index_axes_map[i] = _convert_index(a).set_name(name).set_axis(i)
+ index_axes_map[i] = _convert_index(
+ a).set_name(name).set_axis(i)
else:
- # we might be able to change the axes on the appending data if necessary
+ # we might be able to change the axes on the appending data if
+ # necessary
append_axis = list(a)
if existing_table is not None:
indexer = len(self.non_index_axes)
@@ -1785,33 +1823,36 @@ def create_axes(self, axes, obj, validate = True, nan_rep = None, data_columns =
if sorted(append_axis) == sorted(exist_axis):
append_axis = exist_axis
- self.non_index_axes.append((i,append_axis))
+ self.non_index_axes.append((i, append_axis))
# set axis positions (based on the axes)
- self.index_axes = [ index_axes_map[a].set_pos(j) for j, a in enumerate(axes) ]
+ self.index_axes = [index_axes_map[a].set_pos(j) for j,
+ a in enumerate(axes)]
j = len(self.index_axes)
# check for column conflicts
if validate:
for a in self.axes:
- a.maybe_set_size(min_itemsize = min_itemsize)
+ a.maybe_set_size(min_itemsize=min_itemsize)
# reindex by our non_index_axes & compute data_columns
for a in self.non_index_axes:
- obj = obj.reindex_axis(a[1], axis = a[0], copy = False)
+ obj = obj.reindex_axis(a[1], axis=a[0], copy=False)
# get out blocks
block_obj = self.get_object(obj)
- blocks = None
+ blocks = None
if data_columns is not None and len(self.non_index_axes):
- axis = self.non_index_axes[0][0]
- axis_labels = self.non_index_axes[0][1]
- data_columns = [ c for c in data_columns if c in axis_labels ]
+ axis = self.non_index_axes[0][0]
+ axis_labels = self.non_index_axes[0][1]
+ data_columns = [c for c in data_columns if c in axis_labels]
if len(data_columns):
- blocks = block_obj.reindex_axis(Index(axis_labels)-Index(data_columns), axis = axis, copy = False)._data.blocks
+ blocks = block_obj.reindex_axis(Index(axis_labels) - Index(
+ data_columns), axis=axis, copy=False)._data.blocks
for c in data_columns:
- blocks.extend(block_obj.reindex_axis([ c ], axis = axis, copy = False)._data.blocks)
+ blocks.extend(block_obj.reindex_axis(
+ [c], axis=axis, copy=False)._data.blocks)
if blocks is None:
blocks = block_obj._data.blocks
@@ -1821,23 +1862,25 @@ def create_axes(self, axes, obj, validate = True, nan_rep = None, data_columns =
for i, b in enumerate(blocks):
# shape of the data column are the indexable axes
- klass = DataCol
- name = None
+ klass = DataCol
+ name = None
# we have a data_column
if data_columns and len(b.items) == 1 and b.items[0] in data_columns:
klass = DataIndexableCol
- name = b.items[0]
+ name = b.items[0]
self.data_columns.append(name)
try:
- existing_col = existing_table.values_axes[i] if existing_table is not None and validate else None
-
- col = klass.create_for_block(i = i, name = name, version = self.version)
- col.set_atom(block = b,
- existing_col = existing_col,
- min_itemsize = min_itemsize,
- nan_rep = nan_rep,
+ existing_col = existing_table.values_axes[
+ i] if existing_table is not None and validate else None
+
+ col = klass.create_for_block(
+ i=i, name=name, version=self.version)
+ col.set_atom(block=b,
+ existing_col=existing_col,
+ min_itemsize=min_itemsize,
+ nan_rep=nan_rep,
**kwargs)
col.set_pos(j)
@@ -1845,7 +1888,7 @@ def create_axes(self, axes, obj, validate = True, nan_rep = None, data_columns =
except (NotImplementedError):
raise
except (Exception), detail:
- raise Exception("cannot find the correct atom type -> [dtype->%s] %s" % (b.dtype.name,str(detail)))
+ raise Exception("cannot find the correct atom type -> [dtype->%s] %s" % (b.dtype.name, str(detail)))
j += 1
# validate the axes if we have an existing table
@@ -1856,40 +1899,41 @@ def process_axes(self, obj, columns=None):
""" process axes filters """
# reorder by any non_index_axes & limit to the select columns
- for axis,labels in self.non_index_axes:
+ for axis, labels in self.non_index_axes:
if columns is not None:
labels = Index(labels) & Index(columns)
- obj = obj.reindex_axis(labels,axis=axis,copy=False)
+ obj = obj.reindex_axis(labels, axis=axis, copy=False)
def reindex(obj, axis, filt, ordered):
ordd = ordered & filt
ordd = sorted(ordered.get_indexer(ordd))
- return obj.reindex_axis(ordered.take(ordd), axis = obj._get_axis_number(axis), copy = False)
+ return obj.reindex_axis(ordered.take(ordd), axis=obj._get_axis_number(axis), copy=False)
# apply the selection filters (but keep in the same order)
if self.selection.filter:
for axis, filt in self.selection.filter:
- obj = reindex(obj, axis, filt, getattr(obj,obj._get_axis_name(axis)))
+ obj = reindex(
+ obj, axis, filt, getattr(obj, obj._get_axis_name(axis)))
return obj
- def create_description(self, complib = None, complevel = None, fletcher32 = False, expectedrows = None):
+ def create_description(self, complib=None, complevel=None, fletcher32=False, expectedrows=None):
""" create the description of the table from the axes & values """
# expected rows estimate
if expectedrows is None:
- expectedrows = max(self.nrows_expected,10000)
- d = dict( name = 'table', expectedrows = expectedrows )
+ expectedrows = max(self.nrows_expected, 10000)
+ d = dict(name='table', expectedrows=expectedrows)
# description from the axes & values
- d['description'] = dict([ (a.cname,a.typ) for a in self.axes ])
+ d['description'] = dict([(a.cname, a.typ) for a in self.axes])
if complib:
if complevel is None:
complevel = self.complevel or 9
- filters = _tables().Filters(complevel = complevel,
- complib = complib,
- fletcher32 = fletcher32 or self.fletcher32)
+ filters = _tables().Filters(complevel=complevel,
+ complib=complib,
+ fletcher32=fletcher32 or self.fletcher32)
d['filters'] = filters
elif self.filters is not None:
d['filters'] = self.filters
@@ -1897,7 +1941,8 @@ def create_description(self, complib = None, complevel = None, fletcher32 = Fals
return d
def read(self, **kwargs):
- raise NotImplementedError("cannot read on an abstract table: subclasses should implement")
+ raise NotImplementedError(
+ "cannot read on an abstract table: subclasses should implement")
def read_coordinates(self, where=None, **kwargs):
""" select coordinates (row numbers) from a table; return the coordinates object """
@@ -1906,11 +1951,12 @@ def read_coordinates(self, where=None, **kwargs):
self.validate_version(where)
# infer the data kind
- if not self.infer_axes(): return False
+ if not self.infer_axes():
+ return False
# create the selection
- self.selection = Selection(self, where = where, **kwargs)
- return Coordinates(self.selection.select_coords(), group = self.group, where = where)
+ self.selection = Selection(self, where=where, **kwargs)
+ return Coordinates(self.selection.select_coords(), group=self.group, where=where)
def read_column(self, column, **kwargs):
""" return a single column from the table, generally only indexables are interesting """
@@ -1919,7 +1965,8 @@ def read_column(self, column, **kwargs):
self.validate_version()
# infer the data kind
- if not self.infer_axes(): return False
+ if not self.infer_axes():
+ return False
# find the axes
for a in self.axes:
@@ -1929,15 +1976,15 @@ def read_column(self, column, **kwargs):
raise ValueError("column [%s] can not be extracted individually; it is not data indexable" % column)
# column must be an indexable or a data column
- c = getattr(self.table.cols,column)
- return Categorical.from_array(a.convert(c[:], nan_rep = self.nan_rep).take_data()).levels
+ c = getattr(self.table.cols, column)
+ return Categorical.from_array(a.convert(c[:], nan_rep=self.nan_rep).take_data()).levels
raise KeyError("column [%s] not found in the table" % column)
def write(self, **kwargs):
raise NotImplementedError("cannot write on an abstract table")
- def delete(self, where = None, **kwargs):
+ def delete(self, where=None, **kwargs):
""" support fully deleting the node in its entirety (only) - where specification must be None """
if where is None:
self.handle.removeNode(self.group, recursive=True)
@@ -1945,6 +1992,7 @@ def delete(self, where = None, **kwargs):
raise NotImplementedError("cannot delete on an abstract table")
+
class WORMTable(Table):
""" a write-once read-many table: this format DOES NOT ALLOW appending to a
table. writing is a one-time operation the data are stored in a format
@@ -1963,6 +2011,7 @@ def write(self, **kwargs):
(e.g. a CArray) create an indexing table so that we can search"""
raise NotImplementedError("WORKTable needs to implement write")
+
class LegacyTable(Table):
""" an appendable table: allow append/query/delete operations to a
    (possibly) already existing appendable table; this table ALLOWS
@@ -1970,11 +2019,12 @@ class LegacyTable(Table):
that can be easily searched
"""
- _indexables = [IndexCol(name = 'index', axis = 1, pos = 0),
- IndexCol(name = 'column', axis = 2, pos = 1, index_kind = 'columns_kind'),
- DataCol( name = 'fields', cname = 'values', kind_attr = 'fields', pos = 2) ]
+ _indexables = [IndexCol(name='index', axis=1, pos=0),
+ IndexCol(name='column', axis=2,
+ pos=1, index_kind='columns_kind'),
+ DataCol(name='fields', cname='values', kind_attr='fields', pos=2)]
table_type = 'legacy'
- ndim = 3
+ ndim = 3
def write(self, **kwargs):
raise Exception("write operations are not allowed on legacy tables!")
@@ -1982,21 +2032,22 @@ def write(self, **kwargs):
def read(self, where=None, columns=None, **kwargs):
""" we have n indexable columns, with an arbitrary number of data axes """
+ if not self.read_axes(where=where, **kwargs):
+ return None
- if not self.read_axes(where=where, **kwargs): return None
-
- factors = [ Categorical.from_array(a.values) for a in self.index_axes ]
- levels = [ f.levels for f in factors ]
- N = [ len(f.levels) for f in factors ]
- labels = [ f.labels for f in factors ]
+ factors = [Categorical.from_array(a.values) for a in self.index_axes]
+ levels = [f.levels for f in factors]
+ N = [len(f.levels) for f in factors]
+ labels = [f.labels for f in factors]
# compute the key
- key = factor_indexer(N[1:], labels)
+ key = factor_indexer(N[1:], labels)
objs = []
if len(unique(key)) == len(key):
- sorter, _ = algos.groupsort_indexer(com._ensure_int64(key), np.prod(N))
+ sorter, _ = algos.groupsort_indexer(
+ com._ensure_int64(key), np.prod(N))
sorter = com._ensure_platform_int(sorter)
# create the objs
@@ -2005,9 +2056,10 @@ def read(self, where=None, columns=None, **kwargs):
# the data need to be sorted
sorted_values = c.take_data().take(sorter, axis=0)
- take_labels = [ l.take(sorter) for l in labels ]
- items = Index(c.values)
- block = block2d_to_blocknd(sorted_values, items, tuple(N), take_labels)
+ take_labels = [l.take(sorter) for l in labels]
+ items = Index(c.values)
+ block = block2d_to_blocknd(
+ sorted_values, items, tuple(N), take_labels)
# create the object
mgr = BlockManager([block], [items] + levels)
@@ -2015,7 +2067,8 @@ def read(self, where=None, columns=None, **kwargs):
# permute if needed
if self.is_transposed:
- obj = obj.transpose(*tuple(Series(self.data_orientation).argsort()))
+ obj = obj.transpose(
+ *tuple(Series(self.data_orientation).argsort()))
objs.append(obj)
@@ -2025,8 +2078,8 @@ def read(self, where=None, columns=None, **kwargs):
'appended')
# reconstruct
- long_index = MultiIndex.from_arrays([ i.values for i in self.index_axes ])
-
+ long_index = MultiIndex.from_arrays(
+ [i.values for i in self.index_axes])
for c in self.values_axes:
lp = DataFrame(c.data, index=long_index, columns=c.values)
@@ -2050,24 +2103,28 @@ def read(self, where=None, columns=None, **kwargs):
if len(objs) == 1:
wp = objs[0]
else:
- wp = concat(objs, axis = 0, verify_integrity = True)
+ wp = concat(objs, axis=0, verify_integrity=True)
# apply the selection filters & axis orderings
wp = self.process_axes(wp, columns=columns)
return wp
+
class LegacyFrameTable(LegacyTable):
""" support the legacy frame table """
table_type = 'legacy_frame'
- obj_type = Panel
+ obj_type = Panel
+
def read(self, *args, **kwargs):
return super(LegacyFrameTable, self).read(*args, **kwargs)['value']
+
class LegacyPanelTable(LegacyTable):
""" support the legacy panel table """
table_type = 'legacy_panel'
- obj_type = Panel
+ obj_type = Panel
+
class AppendableTable(LegacyTable):
""" suppor the new appendable table formats """
@@ -2075,8 +2132,8 @@ class AppendableTable(LegacyTable):
table_type = 'appendable'
def write(self, axes, obj, append=False, complib=None,
- complevel=None, fletcher32=None, min_itemsize = None, chunksize = 50000,
- expectedrows = None, **kwargs):
+ complevel=None, fletcher32=None, min_itemsize=None, chunksize=50000,
+ expectedrows=None, **kwargs):
# create the table if it doesn't exist (or get it if it does)
if not append:
@@ -2084,15 +2141,16 @@ def write(self, axes, obj, append=False, complib=None,
self.handle.removeNode(self.group, 'table')
# create the axes
- self.create_axes(axes = axes, obj = obj, validate = append, min_itemsize = min_itemsize, **kwargs)
+ self.create_axes(axes=axes, obj=obj, validate=append,
+ min_itemsize=min_itemsize, **kwargs)
if 'table' not in self.group:
# create the table
- options = self.create_description(complib = complib,
- complevel = complevel,
- fletcher32 = fletcher32,
- expectedrows = expectedrows)
+ options = self.create_description(complib=complib,
+ complevel=complevel,
+ fletcher32=fletcher32,
+ expectedrows=expectedrows)
# set the table attributes
self.set_attrs()
@@ -2114,10 +2172,11 @@ def write_data(self, chunksize):
""" fast writing of data: requires specific cython routines each axis shape """
# create the masks & values
- masks = []
+ masks = []
for a in self.values_axes:
- # figure the mask: only do if we can successfully process this column, otherwise ignore the mask
+ # figure the mask: only do if we can successfully process this
+ # column, otherwise ignore the mask
mask = com.isnull(a.data).all(axis=0)
masks.append(mask.astype('u1'))
@@ -2127,29 +2186,31 @@ def write_data(self, chunksize):
m = mask & m
# the arguments
- indexes = [ a.cvalues for a in self.index_axes ]
- search = np.array([ a.is_searchable for a in self.values_axes ]).astype('u1')
- values = [ a.take_data() for a in self.values_axes ]
+ indexes = [a.cvalues for a in self.index_axes]
+ search = np.array(
+ [a.is_searchable for a in self.values_axes]).astype('u1')
+ values = [a.take_data() for a in self.values_axes]
# write the chunks
- rows = self.nrows_expected
+ rows = self.nrows_expected
chunks = int(rows / chunksize) + 1
for i in xrange(chunks):
- start_i = i*chunksize
- end_i = min((i+1)*chunksize,rows)
+ start_i = i * chunksize
+ end_i = min((i + 1) * chunksize, rows)
- self.write_data_chunk(indexes = [ a[start_i:end_i] for a in indexes ],
- mask = mask[start_i:end_i],
- search = search,
- values = [ v[:,start_i:end_i] for v in values ])
+ self.write_data_chunk(
+ indexes=[a[start_i:end_i] for a in indexes],
+ mask=mask[start_i:end_i],
+ search=search,
+ values=[v[:, start_i:end_i] for v in values])
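The chunked write loop above derives each slice from `nrows_expected` and `chunksize`. A minimal sketch of that bound computation (the helper name is illustrative, not part of pandas):

```python
def chunk_bounds(rows, chunksize):
    """Yield (start, end) half-open slices covering `rows` items,
    mirroring the start_i/end_i arithmetic in write_data above.
    Illustrative sketch, not pandas code."""
    chunks = int(rows / chunksize) + 1
    for i in range(chunks):
        start = i * chunksize
        end = min((i + 1) * chunksize, rows)
        if start < end:  # skip the empty trailing chunk
            yield start, end
```

The `+ 1` guarantees a final partial chunk is covered; the guard drops the empty slice produced when `rows` divides evenly by `chunksize`.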
def write_data_chunk(self, indexes, mask, search, values):
# get our function
try:
- func = getattr(lib,"create_hdf_rows_%sd" % self.ndim)
+ func = getattr(lib, "create_hdf_rows_%sd" % self.ndim)
args = list(indexes)
- args.extend([ mask, search, values ])
+ args.extend([mask, search, values])
rows = func(*args)
except (Exception), detail:
raise Exception("cannot create row-data -> %s" % str(detail))
@@ -2159,10 +2220,12 @@ def write_data_chunk(self, indexes, mask, search, values):
self.table.append(rows)
self.table.flush()
except (Exception), detail:
- import pdb; pdb.set_trace()
- raise Exception("tables cannot write this data -> %s" % str(detail))
+ raise Exception(
+ "tables cannot write this data -> %s" % str(detail))
- def delete(self, where = None, **kwargs):
+ def delete(self, where=None, **kwargs):
# delete all rows (and return the nrows)
if where is None or not len(where):
@@ -2171,7 +2234,8 @@ def delete(self, where = None, **kwargs):
return nrows
# infer the data kind
- if not self.infer_axes(): return None
+ if not self.infer_axes():
+ return None
# create the selection
table = self.table
@@ -2179,14 +2243,14 @@ def delete(self, where = None, **kwargs):
values = self.selection.select_coords()
# delete the rows in reverse order
- l = Series(values).order()
+ l = Series(values).order()
ln = len(l)
if ln:
# construct groups of consecutive rows
- diff = l.diff()
- groups = list(diff[diff>1].index)
+ diff = l.diff()
+ groups = list(diff[diff > 1].index)
# 1 group
if not len(groups):
@@ -2198,13 +2262,14 @@ def delete(self, where = None, **kwargs):
# initial element
if groups[0] != 0:
- groups.insert(0,0)
+ groups.insert(0, 0)
# we must remove in reverse order!
pg = groups.pop()
for g in reversed(groups):
- rows = l.take(range(g,pg))
- table.removeRows(start = rows[rows.index[0]], stop = rows[rows.index[-1]]+1)
+ rows = l.take(range(g, pg))
+ table.removeRows(start=rows[rows.index[0]
+ ], stop=rows[rows.index[-1]] + 1)
pg = g
self.table.flush()
@@ -2212,11 +2277,12 @@ def delete(self, where = None, **kwargs):
# return the number of rows removed
return ln
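The `delete()` path above removes rows in reverse order, batching runs of consecutive coordinates found where `diff > 1`. A standalone sketch of that grouping (hypothetical helper; assumes the input row numbers are already sorted):

```python
def consecutive_groups(rows):
    """Split sorted row numbers into runs of consecutive values,
    mirroring the diff > 1 grouping in delete() above (a sketch)."""
    groups = [0]
    for i in range(1, len(rows)):
        if rows[i] - rows[i - 1] > 1:
            groups.append(i)  # a gap starts a new run
    # pair each run start with the next run start (or the end)
    bounds = zip(groups, groups[1:] + [len(rows)])
    return [rows[s:e] for s, e in bounds]
```

Each run can then be removed with a single `removeRows(start, stop)` call instead of one call per row.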
+
class AppendableFrameTable(AppendableTable):
""" suppor the new appendable table formats """
table_type = 'appendable_frame'
- ndim = 2
- obj_type = DataFrame
+ ndim = 2
+ obj_type = DataFrame
@property
def is_transposed(self):
@@ -2230,70 +2296,72 @@ def get_object(self, obj):
def read(self, where=None, columns=None, **kwargs):
- if not self.read_axes(where=where, **kwargs): return None
+ if not self.read_axes(where=where, **kwargs):
+ return None
- index = self.index_axes[0].values
- frames = []
+ index = self.index_axes[0].values
+ frames = []
for a in self.values_axes:
cols = Index(a.values)
if self.is_transposed:
- values = a.cvalues
- index_ = cols
- cols_ = Index(index)
+ values = a.cvalues
+ index_ = cols
+ cols_ = Index(index)
else:
- values = a.cvalues.T
- index_ = Index(index)
- cols_ = cols
-
+ values = a.cvalues.T
+ index_ = Index(index)
+ cols_ = cols
# if we have a DataIndexableCol, its shape will only be 1 dim
if values.ndim == 1:
- values = values.reshape(1,values.shape[0])
+ values = values.reshape(1, values.shape[0])
- block = make_block(values, cols_, cols_)
- mgr = BlockManager([ block ], [ cols_, index_ ])
+ block = make_block(values, cols_, cols_)
+ mgr = BlockManager([block], [cols_, index_])
frames.append(DataFrame(mgr))
if len(frames) == 1:
df = frames[0]
else:
- df = concat(frames, axis = 1, verify_integrity = True)
+ df = concat(frames, axis=1, verify_integrity=True)
# apply the selection filters & axis orderings
df = self.process_axes(df, columns=columns)
return df
+
class AppendableMultiFrameTable(AppendableFrameTable):
""" a frame with a multi-index """
table_type = 'appendable_multiframe'
- obj_type = DataFrame
- ndim = 2
+ obj_type = DataFrame
+ ndim = 2
@property
def table_type_short(self):
return 'appendable_multi'
- def write(self, obj, data_columns = None, **kwargs):
+ def write(self, obj, data_columns=None, **kwargs):
if data_columns is None:
data_columns = []
for n in obj.index.names:
if n not in data_columns:
- data_columns.insert(0,n)
+ data_columns.insert(0, n)
self.levels = obj.index.names
- return super(AppendableMultiFrameTable, self).write(obj = obj.reset_index(), data_columns = data_columns, **kwargs)
+ return super(AppendableMultiFrameTable, self).write(obj=obj.reset_index(), data_columns=data_columns, **kwargs)
def read(self, *args, **kwargs):
df = super(AppendableMultiFrameTable, self).read(*args, **kwargs)
df.set_index(self.levels, inplace=True)
return df
+
class AppendablePanelTable(AppendableTable):
""" suppor the new appendable table formats """
table_type = 'appendable_panel'
- ndim = 3
- obj_type = Panel
+ ndim = 3
+ obj_type = Panel
def get_object(self, obj):
""" these are written transposed """
@@ -2305,29 +2373,31 @@ def get_object(self, obj):
def is_transposed(self):
return self.data_orientation != tuple(range(self.ndim))
+
class AppendableNDimTable(AppendablePanelTable):
""" suppor the new appendable table formats """
table_type = 'appendable_ndim'
- ndim = 4
- obj_type = Panel4D
+ ndim = 4
+ obj_type = Panel4D
# table maps
_TABLE_MAP = {
- 'appendable_frame' : AppendableFrameTable,
- 'appendable_multiframe' : AppendableMultiFrameTable,
- 'appendable_panel' : AppendablePanelTable,
- 'appendable_ndim' : AppendableNDimTable,
- 'worm' : WORMTable,
- 'legacy_frame' : LegacyFrameTable,
- 'legacy_panel' : LegacyPanelTable,
- 'default' : AppendablePanelTable,
+ 'appendable_frame': AppendableFrameTable,
+ 'appendable_multiframe': AppendableMultiFrameTable,
+ 'appendable_panel': AppendablePanelTable,
+ 'appendable_ndim': AppendableNDimTable,
+ 'worm': WORMTable,
+ 'legacy_frame': LegacyFrameTable,
+ 'legacy_panel': LegacyPanelTable,
+ 'default': AppendablePanelTable,
}
-def create_table(parent, group, typ = None, **kwargs):
+
+def create_table(parent, group, typ=None, **kwargs):
""" return a suitable Table class to operate """
- pt = getattr(group._v_attrs,'pandas_type',None)
- tt = getattr(group._v_attrs,'table_type',None) or typ
+ pt = getattr(group._v_attrs, 'pandas_type', None)
+ tt = getattr(group._v_attrs, 'table_type', None) or typ
# a new node
if pt is None:
@@ -2351,7 +2421,8 @@ def create_table(parent, group, typ = None, **kwargs):
def _itemsize_string_array(arr):
""" return the maximum size of elements in a strnig array """
- return max([ str_len(arr[v].ravel()).max() for v in range(arr.shape[0]) ])
+ return max([str_len(arr[v].ravel()).max() for v in range(arr.shape[0])])
+
def _convert_index(index):
if isinstance(index, DatetimeIndex):
@@ -2373,12 +2444,12 @@ def _convert_index(index):
return IndexCol(converted, 'datetime64', _tables().Int64Col())
elif inferred_type == 'datetime':
converted = np.array([(time.mktime(v.timetuple()) +
- v.microsecond / 1E6) for v in values],
- dtype=np.float64)
+ v.microsecond / 1E6) for v in values],
+ dtype=np.float64)
return IndexCol(converted, 'datetime', _tables().Time64Col())
elif inferred_type == 'date':
converted = np.array([time.mktime(v.timetuple()) for v in values],
- dtype=np.int32)
+ dtype=np.int32)
return IndexCol(converted, 'date', _tables().Time32Col())
elif inferred_type == 'string':
# atom = _tables().ObjectAtom()
@@ -2386,7 +2457,7 @@ def _convert_index(index):
converted = np.array(list(values), dtype=np.str_)
itemsize = converted.dtype.itemsize
- return IndexCol(converted, 'string', _tables().StringCol(itemsize), itemsize = itemsize)
+ return IndexCol(converted, 'string', _tables().StringCol(itemsize), itemsize=itemsize)
elif inferred_type == 'unicode':
atom = _tables().ObjectAtom()
return IndexCol(np.asarray(values, dtype='O'), 'object', atom)
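The `'datetime'` branch of `_convert_index` above stores Python datetimes as float epoch seconds via `time.mktime` plus a microsecond fraction. A sketch of that conversion (local-time semantics of `mktime` are assumed; the helper name is illustrative):

```python
import time
from datetime import datetime

def to_epoch_seconds(values):
    """Convert datetime objects to float epoch seconds, as the
    'datetime' branch of _convert_index does (illustrative sketch)."""
    return [time.mktime(v.timetuple()) + v.microsecond / 1e6
            for v in values]
```

Because `mktime` interprets the struct_time in local time, `datetime.fromtimestamp` on the stored value recovers the original wall-clock datetime, microseconds included.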
@@ -2533,19 +2604,19 @@ class Term(object):
"""
- _ops = ['<=','<','>=','>','!=','==','=']
- _search = re.compile("^\s*(?P<field>\w+)\s*(?P<op>%s)\s*(?P<value>.+)\s*$" % '|'.join(_ops))
+ _ops = ['<=', '<', '>=', '>', '!=', '==', '=']
+ _search = re.compile("^\s*(?P<field>\w+)\s*(?P<op>%s)\s*(?P<value>.+)\s*$" % '|'.join(_ops))
- def __init__(self, field, op = None, value = None, queryables = None):
+ def __init__(self, field, op=None, value=None, queryables=None):
self.field = None
- self.op = None
+ self.op = None
self.value = None
- self.q = queryables or dict()
- self.filter = None
- self.condition = None
+ self.q = queryables or dict()
+ self.filter = None
+ self.condition = None
# unpack lists/tuples in field
- while(isinstance(field,(tuple,list))):
+ while(isinstance(field, (tuple, list))):
f = field
field = f[0]
if len(f) > 1:
@@ -2556,23 +2627,23 @@ def __init__(self, field, op = None, value = None, queryables = None):
# backwards compatible
if isinstance(field, dict):
self.field = field.get('field')
- self.op = field.get('op') or '='
+ self.op = field.get('op') or '='
self.value = field.get('value')
# passed a term
- elif isinstance(field,Term):
+ elif isinstance(field, Term):
self.field = field.field
- self.op = field.op
+ self.op = field.op
self.value = field.value
# a string expression (or just the field)
- elif isinstance(field,basestring):
+ elif isinstance(field, basestring):
# is a term is passed
s = self._search.match(field)
if s is not None:
self.field = s.group('field')
- self.op = s.group('op')
+ self.op = s.group('op')
self.value = s.group('value')
else:
@@ -2580,14 +2651,15 @@ def __init__(self, field, op = None, value = None, queryables = None):
# is an op passed?
if isinstance(op, basestring) and op in self._ops:
- self.op = op
+ self.op = op
self.value = value
else:
- self.op = '='
+ self.op = '='
self.value = op
else:
- raise Exception("Term does not understand the supplied field [%s]" % field)
+ raise Exception(
+ "Term does not understand the supplied field [%s]" % field)
# we have valid fields
if self.field is None or self.op is None or self.value is None:
@@ -2598,18 +2670,18 @@ def __init__(self, field, op = None, value = None, queryables = None):
self.op = '='
# we have valid conditions
- if self.op in ['>','>=','<','<=']:
- if hasattr(self.value,'__iter__') and len(self.value) > 1:
+ if self.op in ['>', '>=', '<', '<=']:
+ if hasattr(self.value, '__iter__') and len(self.value) > 1:
raise Exception("an inequality condition cannot have multiple values [%s]" % str(self))
- if not hasattr(self.value,'__iter__'):
- self.value = [ self.value ]
+ if not hasattr(self.value, '__iter__'):
+ self.value = [self.value]
if len(self.q):
self.eval()
def __str__(self):
- return "field->%s,op->%s,value->%s" % (self.field,self.op,self.value)
+ return "field->%s,op->%s,value->%s" % (self.field, self.op, self.value)
__repr__ = __str__
@@ -2636,32 +2708,34 @@ def eval(self):
# convert values if we are in the table
if self.is_in_table:
- values = [ self.convert_value(v) for v in self.value ]
+ values = [self.convert_value(v) for v in self.value]
else:
- values = [ [v, v] for v in self.value ]
+ values = [[v, v] for v in self.value]
# equality conditions
- if self.op in ['=','!=']:
+ if self.op in ['=', '!=']:
if self.is_in_table:
# too many values to create the expression?
if len(values) <= 61:
- self.condition = "(%s)" % ' | '.join([ "(%s == %s)" % (self.field,v[0]) for v in values])
+ self.condition = "(%s)" % ' | '.join(
+ ["(%s == %s)" % (self.field, v[0]) for v in values])
# use a filter after reading
else:
- self.filter = (self.field,Index([ v[1] for v in values ]))
+ self.filter = (self.field, Index([v[1] for v in values]))
else:
- self.filter = (self.field,Index([ v[1] for v in values ]))
+ self.filter = (self.field, Index([v[1] for v in values]))
else:
if self.is_in_table:
- self.condition = '(%s %s %s)' % (self.field, self.op, values[0][0])
+ self.condition = '(%s %s %s)' % (
+ self.field, self.op, values[0][0])
else:
@@ -2670,9 +2744,9 @@ def eval(self):
def convert_value(self, v):
""" convert the expression that is in the term to something that is accepted by pytables """
- if self.kind == 'datetime64' :
+ if self.kind == 'datetime64':
return [lib.Timestamp(v).value, None]
- elif isinstance(v, datetime) or hasattr(v,'timetuple') or self.kind == 'date':
+ elif isinstance(v, datetime) or hasattr(v, 'timetuple') or self.kind == 'date':
return [time.mktime(v.timetuple()), None]
elif self.kind == 'integer':
v = int(float(v))
@@ -2686,6 +2760,7 @@ def convert_value(self, v):
# string quoting
return ["'" + v + "'", v]
+
class Coordinates(object):
""" holds a returned coordinates list, useful to select the same rows from different tables
@@ -2696,8 +2771,9 @@ class Coordinates(object):
def __init__(self, values, group, where, **kwargs):
self.values = values
- self.group = group
- self.where = where
+ self.group = group
+ self.where = where
+
class Selection(object):
"""
@@ -2711,23 +2787,23 @@ class Selection(object):
"""
def __init__(self, table, where=None, start=None, stop=None, **kwargs):
- self.table = table
- self.where = where
- self.start = start
- self.stop = stop
- self.condition = None
- self.filter = None
- self.terms = None
+ self.table = table
+ self.where = where
+ self.start = start
+ self.stop = stop
+ self.condition = None
+ self.filter = None
+ self.terms = None
self.coordinates = None
if isinstance(where, Coordinates):
self.coordinates = where.values
else:
- self.terms = self.generate(where)
+ self.terms = self.generate(where)
# create the numexpr & the filter
if self.terms:
- conds = [ t.condition for t in self.terms if t.condition is not None ]
+ conds = [t.condition for t in self.terms if t.condition is not None]
if len(conds):
self.condition = "(%s)" % ' & '.join(conds)
self.filter = []
@@ -2737,20 +2813,22 @@ def __init__(self, table, where=None, start=None, stop=None, **kwargs):
def generate(self, where):
""" where can be a : dict,list,tuple,string """
- if where is None: return None
+ if where is None:
+ return None
- if not isinstance(where, (list,tuple)):
- where = [ where ]
+ if not isinstance(where, (list, tuple)):
+ where = [where]
else:
- # make this a list of we think that we only have a sigle term & no operands inside any terms
- if not any([ isinstance(w, (list,tuple,Term)) for w in where ]):
+ # make this a list if we think that we only have a single term & no
+ # operands inside any terms
+ if not any([isinstance(w, (list, tuple, Term)) for w in where]):
- if not any([ isinstance(w,basestring) and Term._search.match(w) for w in where ]):
- where = [ where ]
+ if not any([isinstance(w, basestring) and Term._search.match(w) for w in where]):
+ where = [where]
queryables = self.table.queryables()
- return [ Term(c, queryables = queryables) for c in where ]
+ return [Term(c, queryables=queryables) for c in where]
def select(self):
"""
@@ -2760,7 +2838,7 @@ def select(self):
return self.table.table.readWhere(self.condition, start=self.start, stop=self.stop)
elif self.coordinates is not None:
return self.table.table.readCoordinates(self.coordinates)
- return self.table.table.read(start=self.start,stop=self.stop)
+ return self.table.table.read(start=self.start, stop=self.stop)
def select_coords(self):
"""
@@ -2769,7 +2847,7 @@ def select_coords(self):
if self.condition is None:
return np.arange(self.table.nrows)
- return self.table.table.getWhereList(self.condition, start=self.start, stop=self.stop, sort = True)
+ return self.table.table.getWhereList(self.condition, start=self.start, stop=self.stop, sort=True)
def _get_index_factory(klass):
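The `Term` class reformatted above parses query expressions such as `'index>20121114'` with a regex built from the supported operators. A self-contained sketch of that parse step (the function name is illustrative, not the pandas API):

```python
import re

# operator list ordered so two-char ops ('<=', '==', ...) match before
# their one-char prefixes, as in Term._ops above
_OPS = ['<=', '<', '>=', '>', '!=', '==', '=']
_SEARCH = re.compile(
    r"^\s*(?P<field>\w+)\s*(?P<op>%s)\s*(?P<value>.+)\s*$" % '|'.join(_OPS))

def parse_term(expr):
    """Split a query string into (field, op, value); mirrors the
    Term._search regex above (sketch only)."""
    m = _SEARCH.match(expr)
    if m is None:
        raise ValueError("could not parse term: %r" % expr)
    return m.group('field'), m.group('op'), m.group('value')
```

Note that the alternation order in `_OPS` is what keeps `'A == 5'` from matching as `op='='` with a stray `'='` in the value.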
diff --git a/pandas/io/tests/__init__.py b/pandas/io/tests/__init__.py
index 8b137891791fe..e69de29bb2d1d 100644
--- a/pandas/io/tests/__init__.py
+++ b/pandas/io/tests/__init__.py
@@ -1 +0,0 @@
-
diff --git a/pandas/io/tests/test_cparser.py b/pandas/io/tests/test_cparser.py
index 9b5abf1c435a8..5a7e646eca0eb 100644
--- a/pandas/io/tests/test_cparser.py
+++ b/pandas/io/tests/test_cparser.py
@@ -35,6 +35,7 @@ def curpath():
pth, _ = os.path.split(os.path.abspath(__file__))
return pth
+
class TestCParser(unittest.TestCase):
def setUp(self):
@@ -222,7 +223,7 @@ def test_numpy_string_dtype(self):
def _make_reader(**kwds):
return TextReader(StringIO(data), delimiter=',', header=None,
- **kwds)
+ **kwds)
reader = _make_reader(dtype='S5,i4')
result = reader.read()
@@ -254,6 +255,7 @@ def test_pass_dtype(self):
2,b
3,c
4,d"""
+
def _make_reader(**kwds):
return TextReader(StringIO(data), delimiter=',', **kwds)
@@ -280,6 +282,7 @@ def test_usecols(self):
4,5,6
7,8,9
10,11,12"""
+
def _make_reader(**kwds):
return TextReader(StringIO(data), delimiter=',', **kwds)
@@ -331,6 +334,5 @@ def assert_array_dicts_equal(left, right):
assert(np.array_equal(v, right[k]))
if __name__ == '__main__':
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
-
diff --git a/pandas/io/tests/test_date_converters.py b/pandas/io/tests/test_date_converters.py
index 3c2a4611285ee..9396581f74326 100644
--- a/pandas/io/tests/test_date_converters.py
+++ b/pandas/io/tests/test_date_converters.py
@@ -23,6 +23,7 @@
from pandas.lib import Timestamp
import pandas.io.date_converters as conv
+
class TestConverters(unittest.TestCase):
def setUp(self):
@@ -52,16 +53,16 @@ def test_parse_date_time(self):
self.assert_('date_time' in df)
self.assert_(df.date_time.ix[0] == datetime(2001, 1, 5, 10, 0, 0))
- data = ("KORD,19990127, 19:00:00, 18:56:00, 0.8100\n"
- "KORD,19990127, 20:00:00, 19:56:00, 0.0100\n"
- "KORD,19990127, 21:00:00, 20:56:00, -0.5900\n"
- "KORD,19990127, 21:00:00, 21:18:00, -0.9900\n"
- "KORD,19990127, 22:00:00, 21:56:00, -0.5900\n"
- "KORD,19990127, 23:00:00, 22:56:00, -0.5900")
+ data = ("KORD,19990127, 19:00:00, 18:56:00, 0.8100\n"
+ "KORD,19990127, 20:00:00, 19:56:00, 0.0100\n"
+ "KORD,19990127, 21:00:00, 20:56:00, -0.5900\n"
+ "KORD,19990127, 21:00:00, 21:18:00, -0.9900\n"
+ "KORD,19990127, 22:00:00, 21:56:00, -0.5900\n"
+ "KORD,19990127, 23:00:00, 22:56:00, -0.5900")
date_spec = {'nominal': [1, 2], 'actual': [1, 3]}
df = read_csv(StringIO(data), header=None, parse_dates=date_spec,
- date_parser=conv.parse_date_time)
+ date_parser=conv.parse_date_time)
def test_parse_date_fields(self):
result = conv.parse_date_fields(self.years, self.months, self.days)
@@ -122,5 +123,5 @@ def test_generic(self):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/io/tests/test_excel.py b/pandas/io/tests/test_excel.py
index 7169208531387..1b29a4bdd9bf2 100644
--- a/pandas/io/tests/test_excel.py
+++ b/pandas/io/tests/test_excel.py
@@ -35,24 +35,28 @@
from pandas.io.parsers import (ExcelFile, ExcelWriter, read_csv)
+
def _skip_if_no_xlrd():
try:
import xlrd
except ImportError:
raise nose.SkipTest('xlrd not installed, skipping')
+
def _skip_if_no_xlwt():
try:
import xlwt
except ImportError:
raise nose.SkipTest('xlwt not installed, skipping')
+
def _skip_if_no_openpyxl():
try:
import openpyxl
except ImportError:
raise nose.SkipTest('openpyxl not installed, skipping')
+
def _skip_if_no_excelsuite():
_skip_if_no_xlrd()
_skip_if_no_xlwt()
@@ -94,7 +98,7 @@ def test_parse_cols_int(self):
pth = os.path.join(self.dirpath, 'test.xls%s' % s)
xls = ExcelFile(pth)
df = xls.parse('Sheet1', index_col=0, parse_dates=True,
- parse_cols=3)
+ parse_cols=3)
df2 = self.read_csv(self.csv1, index_col=0, parse_dates=True)
df2 = df2.reindex(columns=['A', 'B', 'C'])
df3 = xls.parse('Sheet2', skiprows=[1], index_col=0,
@@ -112,12 +116,12 @@ def test_parse_cols_list(self):
pth = os.path.join(self.dirpath, 'test.xls%s' % s)
xls = ExcelFile(pth)
df = xls.parse('Sheet1', index_col=0, parse_dates=True,
- parse_cols=[0, 2, 3])
+ parse_cols=[0, 2, 3])
df2 = self.read_csv(self.csv1, index_col=0, parse_dates=True)
df2 = df2.reindex(columns=['B', 'C'])
df3 = xls.parse('Sheet2', skiprows=[1], index_col=0,
- parse_dates=True,
- parse_cols=[0, 2, 3])
+ parse_dates=True,
+ parse_cols=[0, 2, 3])
tm.assert_frame_equal(df, df2)
tm.assert_frame_equal(df3, df2)
@@ -133,7 +137,7 @@ def test_parse_cols_str(self):
xls = ExcelFile(pth)
df = xls.parse('Sheet1', index_col=0, parse_dates=True,
- parse_cols='A:D')
+ parse_cols='A:D')
df2 = read_csv(self.csv1, index_col=0, parse_dates=True)
df2 = df2.reindex(columns=['A', 'B', 'C'])
df3 = xls.parse('Sheet2', skiprows=[1], index_col=0,
@@ -143,23 +147,23 @@ def test_parse_cols_str(self):
del df, df2, df3
df = xls.parse('Sheet1', index_col=0, parse_dates=True,
- parse_cols='A,C,D')
+ parse_cols='A,C,D')
df2 = read_csv(self.csv1, index_col=0, parse_dates=True)
df2 = df2.reindex(columns=['B', 'C'])
df3 = xls.parse('Sheet2', skiprows=[1], index_col=0,
- parse_dates=True,
- parse_cols='A,C,D')
+ parse_dates=True,
+ parse_cols='A,C,D')
tm.assert_frame_equal(df, df2)
tm.assert_frame_equal(df3, df2)
del df, df2, df3
df = xls.parse('Sheet1', index_col=0, parse_dates=True,
- parse_cols='A,C:D')
+ parse_cols='A,C:D')
df2 = read_csv(self.csv1, index_col=0, parse_dates=True)
df2 = df2.reindex(columns=['B', 'C'])
df3 = xls.parse('Sheet2', skiprows=[1], index_col=0,
- parse_dates=True,
- parse_cols='A,C:D')
+ parse_dates=True,
+ parse_cols='A,C:D')
tm.assert_frame_equal(df, df2)
tm.assert_frame_equal(df3, df2)
@@ -168,7 +172,7 @@ def test_excel_stop_iterator(self):
excel_data = ExcelFile(os.path.join(self.dirpath, 'test2.xls'))
parsed = excel_data.parse('Sheet1')
- expected = DataFrame([['aaaa','bbbbb']], columns=['Test', 'Test1'])
+ expected = DataFrame([['aaaa', 'bbbbb']], columns=['Test', 'Test1'])
tm.assert_frame_equal(parsed, expected)
def test_excel_cell_error_na(self):
@@ -235,7 +239,6 @@ def read_csv(self, *args, **kwds):
kwds['engine'] = 'python'
return read_csv(*args, **kwds)
-
def test_excel_roundtrip_xls(self):
_skip_if_no_excelsuite()
self._check_extension('xls')
@@ -249,24 +252,24 @@ def _check_extension(self, ext):
self.frame['A'][:5] = nan
- self.frame.to_excel(path,'test1')
- self.frame.to_excel(path,'test1', cols=['A', 'B'])
- self.frame.to_excel(path,'test1', header=False)
- self.frame.to_excel(path,'test1', index=False)
+ self.frame.to_excel(path, 'test1')
+ self.frame.to_excel(path, 'test1', cols=['A', 'B'])
+ self.frame.to_excel(path, 'test1', header=False)
+ self.frame.to_excel(path, 'test1', index=False)
# test roundtrip
- self.frame.to_excel(path,'test1')
+ self.frame.to_excel(path, 'test1')
reader = ExcelFile(path)
recons = reader.parse('test1', index_col=0)
tm.assert_frame_equal(self.frame, recons)
- self.frame.to_excel(path,'test1', index=False)
+ self.frame.to_excel(path, 'test1', index=False)
reader = ExcelFile(path)
recons = reader.parse('test1', index_col=None)
recons.index = self.frame.index
tm.assert_frame_equal(self.frame, recons)
- self.frame.to_excel(path,'test1',na_rep='NA')
+ self.frame.to_excel(path, 'test1', na_rep='NA')
reader = ExcelFile(path)
recons = reader.parse('test1', index_col=0, na_values=['NA'])
tm.assert_frame_equal(self.frame, recons)
@@ -287,7 +290,7 @@ def test_excel_roundtrip_xlsx_mixed(self):
def _check_extension_mixed(self, ext):
path = '__tmp_to_excel_from_excel_mixed__.' + ext
- self.mixed_frame.to_excel(path,'test1')
+ self.mixed_frame.to_excel(path, 'test1')
reader = ExcelFile(path)
recons = reader.parse('test1', index_col=0)
tm.assert_frame_equal(self.mixed_frame, recons)
@@ -329,14 +332,14 @@ def _check_extension_int64(self, ext):
self.frame['A'][:5] = nan
- self.frame.to_excel(path,'test1')
- self.frame.to_excel(path,'test1', cols=['A', 'B'])
- self.frame.to_excel(path,'test1', header=False)
- self.frame.to_excel(path,'test1', index=False)
+ self.frame.to_excel(path, 'test1')
+ self.frame.to_excel(path, 'test1', cols=['A', 'B'])
+ self.frame.to_excel(path, 'test1', header=False)
+ self.frame.to_excel(path, 'test1', index=False)
- #Test np.int64, values read come back as float
- frame = DataFrame(np.random.randint(-10,10,size=(10,2)))
- frame.to_excel(path,'test1')
+ # Test np.int64, values read come back as float
+ frame = DataFrame(np.random.randint(-10, 10, size=(10, 2)))
+ frame.to_excel(path, 'test1')
reader = ExcelFile(path)
recons = reader.parse('test1').astype(np.int64)
tm.assert_frame_equal(frame, recons)
@@ -351,20 +354,19 @@ def test_excel_roundtrip_xlsx_bool(self):
_skip_if_no_excelsuite()
self._check_extension_bool('xlsx')
-
def _check_extension_bool(self, ext):
path = '__tmp_to_excel_from_excel_bool__.' + ext
self.frame['A'][:5] = nan
- self.frame.to_excel(path,'test1')
- self.frame.to_excel(path,'test1', cols=['A', 'B'])
- self.frame.to_excel(path,'test1', header=False)
- self.frame.to_excel(path,'test1', index=False)
+ self.frame.to_excel(path, 'test1')
+ self.frame.to_excel(path, 'test1', cols=['A', 'B'])
+ self.frame.to_excel(path, 'test1', header=False)
+ self.frame.to_excel(path, 'test1', index=False)
- #Test reading/writing np.bool8, roundtrip only works for xlsx
- frame = (DataFrame(np.random.randn(10,2)) >= 0)
- frame.to_excel(path,'test1')
+ # Test reading/writing np.bool8, roundtrip only works for xlsx
+ frame = (DataFrame(np.random.randn(10, 2)) >= 0)
+ frame.to_excel(path, 'test1')
reader = ExcelFile(path)
recons = reader.parse('test1').astype(np.bool8)
tm.assert_frame_equal(frame, recons)
@@ -379,26 +381,25 @@ def test_excel_roundtrip_xlsx_sheets(self):
_skip_if_no_excelsuite()
self._check_extension_sheets('xlsx')
-
def _check_extension_sheets(self, ext):
path = '__tmp_to_excel_from_excel_sheets__.' + ext
self.frame['A'][:5] = nan
- self.frame.to_excel(path,'test1')
- self.frame.to_excel(path,'test1', cols=['A', 'B'])
- self.frame.to_excel(path,'test1', header=False)
- self.frame.to_excel(path,'test1', index=False)
+ self.frame.to_excel(path, 'test1')
+ self.frame.to_excel(path, 'test1', cols=['A', 'B'])
+ self.frame.to_excel(path, 'test1', header=False)
+ self.frame.to_excel(path, 'test1', index=False)
# Test writing to separate sheets
writer = ExcelWriter(path)
- self.frame.to_excel(writer,'test1')
- self.tsframe.to_excel(writer,'test2')
+ self.frame.to_excel(writer, 'test1')
+ self.tsframe.to_excel(writer, 'test2')
writer.save()
reader = ExcelFile(path)
- recons = reader.parse('test1',index_col=0)
+ recons = reader.parse('test1', index_col=0)
tm.assert_frame_equal(self.frame, recons)
- recons = reader.parse('test2',index_col=0)
+ recons = reader.parse('test2', index_col=0)
tm.assert_frame_equal(self.tsframe, recons)
np.testing.assert_equal(2, len(reader.sheet_names))
np.testing.assert_equal('test1', reader.sheet_names[0])
@@ -419,10 +420,10 @@ def _check_extension_colaliases(self, ext):
self.frame['A'][:5] = nan
- self.frame.to_excel(path,'test1')
- self.frame.to_excel(path,'test1', cols=['A', 'B'])
- self.frame.to_excel(path,'test1', header=False)
- self.frame.to_excel(path,'test1', index=False)
+ self.frame.to_excel(path, 'test1')
+ self.frame.to_excel(path, 'test1', cols=['A', 'B'])
+ self.frame.to_excel(path, 'test1', header=False)
+ self.frame.to_excel(path, 'test1', index=False)
# column aliases
col_aliases = Index(['AA', 'X', 'Y', 'Z'])
@@ -448,27 +449,28 @@ def _check_extension_indexlabels(self, ext):
try:
self.frame['A'][:5] = nan
- self.frame.to_excel(path,'test1')
- self.frame.to_excel(path,'test1', cols=['A', 'B'])
- self.frame.to_excel(path,'test1', header=False)
- self.frame.to_excel(path,'test1', index=False)
+ self.frame.to_excel(path, 'test1')
+ self.frame.to_excel(path, 'test1', cols=['A', 'B'])
+ self.frame.to_excel(path, 'test1', header=False)
+ self.frame.to_excel(path, 'test1', index=False)
# test index_label
- frame = (DataFrame(np.random.randn(10,2)) >= 0)
+ frame = (DataFrame(np.random.randn(10, 2)) >= 0)
frame.to_excel(path, 'test1', index_label=['test'])
reader = ExcelFile(path)
recons = reader.parse('test1', index_col=0).astype(np.int64)
frame.index.names = ['test']
self.assertEqual(frame.index.names, recons.index.names)
- frame = (DataFrame(np.random.randn(10,2)) >= 0)
- frame.to_excel(path, 'test1', index_label=['test', 'dummy', 'dummy2'])
+ frame = (DataFrame(np.random.randn(10, 2)) >= 0)
+ frame.to_excel(
+ path, 'test1', index_label=['test', 'dummy', 'dummy2'])
reader = ExcelFile(path)
recons = reader.parse('test1', index_col=0).astype(np.int64)
frame.index.names = ['test']
self.assertEqual(frame.index.names, recons.index.names)
- frame = (DataFrame(np.random.randn(10,2)) >= 0)
+ frame = (DataFrame(np.random.randn(10, 2)) >= 0)
frame.to_excel(path, 'test1', index_label='test')
reader = ExcelFile(path)
recons = reader.parse('test1', index_col=0).astype(np.int64)
@@ -477,12 +479,13 @@ def _check_extension_indexlabels(self, ext):
finally:
os.remove(path)
- #test index_labels in same row as column names
+ # test index_labels in same row as column names
path = '%s.xls' % tm.rands(10)
try:
self.frame.to_excel(path, 'test1',
cols=['A', 'B', 'C', 'D'], index=False)
- #take 'A' and 'B' as indexes (they are in same row as cols 'C', 'D')
+ # take 'A' and 'B' as indexes (they are in same row as cols 'C',
+ # 'D')
df = self.frame.copy()
df = df.set_index(['A', 'B'])
@@ -530,10 +533,10 @@ def test_excel_roundtrip_datetime(self):
def test_excel_roundtrip_bool(self):
_skip_if_no_openpyxl()
- #Test roundtrip np.bool8, does not seem to work for xls
+ # Test roundtrip np.bool8, does not seem to work for xls
path = '__tmp_excel_roundtrip_bool__.xlsx'
- frame = (DataFrame(np.random.randn(10,2)) >= 0)
- frame.to_excel(path,'test1')
+ frame = (DataFrame(np.random.randn(10, 2)) >= 0)
+ frame.to_excel(path, 'test1')
reader = ExcelFile(path)
recons = reader.parse('test1')
tm.assert_frame_equal(frame, recons)
@@ -563,11 +566,11 @@ def test_to_excel_multiindex_xlsx(self):
self._check_excel_multiindex('xlsx')
def _check_excel_multiindex(self, ext):
- path = '__tmp_to_excel_multiindex__' + ext + '__.'+ext
+ path = '__tmp_to_excel_multiindex__' + ext + '__.' + ext
frame = self.frame
old_index = frame.index
- arrays = np.arange(len(old_index)*2).reshape(2,-1)
+ arrays = np.arange(len(old_index) * 2).reshape(2, -1)
new_index = MultiIndex.from_arrays(arrays,
names=['first', 'second'])
frame.index = new_index
@@ -577,10 +580,10 @@ def _check_excel_multiindex(self, ext):
# round trip
frame.to_excel(path, 'test1')
reader = ExcelFile(path)
- df = reader.parse('test1', index_col=[0,1], parse_dates=False)
+ df = reader.parse('test1', index_col=[0, 1], parse_dates=False)
tm.assert_frame_equal(frame, df)
self.assertEqual(frame.index.names, df.index.names)
- self.frame.index = old_index # needed if setUP becomes a classmethod
+ self.frame.index = old_index # needed if setUp becomes a classmethod
os.remove(path)
@@ -602,9 +605,9 @@ def _check_excel_multiindex_dates(self, ext):
new_index = [old_index, np.arange(len(old_index))]
tsframe.index = MultiIndex.from_arrays(new_index)
- tsframe.to_excel(path, 'test1', index_label = ['time','foo'])
+ tsframe.to_excel(path, 'test1', index_label=['time', 'foo'])
reader = ExcelFile(path)
- recons = reader.parse('test1', index_col=[0,1])
+ recons = reader.parse('test1', index_col=[0, 1])
tm.assert_frame_equal(tsframe, recons)
# infer index
@@ -613,7 +616,7 @@ def _check_excel_multiindex_dates(self, ext):
recons = reader.parse('test1')
tm.assert_frame_equal(tsframe, recons)
- self.tsframe.index = old_index # needed if setUP becomes classmethod
+ self.tsframe.index = old_index # needed if setUp becomes classmethod
os.remove(path)
@@ -670,11 +673,11 @@ def test_to_excel_styleconverter(self):
raise nose.SkipTest
hstyle = {"font": {"bold": True},
- "borders": {"top": "thin",
- "right": "thin",
- "bottom": "thin",
- "left": "thin"},
- "alignment": {"horizontal": "center"}}
+ "borders": {"top": "thin",
+ "right": "thin",
+ "bottom": "thin",
+ "left": "thin"},
+ "alignment": {"horizontal": "center"}}
xls_style = CellStyleConverter.to_xls(hstyle)
self.assertTrue(xls_style.font.bold)
self.assertEquals(xlwt.Borders.THIN, xls_style.borders.top)
@@ -726,7 +729,6 @@ def test_to_excel_styleconverter(self):
# filename = '__tmp_to_excel_header_styling_xls__.xls'
# pdf.to_excel(filename, 'test1')
-
# wbk = xlrd.open_workbook(filename,
# formatting_info=True)
# self.assertEquals(["test1"], wbk.sheet_names())
@@ -744,12 +746,8 @@ def test_to_excel_styleconverter(self):
# self.assertEquals(1, cell_xf.border.bottom_line_style)
# self.assertEquals(1, cell_xf.border.left_line_style)
# self.assertEquals(2, cell_xf.alignment.hor_align)
-
# os.remove(filename)
-
-
# def test_to_excel_header_styling_xlsx(self):
-
# import StringIO
# s = StringIO.StringIO(
# """Date,ticker,type,value
@@ -768,24 +766,19 @@ def test_to_excel_styleconverter(self):
# df = read_csv(s, parse_dates=["Date"])
# pdf = df.pivot_table(values="value", rows=["ticker"],
# cols=["Date", "type"])
-
# try:
# import openpyxl
# from openpyxl.cell import get_column_letter
# except ImportError:
# raise nose.SkipTest
-
# if openpyxl.__version__ < '1.6.1':
# raise nose.SkipTest
-
# # test xlsx_styling
# filename = '__tmp_to_excel_header_styling_xlsx__.xlsx'
# pdf.to_excel(filename, 'test1')
-
# wbk = openpyxl.load_workbook(filename)
# self.assertEquals(["test1"], wbk.get_sheet_names())
# ws = wbk.get_sheet_by_name('test1')
-
# xlsaddrs = ["%s2" % chr(i) for i in range(ord('A'), ord('H'))]
# xlsaddrs += ["A%s" % i for i in range(1, 6)]
# xlsaddrs += ["B1", "D1", "F1"]
@@ -802,13 +795,10 @@ def test_to_excel_styleconverter(self):
# cell.style.borders.left.border_style)
# self.assertEquals(openpyxl.style.Alignment.HORIZONTAL_CENTER,
# cell.style.alignment.horizontal)
-
# mergedcells_addrs = ["C1", "E1", "G1"]
# for maddr in mergedcells_addrs:
# self.assertTrue(ws.cell(maddr).merged)
-
# os.remove(filename)
-
def test_excel_010_hemstring(self):
try:
import xlwt
@@ -819,51 +809,53 @@ def test_excel_010_hemstring(self):
from pandas.util.testing import makeCustomDataframe as mkdf
# ensure limited functionality in 0.10
# override of #2370 until sorted out in 0.11
- def roundtrip(df,header=True,parser_hdr=0):
- path = '__tmp__test_xl_010_%s__.xls' % np.random.randint(1,10000)
- df.to_excel(path,header=header)
- xf = pd.ExcelFile(path)
- try:
- res = xf.parse(xf.sheet_names[0],header=parser_hdr)
- return res
- finally:
- os.remove(path)
+
+ def roundtrip(df, header=True, parser_hdr=0):
+ path = '__tmp__test_xl_010_%s__.xls' % np.random.randint(1, 10000)
+ df.to_excel(path, header=header)
+ xf = pd.ExcelFile(path)
+ try:
+ res = xf.parse(xf.sheet_names[0], header=parser_hdr)
+ return res
+ finally:
+ os.remove(path)
nrows = 5
ncols = 3
- for i in range(1,4): # row multindex upto nlevel=3
- for j in range(1,4): # col ""
- df = mkdf(nrows,ncols,r_idx_nlevels=i,c_idx_nlevels=j)
+ for i in range(1, 4): # row multiindex up to nlevel=3
+ for j in range(1, 4): # col ""
+ df = mkdf(nrows, ncols, r_idx_nlevels=i, c_idx_nlevels=j)
res = roundtrip(df)
# shape
- self.assertEqual(res.shape,(nrows,ncols+i))
+ self.assertEqual(res.shape, (nrows, ncols + i))
# no nans
for r in range(len(res.index)):
for c in range(len(res.columns)):
- self.assertTrue(res.ix[r,c] is not np.nan)
+ self.assertTrue(res.ix[r, c] is not np.nan)
- for i in range(1,4): # row multindex upto nlevel=3
- for j in range(1,4): # col ""
- df = mkdf(nrows,ncols,r_idx_nlevels=i,c_idx_nlevels=j)
- res = roundtrip(df,False)
+ for i in range(1, 4): # row multiindex up to nlevel=3
+ for j in range(1, 4): # col ""
+ df = mkdf(nrows, ncols, r_idx_nlevels=i, c_idx_nlevels=j)
+ res = roundtrip(df, False)
# shape
- self.assertEqual(res.shape,(nrows-1,ncols+i)) # first row taken as columns
+ self.assertEqual(res.shape, (
+ nrows - 1, ncols + i)) # first row taken as columns
# no nans
for r in range(len(res.index)):
for c in range(len(res.columns)):
- self.assertTrue(res.ix[r,c] is not np.nan)
+ self.assertTrue(res.ix[r, c] is not np.nan)
res = roundtrip(DataFrame([0]))
- self.assertEqual(res.shape,(1,1))
- self.assertTrue(res.ix[0,0] is not np.nan)
+ self.assertEqual(res.shape, (1, 1))
+ self.assertTrue(res.ix[0, 0] is not np.nan)
- res = roundtrip(DataFrame([0]),False,None)
- self.assertEqual(res.shape,(1,2))
- self.assertTrue(res.ix[0,0] is not np.nan)
+ res = roundtrip(DataFrame([0]), False, None)
+ self.assertEqual(res.shape, (1, 2))
+ self.assertTrue(res.ix[0, 0] is not np.nan)
if __name__ == '__main__':
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/io/tests/test_ga.py b/pandas/io/tests/test_ga.py
index 1f7f1de67e7a1..651430b655ab4 100644
--- a/pandas/io/tests/test_ga.py
+++ b/pandas/io/tests/test_ga.py
@@ -8,6 +8,7 @@
from pandas.util.testing import network, assert_frame_equal
from numpy.testing.decorators import slow
+
class TestGoogle(unittest.TestCase):
_multiprocess_can_split_ = True
@@ -32,10 +33,10 @@ def test_getdata(self):
df = reader.get_data(
metrics=['avgTimeOnSite', 'visitors', 'newVisits',
'pageviewsPerVisit'],
- start_date = start_date,
- end_date = end_date,
+ start_date=start_date,
+ end_date=end_date,
dimensions=['date', 'hour'],
- parse_dates={'ts' : ['date', 'hour']})
+ parse_dates={'ts': ['date', 'hour']})
assert isinstance(df, DataFrame)
assert isinstance(df.index, pd.DatetimeIndex)
@@ -54,7 +55,7 @@ def test_getdata(self):
start_date=start_date,
end_date=end_date,
dimensions=['date', 'hour'],
- parse_dates={'ts' : ['date', 'hour']})
+ parse_dates={'ts': ['date', 'hour']})
assert_frame_equal(df, df2)
@@ -112,5 +113,5 @@ def test_iterator(self):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/io/tests/test_parsers.py b/pandas/io/tests/test_parsers.py
index 73fd10c21c33d..297b8e291c681 100644
--- a/pandas/io/tests/test_parsers.py
+++ b/pandas/io/tests/test_parsers.py
@@ -71,35 +71,35 @@ def test_empty_string(self):
g,7,seven
"""
df = self.read_csv(StringIO(data))
- xp = DataFrame({'One' : ['a', 'b', np.nan, 'd', 'e', np.nan, 'g'],
- 'Two' : [1,2,3,4,5,6,7],
- 'Three' : ['one', 'two', 'three', np.nan, 'five',
- np.nan, 'seven']})
+ xp = DataFrame({'One': ['a', 'b', np.nan, 'd', 'e', np.nan, 'g'],
+ 'Two': [1, 2, 3, 4, 5, 6, 7],
+ 'Three': ['one', 'two', 'three', np.nan, 'five',
+ np.nan, 'seven']})
tm.assert_frame_equal(xp.reindex(columns=df.columns), df)
df = self.read_csv(StringIO(data), na_values={'One': [], 'Three': []},
- keep_default_na=False)
- xp = DataFrame({'One' : ['a', 'b', '', 'd', 'e', 'nan', 'g'],
- 'Two' : [1,2,3,4,5,6,7],
- 'Three' : ['one', 'two', 'three', 'nan', 'five',
- '', 'seven']})
+ keep_default_na=False)
+ xp = DataFrame({'One': ['a', 'b', '', 'd', 'e', 'nan', 'g'],
+ 'Two': [1, 2, 3, 4, 5, 6, 7],
+ 'Three': ['one', 'two', 'three', 'nan', 'five',
+ '', 'seven']})
tm.assert_frame_equal(xp.reindex(columns=df.columns), df)
- df = self.read_csv(StringIO(data), na_values=['a'], keep_default_na=False)
- xp = DataFrame({'One' : [np.nan, 'b', '', 'd', 'e', 'nan', 'g'],
- 'Two' : [1, 2, 3, 4, 5, 6, 7],
- 'Three' : ['one', 'two', 'three', 'nan', 'five', '',
- 'seven']})
+ df = self.read_csv(
+ StringIO(data), na_values=['a'], keep_default_na=False)
+ xp = DataFrame({'One': [np.nan, 'b', '', 'd', 'e', 'nan', 'g'],
+ 'Two': [1, 2, 3, 4, 5, 6, 7],
+ 'Three': ['one', 'two', 'three', 'nan', 'five', '',
+ 'seven']})
tm.assert_frame_equal(xp.reindex(columns=df.columns), df)
df = self.read_csv(StringIO(data), na_values={'One': [], 'Three': []})
- xp = DataFrame({'One' : ['a', 'b', np.nan, 'd', 'e', np.nan, 'g'],
- 'Two' : [1,2,3,4,5,6,7],
- 'Three' : ['one', 'two', 'three', np.nan, 'five',
- np.nan, 'seven']})
+ xp = DataFrame({'One': ['a', 'b', np.nan, 'd', 'e', np.nan, 'g'],
+ 'Two': [1, 2, 3, 4, 5, 6, 7],
+ 'Three': ['one', 'two', 'three', np.nan, 'five',
+ np.nan, 'seven']})
tm.assert_frame_equal(xp.reindex(columns=df.columns), df)
-
def test_read_csv(self):
if not py3compat.PY3:
if 'win' in sys.platform:
@@ -150,13 +150,12 @@ def test_squeeze(self):
b,2
c,3
"""
- expected = Series([1,2,3], ['a', 'b', 'c'])
+ expected = Series([1, 2, 3], ['a', 'b', 'c'])
result = self.read_table(StringIO(data), sep=',', index_col=0,
- header=None, squeeze=True)
+ header=None, squeeze=True)
self.assert_(isinstance(result, Series))
assert_series_equal(result, expected)
-
def test_inf_parsing(self):
data = """\
,A
@@ -175,14 +174,15 @@ def test_multiple_date_col(self):
KORD,19990127, 22:00:00, 21:56:00, -0.5900, 1.7100, 5.1000, 0.0000, 290.0000
KORD,19990127, 23:00:00, 22:56:00, -0.5900, 1.7100, 4.6000, 0.0000, 280.0000
"""
+
def func(*date_cols):
return lib.try_parse_dates(parsers._concat_date_cols(date_cols))
df = self.read_csv(StringIO(data), header=None,
date_parser=func,
prefix='X',
- parse_dates={'nominal' : [1, 2],
- 'actual' : [1,3]})
+ parse_dates={'nominal': [1, 2],
+ 'actual': [1, 3]})
self.assert_('nominal' in df)
self.assert_('actual' in df)
self.assert_('X1' not in df)
@@ -194,8 +194,8 @@ def func(*date_cols):
df = self.read_csv(StringIO(data), header=None,
date_parser=func,
- parse_dates={'nominal' : [1, 2],
- 'actual' : [1,3]},
+ parse_dates={'nominal': [1, 2],
+ 'actual': [1, 3]},
keep_date_col=True)
self.assert_('nominal' in df)
self.assert_('actual' in df)
@@ -214,7 +214,7 @@ def func(*date_cols):
"""
df = read_csv(StringIO(data), header=None,
prefix='X',
- parse_dates=[[1, 2], [1,3]])
+ parse_dates=[[1, 2], [1, 3]])
self.assert_('X1_X2' in df)
self.assert_('X1_X3' in df)
@@ -226,7 +226,7 @@ def func(*date_cols):
self.assert_(df.ix[0, 'X1_X2'] == d)
df = read_csv(StringIO(data), header=None,
- parse_dates=[[1, 2], [1,3]], keep_date_col=True)
+ parse_dates=[[1, 2], [1, 3]], keep_date_col=True)
self.assert_('1_2' in df)
self.assert_('1_3' in df)
@@ -247,12 +247,12 @@ def func(*date_cols):
self.assert_(df.index[0] == d)
def test_multiple_date_cols_int_cast(self):
- data = ("KORD,19990127, 19:00:00, 18:56:00, 0.8100\n"
- "KORD,19990127, 20:00:00, 19:56:00, 0.0100\n"
- "KORD,19990127, 21:00:00, 20:56:00, -0.5900\n"
- "KORD,19990127, 21:00:00, 21:18:00, -0.9900\n"
- "KORD,19990127, 22:00:00, 21:56:00, -0.5900\n"
- "KORD,19990127, 23:00:00, 22:56:00, -0.5900")
+ data = ("KORD,19990127, 19:00:00, 18:56:00, 0.8100\n"
+ "KORD,19990127, 20:00:00, 19:56:00, 0.0100\n"
+ "KORD,19990127, 21:00:00, 20:56:00, -0.5900\n"
+ "KORD,19990127, 21:00:00, 21:18:00, -0.9900\n"
+ "KORD,19990127, 22:00:00, 21:56:00, -0.5900\n"
+ "KORD,19990127, 23:00:00, 22:56:00, -0.5900")
date_spec = {'nominal': [1, 2], 'actual': [1, 3]}
import pandas.io.date_converters as conv
@@ -301,7 +301,7 @@ def test_multiple_date_cols_with_header(self):
def test_multiple_date_col_name_collision(self):
self.assertRaises(ValueError, self.read_csv, StringIO(self.ts_data),
- parse_dates={'ID' : [1, 2]})
+ parse_dates={'ID': [1, 2]})
data = """\
date_NominalTime,date,NominalTime,ActualTime,TDew,TAir,Windspeed,Precip,WindDir
@@ -326,7 +326,7 @@ def test_index_col_named(self):
h = "ID,date,NominalTime,ActualTime,TDew,TAir,Windspeed,Precip,WindDir\n"
data = h + no_header
- #import pdb; pdb.set_trace()
+ # import pdb; pdb.set_trace()
rs = self.read_csv(StringIO(data), index_col='ID')
xp = self.read_csv(StringIO(data), header=0).set_index('ID')
tm.assert_frame_equal(rs, xp)
@@ -340,8 +340,8 @@ def test_index_col_named(self):
9,10,11,12,foo
"""
names = ['a', 'b', 'c', 'd', 'message']
- xp = DataFrame({'a' : [1, 5, 9], 'b' : [2, 6, 10], 'c' : [3, 7, 11],
- 'd' : [4, 8, 12]},
+ xp = DataFrame({'a': [1, 5, 9], 'b': [2, 6, 10], 'c': [3, 7, 11],
+ 'd': [4, 8, 12]},
index=Index(['hello', 'world', 'foo'], name='message'))
rs = self.read_csv(StringIO(data), names=names, index_col=['message'])
tm.assert_frame_equal(xp, rs)
@@ -352,13 +352,13 @@ def test_index_col_named(self):
self.assert_(xp.index.name == rs.index.name)
def test_converter_index_col_bug(self):
- #1835
+ # 1835
data = "A;B\n1;2\n3;4"
rs = self.read_csv(StringIO(data), sep=';', index_col='A',
- converters={'A' : lambda x: x})
+ converters={'A': lambda x: x})
- xp = DataFrame({'B' : [2, 4]}, index=Index([1, 3], name='A'))
+ xp = DataFrame({'B': [2, 4]}, index=Index([1, 3], name='A'))
tm.assert_frame_equal(rs, xp)
self.assert_(rs.index.name == xp.index.name)
@@ -376,12 +376,13 @@ def test_malformed(self):
"""
try:
- df = self.read_table(StringIO(data), sep=',', header=1, comment='#')
+ df = self.read_table(
+ StringIO(data), sep=',', header=1, comment='#')
self.assert_(False)
except Exception, inst:
self.assert_('Expected 3 fields in line 4, saw 5' in str(inst))
- #skip_footer
+ # skip_footer
data = """ignore
A,B,C
1,2,3 # comment
@@ -391,8 +392,9 @@ def test_malformed(self):
"""
try:
- df = self.read_table(StringIO(data), sep=',', header=1, comment='#',
- skip_footer=1)
+ df = self.read_table(
+ StringIO(data), sep=',', header=1, comment='#',
+ skip_footer=1)
self.assert_(False)
except Exception, inst:
self.assert_('Expected 3 fields in line 4, saw 5' in str(inst))
@@ -408,14 +410,13 @@ def test_malformed(self):
"""
try:
it = self.read_table(StringIO(data), sep=',',
- header=1, comment='#', iterator=True, chunksize=1,
- skiprows=[2])
+ header=1, comment='#', iterator=True, chunksize=1,
+ skiprows=[2])
df = it.read(5)
self.assert_(False)
except Exception, inst:
self.assert_('Expected 3 fields in line 6, saw 5' in str(inst))
-
# middle chunk
data = """ignore
A,B,C
@@ -433,7 +434,6 @@ def test_malformed(self):
except Exception, inst:
self.assert_('Expected 3 fields in line 6, saw 5' in str(inst))
-
# last chunk
data = """ignore
A,B,C
@@ -445,8 +445,8 @@ def test_malformed(self):
"""
try:
it = self.read_table(StringIO(data), sep=',',
- header=1, comment='#', iterator=True, chunksize=1,
- skiprows=[2])
+ header=1, comment='#', iterator=True, chunksize=1,
+ skiprows=[2])
df = it.read(1)
it.read()
self.assert_(False)
@@ -482,11 +482,11 @@ def test_custom_na_values(self):
assert_almost_equal(df.values, expected)
df2 = self.read_table(StringIO(data), sep=',', na_values=['baz'],
- skiprows=[1])
+ skiprows=[1])
assert_almost_equal(df2.values, expected)
df3 = self.read_table(StringIO(data), sep=',', na_values='baz',
- skiprows=[1])
+ skiprows=[1])
assert_almost_equal(df3.values, expected)
def test_skiprows_bug(self):
@@ -507,7 +507,7 @@ def test_skiprows_bug(self):
data2 = self.read_csv(StringIO(text), skiprows=6, header=None,
index_col=0, parse_dates=True)
- expected = DataFrame(np.arange(1., 10.).reshape((3,3)),
+ expected = DataFrame(np.arange(1., 10.).reshape((3, 3)),
columns=[1, 2, 3],
index=[datetime(2000, 1, 1), datetime(2000, 1, 2),
datetime(2000, 1, 3)])
@@ -533,9 +533,9 @@ def test_unnamed_columns(self):
6,7,8,9,10
11,12,13,14,15
"""
- expected = [[1,2,3,4,5.],
- [6,7,8,9,10],
- [11,12,13,14,15]]
+ expected = [[1, 2, 3, 4, 5.],
+ [6, 7, 8, 9, 10],
+ [11, 12, 13, 14, 15]]
df = self.read_table(StringIO(data), sep=',')
assert_almost_equal(df.values, expected)
self.assert_(np.array_equal(df.columns,
@@ -594,7 +594,8 @@ def test_parse_dates_implicit_first_col(self):
"""
df = self.read_csv(StringIO(data), parse_dates=True)
expected = self.read_csv(StringIO(data), index_col=0, parse_dates=True)
- self.assert_(isinstance(df.index[0], (datetime, np.datetime64, Timestamp)))
+ self.assert_(
+ isinstance(df.index[0], (datetime, np.datetime64, Timestamp)))
tm.assert_frame_equal(df, expected)
def test_parse_dates_string(self):
@@ -603,7 +604,8 @@ def test_parse_dates_string(self):
20090102,b,3,4
20090103,c,4,5
"""
- rs = self.read_csv(StringIO(data), index_col='date', parse_dates='date')
+ rs = self.read_csv(
+ StringIO(data), index_col='date', parse_dates='date')
idx = date_range('1/1/2009', periods=3).asobject
idx.name = 'date'
xp = DataFrame({'A': ['a', 'b', 'c'],
@@ -619,18 +621,18 @@ def test_yy_format(self):
"""
rs = self.read_csv(StringIO(data), index_col=0,
parse_dates=[['date', 'time']])
- idx = DatetimeIndex([datetime(2009,1,31,0,10,0),
- datetime(2009,2,28,10,20,0),
- datetime(2009,3,31,8,30,0)]).asobject
+ idx = DatetimeIndex([datetime(2009, 1, 31, 0, 10, 0),
+ datetime(2009, 2, 28, 10, 20, 0),
+ datetime(2009, 3, 31, 8, 30, 0)]).asobject
idx.name = 'date'
xp = DataFrame({'B': [1, 3, 5], 'C': [2, 4, 6]}, idx)
tm.assert_frame_equal(rs, xp)
rs = self.read_csv(StringIO(data), index_col=0,
parse_dates=[[0, 1]])
- idx = DatetimeIndex([datetime(2009,1,31,0,10,0),
- datetime(2009,2,28,10,20,0),
- datetime(2009,3,31,8,30,0)]).asobject
+ idx = DatetimeIndex([datetime(2009, 1, 31, 0, 10, 0),
+ datetime(2009, 2, 28, 10, 20, 0),
+ datetime(2009, 3, 31, 8, 30, 0)]).asobject
idx.name = 'date'
xp = DataFrame({'B': [1, 3, 5], 'C': [2, 4, 6]}, idx)
tm.assert_frame_equal(rs, xp)
@@ -653,11 +655,11 @@ def test_parse_dates_column_list(self):
expected['aux_date'] = map(Timestamp, expected['aux_date'])
self.assert_(isinstance(expected['aux_date'][0], datetime))
- df = self.read_csv(StringIO(data), sep=";", index_col = range(4),
+ df = self.read_csv(StringIO(data), sep=";", index_col=range(4),
parse_dates=[0, 5], dayfirst=True)
tm.assert_frame_equal(df, expected)
- df = self.read_csv(StringIO(data), sep=";", index_col = range(4),
+ df = self.read_csv(StringIO(data), sep=";", index_col=range(4),
parse_dates=['date', 'aux_date'], dayfirst=True)
tm.assert_frame_equal(df, expected)
@@ -672,9 +674,9 @@ def test_no_header(self):
names = ['foo', 'bar', 'baz', 'quux', 'panda']
df2 = self.read_table(StringIO(data), sep=',', names=names)
- expected = [[1,2,3,4,5.],
- [6,7,8,9,10],
- [11,12,13,14,15]]
+ expected = [[1, 2, 3, 4, 5.],
+ [6, 7, 8, 9, 10],
+ [11, 12, 13, 14, 15]]
assert_almost_equal(df.values, expected)
assert_almost_equal(df.values, df2.values)
@@ -694,9 +696,9 @@ def test_header_with_index_col(self):
self.assertEqual(names, ['A', 'B', 'C'])
- values = [[1,2,3],[4,5,6],[7,8,9]]
- expected = DataFrame(values, index=['foo','bar','baz'],
- columns=['A','B','C'])
+ values = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
+ expected = DataFrame(values, index=['foo', 'bar', 'baz'],
+ columns=['A', 'B', 'C'])
tm.assert_frame_equal(df, expected)
def test_read_csv_dataframe(self):
@@ -746,7 +748,7 @@ def test_read_table_duplicate_index(self):
result = self.read_csv(StringIO(data), index_col=0)
expected = self.read_csv(StringIO(data)).set_index('index',
- verify_integrity=False)
+ verify_integrity=False)
tm.assert_frame_equal(result, expected)
def test_read_table_duplicate_index_implicit(self):
@@ -834,7 +836,8 @@ def test_read_chunksize(self):
tm.assert_frame_equal(chunks[2], df[4:])
def test_read_chunksize_named(self):
- reader = self.read_csv(StringIO(self.data1), index_col='index', chunksize=2)
+ reader = self.read_csv(
+ StringIO(self.data1), index_col='index', chunksize=2)
df = self.read_csv(StringIO(self.data1), index_col='index')
chunks = list(reader)
@@ -845,11 +848,12 @@ def test_read_chunksize_named(self):
def test_read_text_list(self):
data = """A,B,C\nfoo,1,2,3\nbar,4,5,6"""
- as_list = [['A','B','C'],['foo','1','2','3'],['bar','4','5','6']]
+ as_list = [['A', 'B', 'C'], ['foo', '1', '2', '3'], ['bar',
+ '4', '5', '6']]
df = self.read_csv(StringIO(data), index_col=0)
parser = TextParser(as_list, index_col=0, chunksize=2)
- chunk = parser.read(None)
+ chunk = parser.read(None)
tm.assert_frame_equal(chunk, df)
@@ -886,7 +890,7 @@ def test_iterator(self):
self.assertRaises(ValueError, reader.read, 3)
treader = self.read_table(StringIO(self.data1), sep=',', index_col=0,
- iterator=True)
+ iterator=True)
self.assert_(isinstance(treader, TextFileReader))
def test_header_not_first_line(self):
@@ -1017,7 +1021,6 @@ def test_skip_footer(self):
tm.assert_frame_equal(result, expected)
-
def test_no_unnamed_index(self):
data = """ id c0 c1 c2
0 1 0 a b
@@ -1035,8 +1038,8 @@ def test_converters(self):
"""
from dateutil import parser
- result = self.read_csv(StringIO(data), converters={'D' : parser.parse})
- result2 = self.read_csv(StringIO(data), converters={3 : parser.parse})
+ result = self.read_csv(StringIO(data), converters={'D': parser.parse})
+ result2 = self.read_csv(StringIO(data), converters={3: parser.parse})
expected = self.read_csv(StringIO(data))
expected['D'] = expected['D'].map(parser.parse)
@@ -1047,13 +1050,13 @@ def test_converters(self):
# produce integer
converter = lambda x: int(x.split('/')[2])
- result = self.read_csv(StringIO(data), converters={'D' : converter})
+ result = self.read_csv(StringIO(data), converters={'D': converter})
expected = self.read_csv(StringIO(data))
expected['D'] = expected['D'].map(converter)
tm.assert_frame_equal(result, expected)
def test_converters_no_implicit_conv(self):
- #GH2184
+ # GH2184
data = """000102,1.2,A\n001245,2,B"""
f = lambda x: x.strip()
converter = {0: f}
@@ -1065,9 +1068,9 @@ def test_converters_euro_decimal_format(self):
1;1521,1541;187101,9543;ABC;poi;4,738797819
2;121,12;14897,76;DEF;uyt;0,377320872
3;878,158;108013,434;GHI;rez;2,735694704"""
- f = lambda x : float(x.replace(",", "."))
- converter = {'Number1':f,'Number2':f, 'Number3':f}
- df2 = self.read_csv(StringIO(data), sep=';',converters=converter)
+ f = lambda x: float(x.replace(",", "."))
+ converter = {'Number1': f, 'Number2': f, 'Number3': f}
+ df2 = self.read_csv(StringIO(data), sep=';', converters=converter)
self.assert_(df2['Number1'].dtype == float)
self.assert_(df2['Number2'].dtype == float)
self.assert_(df2['Number3'].dtype == float)
@@ -1078,9 +1081,9 @@ def test_converter_return_string_bug(self):
1;1521,1541;187101,9543;ABC;poi;4,738797819
2;121,12;14897,76;DEF;uyt;0,377320872
3;878,158;108013,434;GHI;rez;2,735694704"""
- f = lambda x : float(x.replace(",", "."))
- converter = {'Number1':f,'Number2':f, 'Number3':f}
- df2 = self.read_csv(StringIO(data), sep=';',converters=converter)
+ f = lambda x: float(x.replace(",", "."))
+ converter = {'Number1': f, 'Number2': f, 'Number3': f}
+ df2 = self.read_csv(StringIO(data), sep=';', converters=converter)
self.assert_(df2['Number1'].dtype == float)
def test_read_table_buglet_4x_multiindex(self):
@@ -1101,8 +1104,8 @@ def test_read_csv_parse_simple_list(self):
foo
bar"""
df = read_csv(StringIO(text), header=None)
- expected = DataFrame({0 : ['foo', 'bar baz', 'qux foo',
- 'foo', 'bar']})
+ expected = DataFrame({0: ['foo', 'bar baz', 'qux foo',
+ 'foo', 'bar']})
tm.assert_frame_equal(df, expected)
def test_parse_dates_custom_euroformat(self):
@@ -1120,7 +1123,7 @@ def test_parse_dates_custom_euroformat(self):
exp_index = Index([datetime(2010, 1, 31), datetime(2010, 2, 1),
datetime(2010, 2, 2)], name='time')
- expected = DataFrame({'Q' : [1, 1, 1], 'NTU' : [2, np.nan, 2]},
+ expected = DataFrame({'Q': [1, 1, 1], 'NTU': [2, np.nan, 2]},
index=exp_index, columns=['Q', 'NTU'])
tm.assert_frame_equal(df, expected)
@@ -1139,7 +1142,7 @@ def test_na_value_dict(self):
bar,foo,foo"""
df = self.read_csv(StringIO(data),
- na_values={'A': ['foo'], 'B': ['bar']})
+ na_values={'A': ['foo'], 'B': ['bar']})
expected = DataFrame({'A': [np.nan, 'bar', np.nan, 'bar'],
'B': [np.nan, 'foo', np.nan, 'foo'],
'C': [np.nan, 'foo', np.nan, 'foo']})
@@ -1177,7 +1180,7 @@ def test_url(self):
localtable = os.path.join(dirpath, 'salary.table')
local_table = self.read_table(localtable)
tm.assert_frame_equal(url_table, local_table)
- #TODO: ftp testing
+ # TODO: ftp testing
except urllib2.URLError:
try:
@@ -1199,7 +1202,7 @@ def test_file(self):
local_table = self.read_table(localtable)
try:
- url_table = self.read_table('file://localhost/'+localtable)
+ url_table = self.read_table('file://localhost/' + localtable)
except urllib2.URLError:
# fails on some systems
raise nose.SkipTest
@@ -1217,7 +1220,7 @@ def test_parse_tz_aware(self):
self.assert_(stamp.minute == 39)
try:
self.assert_(result.index.tz is pytz.utc)
- except AssertionError: # hello Yaroslav
+ except AssertionError: # hello Yaroslav
arr = result.index.to_pydatetime()
result = tools.to_datetime(arr, utc=True)[0]
self.assert_(stamp.minute == result.minute)
@@ -1246,8 +1249,10 @@ def test_multiple_date_cols_index(self):
tm.assert_frame_equal(df3, df)
def test_multiple_date_cols_chunked(self):
- df = self.read_csv(StringIO(self.ts_data), parse_dates={'nominal': [1,2]}, index_col='nominal')
- reader = self.read_csv(StringIO(self.ts_data), parse_dates={'nominal': [1,2]}, index_col='nominal', chunksize=2)
+ df = self.read_csv(StringIO(self.ts_data), parse_dates={
+ 'nominal': [1, 2]}, index_col='nominal')
+ reader = self.read_csv(StringIO(self.ts_data), parse_dates={'nominal':
+ [1, 2]}, index_col='nominal', chunksize=2)
chunks = list(reader)
@@ -1259,20 +1264,20 @@ def test_multiple_date_cols_chunked(self):
def test_multiple_date_col_named_components(self):
xp = self.read_csv(StringIO(self.ts_data),
- parse_dates={'nominal': [1,2]},
+ parse_dates={'nominal': [1, 2]},
index_col='nominal')
- colspec = {'nominal' : ['date', 'nominalTime']}
+ colspec = {'nominal': ['date', 'nominalTime']}
df = self.read_csv(StringIO(self.ts_data), parse_dates=colspec,
index_col='nominal')
tm.assert_frame_equal(df, xp)
def test_multiple_date_col_multiple_index(self):
df = self.read_csv(StringIO(self.ts_data),
- parse_dates={'nominal' : [1, 2]},
+ parse_dates={'nominal': [1, 2]},
index_col=['nominal', 'ID'])
xp = self.read_csv(StringIO(self.ts_data),
- parse_dates={'nominal' : [1, 2]})
+ parse_dates={'nominal': [1, 2]})
tm.assert_frame_equal(xp.set_index(['nominal', 'ID']), df)
@@ -1299,7 +1304,7 @@ def test_bool_na_values(self):
result = self.read_csv(StringIO(data))
expected = DataFrame({'A': np.array([True, nan, False], dtype=object),
'B': np.array([False, True, nan], dtype=object),
- 'C': [True, False, True]})
+ 'C': [True, False, True]})
tm.assert_frame_equal(result, expected)
@@ -1316,7 +1321,6 @@ def test_missing_trailing_delimiters(self):
result = self.read_csv(StringIO(data))
self.assertTrue(result['D'].isnull()[1:].all())
-
def test_skipinitialspace(self):
s = ('"09-Apr-2012", "01:10:18.300", 2456026.548822908, 12849, '
'1.00361, 1.12551, 330.65659, 0355626618.16711, 73.48821, '
@@ -1357,11 +1361,11 @@ def test_utf16_bom_skiprows(self):
if py3compat.PY3:
# somewhat False since the code never sees bytes
from io import TextIOWrapper
- s =TextIOWrapper(s, encoding='utf-8')
+ s = TextIOWrapper(s, encoding='utf-8')
result = self.read_csv(path, encoding=enc, skiprows=2,
sep=sep)
- expected = self.read_csv(s,encoding='utf-8', skiprows=2,
+ expected = self.read_csv(s, encoding='utf-8', skiprows=2,
sep=sep)
tm.assert_frame_equal(result, expected)
@@ -1383,7 +1387,6 @@ def test_utf16_example(self):
result = self.read_table(buf, encoding='utf-16')
self.assertEquals(len(result), 50)
-
def test_converters_corner_with_nas(self):
# skip aberration observed on Win64 Python 3.2.2
if hash(np.int64(-1)) != -2:
@@ -1397,48 +1400,51 @@ def test_converters_corner_with_nas(self):
4,6-12,2"""
def convert_days(x):
- x = x.strip()
- if not x: return np.nan
+ x = x.strip()
+ if not x:
+ return np.nan
- is_plus = x.endswith('+')
- if is_plus:
- x = int(x[:-1]) + 1
- else:
- x = int(x)
- return x
+ is_plus = x.endswith('+')
+ if is_plus:
+ x = int(x[:-1]) + 1
+ else:
+ x = int(x)
+ return x
def convert_days_sentinel(x):
- x = x.strip()
- if not x: return np.nan
+ x = x.strip()
+ if not x:
+ return np.nan
- is_plus = x.endswith('+')
- if is_plus:
- x = int(x[:-1]) + 1
- else:
- x = int(x)
- return x
+ is_plus = x.endswith('+')
+ if is_plus:
+ x = int(x[:-1]) + 1
+ else:
+ x = int(x)
+ return x
def convert_score(x):
- x = x.strip()
- if not x: return np.nan
- if x.find('-')>0:
- valmin, valmax = map(int, x.split('-'))
- val = 0.5*(valmin + valmax)
- else:
- val = float(x)
+ x = x.strip()
+ if not x:
+ return np.nan
+ if x.find('-') > 0:
+ valmin, valmax = map(int, x.split('-'))
+ val = 0.5 * (valmin + valmax)
+ else:
+ val = float(x)
- return val
+ return val
fh = StringIO.StringIO(csv)
- result = self.read_csv(fh, converters={'score':convert_score,
- 'days':convert_days},
- na_values=['',None])
+ result = self.read_csv(fh, converters={'score': convert_score,
+ 'days': convert_days},
+ na_values=['', None])
self.assert_(pd.isnull(result['days'][1]))
fh = StringIO.StringIO(csv)
- result2 = self.read_csv(fh, converters={'score':convert_score,
- 'days':convert_days_sentinel},
- na_values=['',None])
+ result2 = self.read_csv(fh, converters={'score': convert_score,
+ 'days': convert_days_sentinel},
+ na_values=['', None])
tm.assert_frame_equal(result, result2)
def test_unicode_encoding(self):
@@ -1467,7 +1473,8 @@ def test_trailing_delimiters(self):
tm.assert_frame_equal(result, expected)
def test_escapechar(self):
- # http://stackoverflow.com/questions/13824840/feature-request-for-pandas-read-csv
+ # http://stackoverflow.com/questions/13824840/feature-request-for-
+ # pandas-read-csv
data = '''SEARCH_TERM,ACTUAL_URL
"bra tv bord","http://www.ikea.com/se/sv/catalog/categories/departments/living_room/10475/?se%7cps%7cnonbranded%7cvardagsrum%7cgoogle%7ctv_bord"
"tv p\xc3\xa5 hjul","http://www.ikea.com/se/sv/catalog/categories/departments/living_room/10475/?se%7cps%7cnonbranded%7cvardagsrum%7cgoogle%7ctv_bord"
@@ -1540,7 +1547,6 @@ def test_sniff_delimiter(self):
sep=None, skiprows=2)
tm.assert_frame_equal(data, data3)
-
text = u"""ignore this
ignore this too
index|A|B|C
@@ -1553,11 +1559,10 @@ def test_sniff_delimiter(self):
if py3compat.PY3:
# somewhat False since the code never sees bytes
from io import TextIOWrapper
- s =TextIOWrapper(s, encoding='utf-8')
-
+ s = TextIOWrapper(s, encoding='utf-8')
data4 = self.read_csv(s, index_col=0, sep=None, skiprows=2,
- encoding='utf-8')
+ encoding='utf-8')
tm.assert_frame_equal(data, data4)
def test_regex_separator(self):
@@ -1568,7 +1573,7 @@ def test_regex_separator(self):
"""
df = self.read_table(StringIO(data), sep='\s+')
expected = self.read_csv(StringIO(re.sub('[ ]+', ',', data)),
- index_col=0)
+ index_col=0)
self.assert_(expected.index.name is None)
tm.assert_frame_equal(df, expected)
@@ -1579,7 +1584,7 @@ def test_1000_fwf(self):
"""
expected = [[1, 2334., 5],
[10, 13, 10]]
- df = read_fwf(StringIO(data), colspecs=[(0,3),(3,11),(12,16)],
+ df = read_fwf(StringIO(data), colspecs=[(0, 3), (3, 11), (12, 16)],
thousands=',')
assert_almost_equal(df.values, expected)
@@ -1590,7 +1595,7 @@ def test_comment_fwf(self):
"""
expected = [[1, 2., 4],
[5, np.nan, 10.]]
- df = read_fwf(StringIO(data), colspecs=[(0,3),(4,9),(9,25)],
+ df = read_fwf(StringIO(data), colspecs=[(0, 3), (4, 9), (9, 25)],
comment='#')
assert_almost_equal(df.values, expected)
@@ -1635,13 +1640,13 @@ def test_fwf(self):
201161~~~~413.836124~~~184.375703~~~11916.8
201162~~~~502.953953~~~173.237159~~~12468.3
"""
- df = read_fwf(StringIO(data3), colspecs=colspecs, delimiter='~', header=None)
+ df = read_fwf(
+ StringIO(data3), colspecs=colspecs, delimiter='~', header=None)
tm.assert_frame_equal(df, expected)
self.assertRaises(ValueError, read_fwf, StringIO(data3),
colspecs=colspecs, widths=[6, 10, 10, 7])
-
def test_verbose_import(self):
text = """a,b,c,d
one,1,2,3
@@ -1750,6 +1755,7 @@ def test_parse_dates_empty_string(self):
result = pd.read_csv(s, parse_dates=["Date"], na_filter=False)
self.assertTrue(result['Date'].isnull()[1])
+
class TestCParserLowMemory(ParserTests, unittest.TestCase):
def read_csv(self, *args, **kwds):
@@ -1849,7 +1855,8 @@ def test_pure_python_failover(self):
def test_decompression(self):
try:
- import gzip, bz2
+ import gzip
+ import bz2
except ImportError:
raise nose.SkipTest
@@ -1894,7 +1901,8 @@ def test_decompression(self):
def test_decompression_regex_sep(self):
try:
- import gzip, bz2
+ import gzip
+ import bz2
except ImportError:
raise nose.SkipTest
@@ -2066,5 +2074,5 @@ def curpath():
if __name__ == '__main__':
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 86b183c7bfc76..1c8ec54b65487 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -23,10 +23,11 @@
from distutils.version import LooseVersion
_default_compressor = LooseVersion(tables.__version__) >= '2.2' \
- and 'blosc' or 'zlib'
+ and 'blosc' or 'zlib'
_multiprocess_can_split_ = False
+
class TestHDFStore(unittest.TestCase):
path = '__test__.h5'
scratchpath = '__scratch__.h5'
@@ -61,7 +62,8 @@ def test_keys(self):
self.store['d'] = tm.makePanel()
self.store['foo/bar'] = tm.makePanel()
self.assertEquals(len(self.store), 5)
- self.assert_(set(self.store.keys()) == set(['/a', '/b', '/c', '/d', '/foo/bar']))
+ self.assert_(set(
+ self.store.keys()) == set(['/a', '/b', '/c', '/d', '/foo/bar']))
def test_repr(self):
repr(self.store)
@@ -101,14 +103,15 @@ def test_versioning(self):
self.store.remove('df2')
self.store.append('df2', df)
- # this is an error because its table_type is appendable, but no version info
+ # this is an error because its table_type is appendable, but no version
+ # info
self.store.get_node('df2')._v_attrs.pandas_version = None
- self.assertRaises(Exception, self.store.select,'df2')
+ self.assertRaises(Exception, self.store.select, 'df2')
def test_meta(self):
raise nose.SkipTest('no meta')
- meta = { 'foo' : [ 'I love pandas ' ] }
+ meta = {'foo': ['I love pandas ']}
s = tm.makeTimeSeries()
s.meta = meta
self.store['a'] = s
@@ -124,12 +127,12 @@ def test_meta(self):
self.store.append('df1', df[:10])
self.store.append('df1', df[10:])
results = self.store['df1']
- #self.assert_(getattr(results,'meta',None) == meta)
+ # self.assert_(getattr(results,'meta',None) == meta)
# no meta
df = tm.makeDataFrame()
self.store['b'] = df
- self.assert_(hasattr(self.store['b'],'meta') == False)
+ self.assert_(hasattr(self.store['b'], 'meta') is False)
def test_reopen_handle(self):
self.store['a'] = tm.makeTimeSeries()
@@ -164,11 +167,13 @@ def test_put(self):
self.store.put('c', df[:10], table=True)
# not OK, not a table
- self.assertRaises(ValueError, self.store.put, 'b', df[10:], append=True)
+ self.assertRaises(
+ ValueError, self.store.put, 'b', df[10:], append=True)
# node does not currently exist, test _is_table_type returns False in
# this case
- self.assertRaises(ValueError, self.store.put, 'f', df[10:], append=True)
+ self.assertRaises(
+ ValueError, self.store.put, 'f', df[10:], append=True)
# OK
self.store.put('c', df[10:], append=True)
@@ -179,9 +184,10 @@ def test_put(self):
def test_put_string_index(self):
- index = Index([ "I am a very long string index: %s" % i for i in range(20) ])
- s = Series(np.arange(20), index = index)
- df = DataFrame({ 'A' : s, 'B' : s })
+ index = Index(
+ ["I am a very long string index: %s" % i for i in range(20)])
+ s = Series(np.arange(20), index=index)
+ df = DataFrame({'A': s, 'B': s})
self.store['a'] = s
tm.assert_series_equal(self.store['a'], s)
@@ -190,16 +196,15 @@ def test_put_string_index(self):
tm.assert_frame_equal(self.store['b'], df)
# mixed length
- index = Index(['abcdefghijklmnopqrstuvwxyz1234567890'] + [ "I am a very long string index: %s" % i for i in range(20) ])
- s = Series(np.arange(21), index = index)
- df = DataFrame({ 'A' : s, 'B' : s })
+ index = Index(['abcdefghijklmnopqrstuvwxyz1234567890'] + ["I am a very long string index: %s" % i for i in range(20)])
+ s = Series(np.arange(21), index=index)
+ df = DataFrame({'A': s, 'B': s})
self.store['a'] = s
tm.assert_series_equal(self.store['a'], s)
self.store['b'] = df
tm.assert_frame_equal(self.store['b'], df)
-
def test_put_compression(self):
df = tm.makeTimeDataFrame()
@@ -251,25 +256,27 @@ def test_append(self):
self.store.append('/df3 foo', df[10:])
tm.assert_frame_equal(self.store['df3 foo'], df)
warnings.filterwarnings('always', category=tables.NaturalNameWarning)
-
+
# panel
wp = tm.makePanel()
self.store.remove('wp1')
- self.store.append('wp1', wp.ix[:,:10,:])
- self.store.append('wp1', wp.ix[:,10:,:])
+ self.store.append('wp1', wp.ix[:, :10, :])
+ self.store.append('wp1', wp.ix[:, 10:, :])
tm.assert_panel_equal(self.store['wp1'], wp)
# ndim
p4d = tm.makePanel4D()
self.store.remove('p4d')
- self.store.append('p4d', p4d.ix[:,:,:10,:])
- self.store.append('p4d', p4d.ix[:,:,10:,:])
+ self.store.append('p4d', p4d.ix[:, :, :10, :])
+ self.store.append('p4d', p4d.ix[:, :, 10:, :])
tm.assert_panel4d_equal(self.store['p4d'], p4d)
# test using axis labels
self.store.remove('p4d')
- self.store.append('p4d', p4d.ix[:,:,:10,:], axes=['items','major_axis','minor_axis'])
- self.store.append('p4d', p4d.ix[:,:,10:,:], axes=['items','major_axis','minor_axis'])
+ self.store.append('p4d', p4d.ix[:, :, :10, :], axes=[
+ 'items', 'major_axis', 'minor_axis'])
+ self.store.append('p4d', p4d.ix[:, :, 10:, :], axes=[
+ 'items', 'major_axis', 'minor_axis'])
tm.assert_panel4d_equal(self.store['p4d'], p4d)
    # test using different number of items on each axis
@@ -277,32 +284,33 @@ def test_append(self):
p4d2['l4'] = p4d['l1']
p4d2['l5'] = p4d['l1']
self.store.remove('p4d2')
- self.store.append('p4d2', p4d2, axes=['items','major_axis','minor_axis'])
+ self.store.append(
+ 'p4d2', p4d2, axes=['items', 'major_axis', 'minor_axis'])
tm.assert_panel4d_equal(self.store['p4d2'], p4d2)
    # test using different order of items on the non-index axes
self.store.remove('wp1')
- wp_append1 = wp.ix[:,:10,:]
+ wp_append1 = wp.ix[:, :10, :]
self.store.append('wp1', wp_append1)
- wp_append2 = wp.ix[:,10:,:].reindex(items = wp.items[::-1])
- self.store.append('wp1', wp_append2)
+ wp_append2 = wp.ix[:, 10:, :].reindex(items=wp.items[::-1])
+ self.store.append('wp1', wp_append2)
tm.assert_panel_equal(self.store['wp1'], wp)
-
+
    # dtype issues - mixed type in a single object column
- df = DataFrame(data=[[1,2],[0,1],[1,2],[0,0]])
+ df = DataFrame(data=[[1, 2], [0, 1], [1, 2], [0, 0]])
df['mixed_column'] = 'testing'
- df.ix[2,'mixed_column'] = np.nan
+ df.ix[2, 'mixed_column'] = np.nan
self.store.remove('df')
self.store.append('df', df)
- tm.assert_frame_equal(self.store['df'],df)
+ tm.assert_frame_equal(self.store['df'], df)
def test_append_frame_column_oriented(self):
# column oriented
df = tm.makeTimeDataFrame()
self.store.remove('df1')
- self.store.append('df1', df.ix[:,:2], axes = ['columns'])
- self.store.append('df1', df.ix[:,2:])
+ self.store.append('df1', df.ix[:, :2], axes=['columns'])
+ self.store.append('df1', df.ix[:, 2:])
tm.assert_frame_equal(self.store['df1'], df)
result = self.store.select('df1', 'columns=A')
@@ -310,11 +318,13 @@ def test_append_frame_column_oriented(self):
tm.assert_frame_equal(expected, result)
# this isn't supported
- self.assertRaises(Exception, self.store.select, 'df1', ('columns=A', Term('index','>',df.index[4])))
+ self.assertRaises(Exception, self.store.select, 'df1', (
+ 'columns=A', Term('index', '>', df.index[4])))
# selection on the non-indexable
- result = self.store.select('df1', ('columns=A', Term('index','=',df.index[0:4])))
- expected = df.reindex(columns=['A'],index=df.index[0:4])
+ result = self.store.select(
+ 'df1', ('columns=A', Term('index', '=', df.index[0:4])))
+ expected = df.reindex(columns=['A'], index=df.index[0:4])
tm.assert_frame_equal(expected, result)
def test_ndim_indexables(self):
@@ -323,84 +333,92 @@ def test_ndim_indexables(self):
p4d = tm.makePanel4D()
def check_indexers(key, indexers):
- for i,idx in enumerate(indexers):
- self.assert_(getattr(getattr(self.store.root,key).table.description,idx)._v_pos == i)
+ for i, idx in enumerate(indexers):
+ self.assert_(getattr(getattr(
+ self.store.root, key).table.description, idx)._v_pos == i)
# append then change (will take existing schema)
- indexers = ['items','major_axis','minor_axis']
-
+ indexers = ['items', 'major_axis', 'minor_axis']
+
self.store.remove('p4d')
- self.store.append('p4d', p4d.ix[:,:,:10,:], axes=indexers)
- self.store.append('p4d', p4d.ix[:,:,10:,:])
- tm.assert_panel4d_equal(self.store.select('p4d'),p4d)
- check_indexers('p4d',indexers)
+ self.store.append('p4d', p4d.ix[:, :, :10, :], axes=indexers)
+ self.store.append('p4d', p4d.ix[:, :, 10:, :])
+ tm.assert_panel4d_equal(self.store.select('p4d'), p4d)
+ check_indexers('p4d', indexers)
    # same as above, but try to append with different axes
self.store.remove('p4d')
- self.store.append('p4d', p4d.ix[:,:,:10,:], axes=indexers)
- self.store.append('p4d', p4d.ix[:,:,10:,:], axes=['labels','items','major_axis'])
- tm.assert_panel4d_equal(self.store.select('p4d'),p4d)
- check_indexers('p4d',indexers)
+ self.store.append('p4d', p4d.ix[:, :, :10, :], axes=indexers)
+ self.store.append('p4d', p4d.ix[:, :, 10:, :], axes=[
+ 'labels', 'items', 'major_axis'])
+ tm.assert_panel4d_equal(self.store.select('p4d'), p4d)
+ check_indexers('p4d', indexers)
# pass incorrect number of axes
self.store.remove('p4d')
- self.assertRaises(Exception, self.store.append, 'p4d', p4d.ix[:,:,:10,:], axes=['major_axis','minor_axis'])
+ self.assertRaises(Exception, self.store.append, 'p4d', p4d.ix[
+ :, :, :10, :], axes=['major_axis', 'minor_axis'])
# different than default indexables #1
- indexers = ['labels','major_axis','minor_axis']
+ indexers = ['labels', 'major_axis', 'minor_axis']
self.store.remove('p4d')
- self.store.append('p4d', p4d.ix[:,:,:10,:], axes=indexers)
- self.store.append('p4d', p4d.ix[:,:,10:,:])
+ self.store.append('p4d', p4d.ix[:, :, :10, :], axes=indexers)
+ self.store.append('p4d', p4d.ix[:, :, 10:, :])
tm.assert_panel4d_equal(self.store['p4d'], p4d)
- check_indexers('p4d',indexers)
-
+ check_indexers('p4d', indexers)
+
# different than default indexables #2
- indexers = ['major_axis','labels','minor_axis']
+ indexers = ['major_axis', 'labels', 'minor_axis']
self.store.remove('p4d')
- self.store.append('p4d', p4d.ix[:,:,:10,:], axes=indexers)
- self.store.append('p4d', p4d.ix[:,:,10:,:])
+ self.store.append('p4d', p4d.ix[:, :, :10, :], axes=indexers)
+ self.store.append('p4d', p4d.ix[:, :, 10:, :])
tm.assert_panel4d_equal(self.store['p4d'], p4d)
- check_indexers('p4d',indexers)
+ check_indexers('p4d', indexers)
# partial selection
- result = self.store.select('p4d',['labels=l1'])
- expected = p4d.reindex(labels = ['l1'])
+ result = self.store.select('p4d', ['labels=l1'])
+ expected = p4d.reindex(labels=['l1'])
tm.assert_panel4d_equal(result, expected)
# partial selection2
- result = self.store.select('p4d',[Term('labels=l1'), Term('items=ItemA'), Term('minor_axis=B')])
- expected = p4d.reindex(labels = ['l1'], items = ['ItemA'], minor_axis = ['B'])
+ result = self.store.select('p4d', [Term(
+ 'labels=l1'), Term('items=ItemA'), Term('minor_axis=B')])
+ expected = p4d.reindex(
+ labels=['l1'], items=['ItemA'], minor_axis=['B'])
tm.assert_panel4d_equal(result, expected)
    # non-existent partial selection
- result = self.store.select('p4d',[Term('labels=l1'), Term('items=Item1'), Term('minor_axis=B')])
- expected = p4d.reindex(labels = ['l1'], items = [], minor_axis = ['B'])
+ result = self.store.select('p4d', [Term(
+ 'labels=l1'), Term('items=Item1'), Term('minor_axis=B')])
+ expected = p4d.reindex(labels=['l1'], items=[], minor_axis=['B'])
tm.assert_panel4d_equal(result, expected)
def test_append_with_strings(self):
wp = tm.makePanel()
- wp2 = wp.rename_axis(dict([ (x,"%s_extra" % x) for x in wp.minor_axis ]), axis = 2)
+ wp2 = wp.rename_axis(
+ dict([(x, "%s_extra" % x) for x in wp.minor_axis]), axis=2)
- def check_col(key,name,size):
- self.assert_(getattr(self.store.get_table(key).table.description,name).itemsize == size)
+ def check_col(key, name, size):
+ self.assert_(getattr(self.store.get_table(
+ key).table.description, name).itemsize == size)
- self.store.append('s1', wp, min_itemsize = 20)
+ self.store.append('s1', wp, min_itemsize=20)
self.store.append('s1', wp2)
- expected = concat([ wp, wp2], axis = 2)
- expected = expected.reindex(minor_axis = sorted(expected.minor_axis))
+ expected = concat([wp, wp2], axis=2)
+ expected = expected.reindex(minor_axis=sorted(expected.minor_axis))
tm.assert_panel_equal(self.store['s1'], expected)
- check_col('s1','minor_axis',20)
+ check_col('s1', 'minor_axis', 20)
# test dict format
- self.store.append('s2', wp, min_itemsize = { 'minor_axis' : 20 })
+ self.store.append('s2', wp, min_itemsize={'minor_axis': 20})
self.store.append('s2', wp2)
- expected = concat([ wp, wp2], axis = 2)
- expected = expected.reindex(minor_axis = sorted(expected.minor_axis))
+ expected = concat([wp, wp2], axis=2)
+ expected = expected.reindex(minor_axis=sorted(expected.minor_axis))
tm.assert_panel_equal(self.store['s2'], expected)
- check_col('s2','minor_axis',20)
+ check_col('s2', 'minor_axis', 20)
# apply the wrong field (similar to #1)
- self.store.append('s3', wp, min_itemsize = { 'major_axis' : 20 })
+ self.store.append('s3', wp, min_itemsize={'major_axis': 20})
self.assertRaises(Exception, self.store.append, 's3')
# test truncation of bigger strings
@@ -408,99 +426,104 @@ def check_col(key,name,size):
self.assertRaises(Exception, self.store.append, 's4', wp2)
# avoid truncation on elements
- df = DataFrame([[123,'asdqwerty'], [345,'dggnhebbsdfbdfb']])
- self.store.append('df_big',df)
+ df = DataFrame([[123, 'asdqwerty'], [345, 'dggnhebbsdfbdfb']])
+ self.store.append('df_big', df)
tm.assert_frame_equal(self.store.select('df_big'), df)
- check_col('df_big','values_block_1',15)
+ check_col('df_big', 'values_block_1', 15)
# appending smaller string ok
- df2 = DataFrame([[124,'asdqy'], [346,'dggnhefbdfb']])
- self.store.append('df_big',df2)
- expected = concat([ df, df2 ])
+ df2 = DataFrame([[124, 'asdqy'], [346, 'dggnhefbdfb']])
+ self.store.append('df_big', df2)
+ expected = concat([df, df2])
tm.assert_frame_equal(self.store.select('df_big'), expected)
- check_col('df_big','values_block_1',15)
+ check_col('df_big', 'values_block_1', 15)
# avoid truncation on elements
- df = DataFrame([[123,'asdqwerty'], [345,'dggnhebbsdfbdfb']])
- self.store.append('df_big2',df, min_itemsize = { 'values' : 50 })
+ df = DataFrame([[123, 'asdqwerty'], [345, 'dggnhebbsdfbdfb']])
+ self.store.append('df_big2', df, min_itemsize={'values': 50})
tm.assert_frame_equal(self.store.select('df_big2'), df)
- check_col('df_big2','values_block_1',50)
+ check_col('df_big2', 'values_block_1', 50)
# bigger string on next append
- self.store.append('df_new',df)
- df_new = DataFrame([[124,'abcdefqhij'], [346, 'abcdefghijklmnopqrtsuvwxyz']])
- self.assertRaises(Exception, self.store.append, 'df_new',df_new)
+ self.store.append('df_new', df)
+ df_new = DataFrame(
+ [[124, 'abcdefqhij'], [346, 'abcdefghijklmnopqrtsuvwxyz']])
+ self.assertRaises(Exception, self.store.append, 'df_new', df_new)
# with nans
self.store.remove('df')
df = tm.makeTimeDataFrame()
df['string'] = 'foo'
- df.ix[1:4,'string'] = np.nan
+ df.ix[1:4, 'string'] = np.nan
df['string2'] = 'bar'
- df.ix[4:8,'string2'] = np.nan
+ df.ix[4:8, 'string2'] = np.nan
df['string3'] = 'bah'
- df.ix[1:,'string3'] = np.nan
- self.store.append('df',df)
+ df.ix[1:, 'string3'] = np.nan
+ self.store.append('df', df)
result = self.store.select('df')
- tm.assert_frame_equal(result,df)
-
+ tm.assert_frame_equal(result, df)
def test_append_with_data_columns(self):
df = tm.makeTimeDataFrame()
self.store.remove('df')
- self.store.append('df', df[:2], data_columns = ['B'])
+ self.store.append('df', df[:2], data_columns=['B'])
self.store.append('df', df[2:])
tm.assert_frame_equal(self.store['df'], df)
    # check that we have indices created
- assert(self.store.handle.root.df.table.cols.index.is_indexed == True)
- assert(self.store.handle.root.df.table.cols.B.is_indexed == True)
+ assert(self.store.handle.root.df.table.cols.index.is_indexed is True)
+ assert(self.store.handle.root.df.table.cols.B.is_indexed is True)
# data column searching
- result = self.store.select('df', [ Term('B>0') ])
- expected = df[df.B>0]
+ result = self.store.select('df', [Term('B>0')])
+ expected = df[df.B > 0]
tm.assert_frame_equal(result, expected)
# data column searching (with an indexable and a data_columns)
- result = self.store.select('df', [ Term('B>0'), Term('index','>',df.index[3]) ])
+ result = self.store.select(
+ 'df', [Term('B>0'), Term('index', '>', df.index[3])])
df_new = df.reindex(index=df.index[4:])
- expected = df_new[df_new.B>0]
+ expected = df_new[df_new.B > 0]
tm.assert_frame_equal(result, expected)
-
+
# data column selection with a string data_column
df_new = df.copy()
df_new['string'] = 'foo'
df_new['string'][1:4] = np.nan
df_new['string'][5:6] = 'bar'
self.store.remove('df')
- self.store.append('df', df_new, data_columns = ['string'])
- result = self.store.select('df', [ Term('string', '=', 'foo') ])
+ self.store.append('df', df_new, data_columns=['string'])
+ result = self.store.select('df', [Term('string', '=', 'foo')])
expected = df_new[df_new.string == 'foo']
tm.assert_frame_equal(result, expected)
# using min_itemsize and a data column
- def check_col(key,name,size):
- self.assert_(getattr(self.store.get_table(key).table.description,name).itemsize == size)
+ def check_col(key, name, size):
+ self.assert_(getattr(self.store.get_table(
+ key).table.description, name).itemsize == size)
self.store.remove('df')
- self.store.append('df', df_new, data_columns = ['string'], min_itemsize = { 'string' : 30 })
- check_col('df','string',30)
+ self.store.append('df', df_new, data_columns=['string'],
+ min_itemsize={'string': 30})
+ check_col('df', 'string', 30)
self.store.remove('df')
- self.store.append('df', df_new, data_columns = ['string'], min_itemsize = 30)
- check_col('df','string',30)
+ self.store.append(
+ 'df', df_new, data_columns=['string'], min_itemsize=30)
+ check_col('df', 'string', 30)
self.store.remove('df')
- self.store.append('df', df_new, data_columns = ['string'], min_itemsize = { 'values' : 30 })
- check_col('df','string',30)
+ self.store.append('df', df_new, data_columns=['string'],
+ min_itemsize={'values': 30})
+ check_col('df', 'string', 30)
df_new['string2'] = 'foobarbah'
df_new['string_block1'] = 'foobarbah1'
df_new['string_block2'] = 'foobarbah2'
self.store.remove('df')
- self.store.append('df', df_new, data_columns = ['string','string2'], min_itemsize = { 'string' : 30, 'string2' : 40, 'values' : 50 })
- check_col('df','string',30)
- check_col('df','string2',40)
- check_col('df','values_block_1',50)
+ self.store.append('df', df_new, data_columns=['string', 'string2'], min_itemsize={'string': 30, 'string2': 40, 'values': 50})
+ check_col('df', 'string', 30)
+ check_col('df', 'string2', 40)
+ check_col('df', 'values_block_1', 50)
# multiple data columns
df_new = df.copy()
@@ -511,87 +534,96 @@ def check_col(key,name,size):
df_new['string2'][2:5] = np.nan
df_new['string2'][7:8] = 'bar'
self.store.remove('df')
- self.store.append('df', df_new, data_columns = ['A','B','string','string2'])
- result = self.store.select('df', [ Term('string', '=', 'foo'), Term('string2=foo'), Term('A>0'), Term('B<0') ])
- expected = df_new[(df_new.string == 'foo') & (df_new.string2 == 'foo') & (df_new.A > 0) & (df_new.B < 0)]
+ self.store.append(
+ 'df', df_new, data_columns=['A', 'B', 'string', 'string2'])
+ result = self.store.select('df', [Term('string', '=', 'foo'), Term(
+ 'string2=foo'), Term('A>0'), Term('B<0')])
+ expected = df_new[(df_new.string == 'foo') & (
+ df_new.string2 == 'foo') & (df_new.A > 0) & (df_new.B < 0)]
tm.assert_frame_equal(result, expected)
# yield an empty frame
- result = self.store.select('df', [ Term('string', '=', 'foo'), Term('string2=bar'), Term('A>0'), Term('B<0') ])
- expected = df_new[(df_new.string == 'foo') & (df_new.string2 == 'bar') & (df_new.A > 0) & (df_new.B < 0)]
+ result = self.store.select('df', [Term('string', '=', 'foo'), Term(
+ 'string2=bar'), Term('A>0'), Term('B<0')])
+ expected = df_new[(df_new.string == 'foo') & (
+ df_new.string2 == 'bar') & (df_new.A > 0) & (df_new.B < 0)]
tm.assert_frame_equal(result, expected)
# doc example
df_dc = df.copy()
df_dc['string'] = 'foo'
- df_dc.ix[4:6,'string'] = np.nan
- df_dc.ix[7:9,'string'] = 'bar'
+ df_dc.ix[4:6, 'string'] = np.nan
+ df_dc.ix[7:9, 'string'] = 'bar'
df_dc['string2'] = 'cool'
df_dc['datetime'] = Timestamp('20010102')
df_dc = df_dc.convert_objects()
- df_dc.ix[3:5,['A','B','datetime']] = np.nan
+ df_dc.ix[3:5, ['A', 'B', 'datetime']] = np.nan
self.store.remove('df_dc')
- self.store.append('df_dc', df_dc, data_columns = ['B','C','string','string2','datetime'])
- result = self.store.select('df_dc',[ Term('B>0') ])
+ self.store.append('df_dc', df_dc, data_columns=['B', 'C',
+ 'string', 'string2', 'datetime'])
+ result = self.store.select('df_dc', [Term('B>0')])
expected = df_dc[df_dc.B > 0]
tm.assert_frame_equal(result, expected)
- result = self.store.select('df_dc',[ 'B > 0', 'C > 0', 'string == foo' ])
- expected = df_dc[(df_dc.B > 0) & (df_dc.C > 0) & (df_dc.string == 'foo')]
+ result = self.store.select(
+ 'df_dc', ['B > 0', 'C > 0', 'string == foo'])
+ expected = df_dc[(df_dc.B > 0) & (df_dc.C > 0) & (
+ df_dc.string == 'foo')]
tm.assert_frame_equal(result, expected)
def test_create_table_index(self):
- def col(t,column):
- return getattr(self.store.get_table(t).table.cols,column)
+ def col(t, column):
+ return getattr(self.store.get_table(t).table.cols, column)
# index=False
wp = tm.makePanel()
self.store.append('p5', wp, index=False)
- self.store.create_table_index('p5', columns = ['major_axis'])
- assert(col('p5','major_axis').is_indexed == True)
- assert(col('p5','minor_axis').is_indexed == False)
+ self.store.create_table_index('p5', columns=['major_axis'])
+ assert(col('p5', 'major_axis').is_indexed is True)
+ assert(col('p5', 'minor_axis').is_indexed is False)
# index=True
self.store.append('p5i', wp, index=True)
- assert(col('p5i','major_axis').is_indexed == True)
- assert(col('p5i','minor_axis').is_indexed == True)
+ assert(col('p5i', 'major_axis').is_indexed is True)
+ assert(col('p5i', 'minor_axis').is_indexed is True)
# default optlevels
self.store.get_table('p5').create_index()
- assert(col('p5','major_axis').index.optlevel == 6)
- assert(col('p5','minor_axis').index.kind == 'medium')
+ assert(col('p5', 'major_axis').index.optlevel == 6)
+ assert(col('p5', 'minor_axis').index.kind == 'medium')
# let's change the indexing scheme
self.store.create_table_index('p5')
- assert(col('p5','major_axis').index.optlevel == 6)
- assert(col('p5','minor_axis').index.kind == 'medium')
+ assert(col('p5', 'major_axis').index.optlevel == 6)
+ assert(col('p5', 'minor_axis').index.kind == 'medium')
self.store.create_table_index('p5', optlevel=9)
- assert(col('p5','major_axis').index.optlevel == 9)
- assert(col('p5','minor_axis').index.kind == 'medium')
+ assert(col('p5', 'major_axis').index.optlevel == 9)
+ assert(col('p5', 'minor_axis').index.kind == 'medium')
self.store.create_table_index('p5', kind='full')
- assert(col('p5','major_axis').index.optlevel == 9)
- assert(col('p5','minor_axis').index.kind == 'full')
+ assert(col('p5', 'major_axis').index.optlevel == 9)
+ assert(col('p5', 'minor_axis').index.kind == 'full')
self.store.create_table_index('p5', optlevel=1, kind='light')
- assert(col('p5','major_axis').index.optlevel == 1)
- assert(col('p5','minor_axis').index.kind == 'light')
-
+ assert(col('p5', 'major_axis').index.optlevel == 1)
+ assert(col('p5', 'minor_axis').index.kind == 'light')
+
# data columns
df = tm.makeTimeDataFrame()
df['string'] = 'foo'
df['string2'] = 'bar'
- self.store.append('f', df, data_columns=['string','string2'])
- assert(col('f','index').is_indexed == True)
- assert(col('f','string').is_indexed == True)
- assert(col('f','string2').is_indexed == True)
+ self.store.append('f', df, data_columns=['string', 'string2'])
+ assert(col('f', 'index').is_indexed is True)
+ assert(col('f', 'string').is_indexed is True)
+ assert(col('f', 'string2').is_indexed is True)
# specify index=columns
- self.store.append('f2', df, index=['string'], data_columns=['string','string2'])
- assert(col('f2','index').is_indexed == False)
- assert(col('f2','string').is_indexed == True)
- assert(col('f2','string2').is_indexed == False)
+ self.store.append(
+ 'f2', df, index=['string'], data_columns=['string', 'string2'])
+ assert(col('f2', 'index').is_indexed is False)
+ assert(col('f2', 'string').is_indexed is True)
+ assert(col('f2', 'string2').is_indexed is False)
# try to index a non-table
self.store.put('f2', df)
@@ -605,13 +637,13 @@ def col(t,column):
# test out some versions
original = tables.__version__
- for v in ['2.2','2.2b']:
+ for v in ['2.2', '2.2b']:
pytables._table_mod = None
pytables._table_supports_index = False
tables.__version__ = v
self.assertRaises(Exception, self.store.create_table_index, 'f')
- for v in ['2.3.1','2.3.1b','2.4dev','2.4',original]:
+ for v in ['2.3.1', '2.3.1b', '2.4dev', '2.4', original]:
pytables._table_mod = None
pytables._table_supports_index = False
tables.__version__ = v
@@ -619,13 +651,13 @@ def col(t,column):
pytables._table_mod = None
pytables._table_supports_index = False
tables.__version__ = original
-
def test_big_table_frame(self):
raise nose.SkipTest('no big table frame')
# create and write a big table
- df = DataFrame(np.random.randn(2000*100, 100), index = range(2000*100), columns = [ 'E%03d' % i for i in xrange(100) ])
+ df = DataFrame(np.random.randn(2000 * 100, 100), index=range(
+ 2000 * 100), columns=['E%03d' % i for i in xrange(100)])
for x in range(20):
df['String%03d' % x] = 'string%03d' % x
@@ -633,45 +665,46 @@ def test_big_table_frame(self):
x = time.time()
try:
store = HDFStore(self.scratchpath)
- store.append('df',df)
+ store.append('df', df)
rows = store.root.df.table.nrows
recons = store.select('df')
finally:
store.close()
os.remove(self.scratchpath)
- print "\nbig_table frame [%s] -> %5.2f" % (rows,time.time()-x)
-
+ print "\nbig_table frame [%s] -> %5.2f" % (rows, time.time() - x)
def test_big_table2_frame(self):
- # this is a really big table: 2.5m rows x 300 float columns, 20 string columns
+ # this is a really big table: 2.5m rows x 300 float columns, 20 string
+ # columns
raise nose.SkipTest('no big table2 frame')
# create and write a big table
print "\nbig_table2 start"
import time
start_time = time.time()
- df = DataFrame(np.random.randn(2.5*1000*1000, 300), index = range(int(2.5*1000*1000)), columns = [ 'E%03d' % i for i in xrange(300) ])
+ df = DataFrame(np.random.randn(2.5 * 1000 * 1000, 300), index=range(int(
+ 2.5 * 1000 * 1000)), columns=['E%03d' % i for i in xrange(300)])
for x in range(20):
df['String%03d' % x] = 'string%03d' % x
- print "\nbig_table2 frame (creation of df) [rows->%s] -> %5.2f" % (len(df.index),time.time()-start_time)
+ print "\nbig_table2 frame (creation of df) [rows->%s] -> %5.2f" % (len(df.index), time.time() - start_time)
fn = 'big_table2.h5'
try:
-
+
def f(chunksize):
- store = HDFStore(fn,mode = 'w')
- store.append('df',df,chunksize=chunksize)
+ store = HDFStore(fn, mode='w')
+ store.append('df', df, chunksize=chunksize)
r = store.root.df.table.nrows
store.close()
return r
- for c in [ 10000, 50000, 100000, 250000 ]:
+ for c in [10000, 50000, 100000, 250000]:
start_time = time.time()
print "big_table2 frame [chunk->%s]" % c
rows = f(c)
- print "big_table2 frame [rows->%s,chunk->%s] -> %5.2f" % (rows,c,time.time()-start_time)
+ print "big_table2 frame [rows->%s,chunk->%s] -> %5.2f" % (rows, c, time.time() - start_time)
finally:
os.remove(fn)
@@ -680,10 +713,11 @@ def test_big_table_panel(self):
raise nose.SkipTest('no big table panel')
# create and write a big table
- wp = Panel(np.random.randn(20, 1000, 1000), items= [ 'Item%03d' % i for i in xrange(20) ],
- major_axis=date_range('1/1/2000', periods=1000), minor_axis = [ 'E%03d' % i for i in xrange(1000) ])
+ wp = Panel(
+ np.random.randn(20, 1000, 1000), items=['Item%03d' % i for i in xrange(20)],
+ major_axis=date_range('1/1/2000', periods=1000), minor_axis=['E%03d' % i for i in xrange(1000)])
- wp.ix[:,100:200,300:400] = np.nan
+ wp.ix[:, 100:200, 300:400] = np.nan
for x in range(100):
wp['String%03d'] = 'string%03d' % x
@@ -692,14 +726,14 @@ def test_big_table_panel(self):
x = time.time()
try:
store = HDFStore(self.scratchpath)
- store.prof_append('wp',wp)
+ store.prof_append('wp', wp)
rows = store.root.wp.table.nrows
recons = store.select('wp')
finally:
store.close()
os.remove(self.scratchpath)
- print "\nbig_table panel [%s] -> %5.2f" % (rows,time.time()-x)
+ print "\nbig_table panel [%s] -> %5.2f" % (rows, time.time() - x)
def test_append_diff_item_order(self):
raise nose.SkipTest('append diff item order')
@@ -721,18 +755,18 @@ def test_append_hierarchical(self):
df = DataFrame(np.random.randn(10, 3), index=index,
columns=['A', 'B', 'C'])
- self.store.append('mi',df)
+ self.store.append('mi', df)
result = self.store.select('mi')
tm.assert_frame_equal(result, df)
def test_append_misc(self):
df = tm.makeDataFrame()
- self.store.append('df',df,chunksize=1)
+ self.store.append('df', df, chunksize=1)
result = self.store.select('df')
tm.assert_frame_equal(result, df)
- self.store.append('df1',df,expectedrows=10)
+ self.store.append('df1', df, expectedrows=10)
result = self.store.select('df1')
tm.assert_frame_equal(result, df)
@@ -746,11 +780,11 @@ def test_table_index_incompatible_dtypes(self):
table=True, append=True)
def test_table_values_dtypes_roundtrip(self):
- df1 = DataFrame({'a': [1, 2, 3]}, dtype = 'f8')
+ df1 = DataFrame({'a': [1, 2, 3]}, dtype='f8')
self.store.append('df1', df1)
assert df1.dtypes == self.store['df1'].dtypes
- df2 = DataFrame({'a': [1, 2, 3]}, dtype = 'i8')
+ df2 = DataFrame({'a': [1, 2, 3]}, dtype='i8')
self.store.append('df2', df2)
assert df2.dtypes == self.store['df2'].dtypes
@@ -770,9 +804,9 @@ def test_table_mixed_dtypes(self):
df['int2'] = 2
df['timestamp1'] = Timestamp('20010102')
df['timestamp2'] = Timestamp('20010103')
- df['datetime1'] = datetime.datetime(2001,1,2,0,0)
- df['datetime2'] = datetime.datetime(2001,1,3,0,0)
- df.ix[3:6,['obj1']] = np.nan
+ df['datetime1'] = datetime.datetime(2001, 1, 2, 0, 0)
+ df['datetime2'] = datetime.datetime(2001, 1, 3, 0, 0)
+ df.ix[3:6, ['obj1']] = np.nan
df = df.consolidate().convert_objects()
self.store.append('df1_mixed', df)
@@ -800,22 +834,23 @@ def test_table_mixed_dtypes(self):
wp['int1'] = 1
wp['int2'] = 2
wp = wp.consolidate()
-
+
self.store.append('p4d_mixed', wp)
tm.assert_panel4d_equal(self.store.select('p4d_mixed'), wp)
def test_unimplemented_dtypes_table_columns(self):
#### currently not supported dtypes ####
- for n,f in [ ('unicode',u'\u03c3'), ('date',datetime.date(2001,1,2)) ]:
+ for n, f in [('unicode', u'\u03c3'), ('date', datetime.date(2001, 1, 2))]:
df = tm.makeDataFrame()
df[n] = f
- self.assertRaises(NotImplementedError, self.store.append, 'df1_%s' % n, df)
+ self.assertRaises(
+ NotImplementedError, self.store.append, 'df1_%s' % n, df)
# frame
df = tm.makeDataFrame()
df['obj1'] = 'foo'
df['obj2'] = 'bar'
- df['datetime1'] = datetime.date(2001,1,2)
+ df['datetime1'] = datetime.date(2001, 1, 2)
df = df.consolidate().convert_objects()
# this fails because we have a date in the object block......
@@ -855,7 +890,7 @@ def test_remove(self):
def test_remove_where(self):
        # non-existence
- crit1 = Term('index','>','foo')
+ crit1 = Term('index', '>', 'foo')
self.store.remove('a', where=[crit1])
# try to remove non-table (with crit)
@@ -864,8 +899,8 @@ def test_remove_where(self):
self.store.put('wp', wp, table=True)
self.store.remove('wp', [('minor_axis', ['A', 'D'])])
rs = self.store.select('wp')
- expected = wp.reindex(minor_axis = ['B','C'])
- tm.assert_panel_equal(rs,expected)
+ expected = wp.reindex(minor_axis=['B', 'C'])
+ tm.assert_panel_equal(rs, expected)
# empty where
self.store.remove('wp')
@@ -882,30 +917,29 @@ def test_remove_where(self):
'wp', ['foo'])
        # selecting non-table with a where
- #self.store.put('wp2', wp, table=False)
- #self.assertRaises(Exception, self.store.remove,
+ # self.store.put('wp2', wp, table=False)
+ # self.assertRaises(Exception, self.store.remove,
# 'wp2', [('column', ['A', 'D'])])
-
def test_remove_crit(self):
wp = tm.makePanel()
# group row removal
- date4 = wp.major_axis.take([ 0,1,2,4,5,6,8,9,10 ])
- crit4 = Term('major_axis',date4)
+ date4 = wp.major_axis.take([0, 1, 2, 4, 5, 6, 8, 9, 10])
+ crit4 = Term('major_axis', date4)
self.store.put('wp3', wp, table=True)
n = self.store.remove('wp3', where=[crit4])
assert(n == 36)
result = self.store.select('wp3')
- expected = wp.reindex(major_axis = wp.major_axis-date4)
+ expected = wp.reindex(major_axis=wp.major_axis - date4)
tm.assert_panel_equal(result, expected)
# upper half
self.store.put('wp', wp, table=True)
date = wp.major_axis[len(wp.major_axis) // 2]
- crit1 = Term('major_axis','>',date)
- crit2 = Term('minor_axis',['A', 'D'])
+ crit1 = Term('major_axis', '>', date)
+ crit2 = Term('minor_axis', ['A', 'D'])
n = self.store.remove('wp', where=[crit1])
assert(n == 56)
@@ -921,32 +955,35 @@ def test_remove_crit(self):
self.store.put('wp2', wp, table=True)
date1 = wp.major_axis[1:3]
- crit1 = Term('major_axis',date1)
+ crit1 = Term('major_axis', date1)
self.store.remove('wp2', where=[crit1])
result = self.store.select('wp2')
- expected = wp.reindex(major_axis=wp.major_axis-date1)
+ expected = wp.reindex(major_axis=wp.major_axis - date1)
tm.assert_panel_equal(result, expected)
date2 = wp.major_axis[5]
- crit2 = Term('major_axis',date2)
+ crit2 = Term('major_axis', date2)
self.store.remove('wp2', where=[crit2])
result = self.store['wp2']
- expected = wp.reindex(major_axis=wp.major_axis-date1-Index([date2]))
+ expected = wp.reindex(
+ major_axis=wp.major_axis - date1 - Index([date2]))
tm.assert_panel_equal(result, expected)
- date3 = [wp.major_axis[7],wp.major_axis[9]]
- crit3 = Term('major_axis',date3)
+ date3 = [wp.major_axis[7], wp.major_axis[9]]
+ crit3 = Term('major_axis', date3)
self.store.remove('wp2', where=[crit3])
result = self.store['wp2']
- expected = wp.reindex(major_axis=wp.major_axis-date1-Index([date2])-Index(date3))
+ expected = wp.reindex(
+ major_axis=wp.major_axis - date1 - Index([date2]) - Index(date3))
tm.assert_panel_equal(result, expected)
# corners
self.store.put('wp4', wp, table=True)
- n = self.store.remove('wp4', where=[Term('major_axis','>',wp.major_axis[-1])])
+ n = self.store.remove(
+ 'wp4', where=[Term('major_axis', '>', wp.major_axis[-1])])
result = self.store.select('wp4')
tm.assert_panel_equal(result, wp)
-
+
def test_terms(self):
wp = tm.makePanel()
@@ -956,10 +993,10 @@ def test_terms(self):
# some invalid terms
terms = [
- [ 'minor', ['A','B'] ],
- [ 'index', ['20121114'] ],
- [ 'index', ['20121114', '20121114'] ],
- ]
+ ['minor', ['A', 'B']],
+ ['index', ['20121114']],
+ ['index', ['20121114', '20121114']],
+ ]
for t in terms:
self.assertRaises(Exception, self.store.select, 'wp', t)
@@ -970,44 +1007,48 @@ def test_terms(self):
self.assertRaises(Exception, Term.__init__, 'index', '>', 5)
# panel
- result = self.store.select('wp',[ Term('major_axis<20000108'), Term('minor_axis', '=', ['A','B']) ])
+ result = self.store.select('wp', [Term(
+ 'major_axis<20000108'), Term('minor_axis', '=', ['A', 'B'])])
expected = wp.truncate(after='20000108').reindex(minor=['A', 'B'])
tm.assert_panel_equal(result, expected)
# p4d
- result = self.store.select('p4d',[ Term('major_axis<20000108'), Term('minor_axis', '=', ['A','B']), Term('items', '=', ['ItemA','ItemB']) ])
- expected = p4d.truncate(after='20000108').reindex(minor=['A', 'B'],items=['ItemA','ItemB'])
+ result = self.store.select('p4d', [Term('major_axis<20000108'),
+ Term('minor_axis', '=', ['A', 'B']),
+ Term('items', '=', ['ItemA', 'ItemB'])])
+ expected = p4d.truncate(after='20000108').reindex(
+ minor=['A', 'B'], items=['ItemA', 'ItemB'])
tm.assert_panel4d_equal(result, expected)
# valid terms
terms = [
- dict(field = 'major_axis', op = '>', value = '20121114'),
+ dict(field='major_axis', op='>', value='20121114'),
('major_axis', '20121114'),
('major_axis', '>', '20121114'),
- (('major_axis', ['20121114','20121114']),),
- ('major_axis', datetime.datetime(2012,11,14)),
+ (('major_axis', ['20121114', '20121114']),),
+ ('major_axis', datetime.datetime(2012, 11, 14)),
'major_axis> 20121114',
'major_axis >20121114',
'major_axis > 20121114',
- (('minor_axis', ['A','B']),),
- (('minor_axis', ['A','B']),),
- ((('minor_axis', ['A','B']),),),
- (('items', ['ItemA','ItemB']),),
+ (('minor_axis', ['A', 'B']),),
+ (('minor_axis', ['A', 'B']),),
+ ((('minor_axis', ['A', 'B']),),),
+ (('items', ['ItemA', 'ItemB']),),
('items=ItemA'),
- ]
+ ]
for t in terms:
- self.store.select('wp', t)
- self.store.select('p4d', t)
+ self.store.select('wp', t)
+ self.store.select('p4d', t)
# valid for p4d only
terms = [
- (('labels', '=', ['l1','l2']),),
- Term('labels', '=', ['l1','l2']),
- ]
+ (('labels', '=', ['l1', 'l2']),),
+ Term('labels', '=', ['l1', 'l2']),
+ ]
for t in terms:
- self.store.select('p4d', t)
+ self.store.select('p4d', t)
def test_series(self):
s = tm.makeStringSeries()
@@ -1079,7 +1120,7 @@ def test_float_index(self):
def test_tuple_index(self):
# GH #492
col = np.arange(10)
- idx = [(0.,1.), (2., 3.), (4., 5.)]
+ idx = [(0., 1.), (2., 3.), (4., 5.)]
data = np.random.randn(30).reshape((3, 10))
DF = DataFrame(data, index=idx, columns=col)
self._check_roundtrip(DF, tm.assert_frame_equal)
@@ -1087,7 +1128,7 @@ def test_tuple_index(self):
def test_index_types(self):
values = np.random.randn(2)
- func = lambda l, r : tm.assert_series_equal(l, r, True, True, True)
+ func = lambda l, r: tm.assert_series_equal(l, r, True, True, True)
ser = Series(values, [0, 'y'])
self._check_roundtrip(ser, func)
@@ -1110,7 +1151,8 @@ def test_index_types(self):
ser = Series(values, [1, 5])
self._check_roundtrip(ser, func)
- ser = Series(values, [datetime.datetime(2012, 1, 1), datetime.datetime(2012, 1, 2)])
+ ser = Series(values, [datetime.datetime(
+ 2012, 1, 1), datetime.datetime(2012, 1, 2)])
self._check_roundtrip(ser, func)
def test_timeseries_preepoch(self):
@@ -1137,7 +1179,7 @@ def test_frame(self):
self._check_roundtrip_table(df, tm.assert_frame_equal,
compression=True)
self._check_roundtrip(df, tm.assert_frame_equal,
- compression=True)
+ compression=True)
tdf = tm.makeTimeDataFrame()
self._check_roundtrip(tdf, tm.assert_frame_equal)
@@ -1337,45 +1379,46 @@ def test_select(self):
self.store.select('wp2')
# selection on the non-indexable with a large number of columns
- wp = Panel(np.random.randn(100, 100, 100), items = [ 'Item%03d' % i for i in xrange(100) ],
- major_axis=date_range('1/1/2000', periods=100), minor_axis = [ 'E%03d' % i for i in xrange(100) ])
+ wp = Panel(
+ np.random.randn(100, 100, 100), items=['Item%03d' % i for i in xrange(100)],
+ major_axis=date_range('1/1/2000', periods=100), minor_axis=['E%03d' % i for i in xrange(100)])
self.store.remove('wp')
self.store.append('wp', wp)
- items = [ 'Item%03d' % i for i in xrange(80) ]
+ items = ['Item%03d' % i for i in xrange(80)]
result = self.store.select('wp', Term('items', items))
- expected = wp.reindex(items = items)
+ expected = wp.reindex(items=items)
tm.assert_panel_equal(expected, result)
        # selecting non-table with a where
- #self.assertRaises(Exception, self.store.select,
+ # self.assertRaises(Exception, self.store.select,
# 'wp2', ('column', ['A', 'D']))
# select with columns=
df = tm.makeTimeDataFrame()
self.store.remove('df')
- self.store.append('df',df)
- result = self.store.select('df', columns = ['A','B'])
- expected = df.reindex(columns = ['A','B'])
+ self.store.append('df', df)
+ result = self.store.select('df', columns=['A', 'B'])
+ expected = df.reindex(columns=['A', 'B'])
tm.assert_frame_equal(expected, result)
        # equivalently
- result = self.store.select('df', [ ('columns', ['A','B']) ])
- expected = df.reindex(columns = ['A','B'])
+ result = self.store.select('df', [('columns', ['A', 'B'])])
+ expected = df.reindex(columns=['A', 'B'])
tm.assert_frame_equal(expected, result)
# with a data column
self.store.remove('df')
- self.store.append('df',df, data_columns = ['A'])
- result = self.store.select('df', [ 'A > 0' ], columns = ['A','B'])
- expected = df[df.A > 0].reindex(columns = ['A','B'])
+ self.store.append('df', df, data_columns=['A'])
+ result = self.store.select('df', ['A > 0'], columns=['A', 'B'])
+ expected = df[df.A > 0].reindex(columns=['A', 'B'])
tm.assert_frame_equal(expected, result)
# with a data column, but different columns
self.store.remove('df')
- self.store.append('df',df, data_columns = ['A'])
- result = self.store.select('df', [ 'A > 0' ], columns = ['C','D'])
- expected = df[df.A > 0].reindex(columns = ['C','D'])
+ self.store.append('df', df, data_columns=['A'])
+ result = self.store.select('df', ['A > 0'], columns=['C', 'D'])
+ expected = df[df.A > 0].reindex(columns=['C', 'D'])
tm.assert_frame_equal(expected, result)
def test_panel_select(self):
@@ -1383,14 +1426,15 @@ def test_panel_select(self):
self.store.put('wp', wp, table=True)
date = wp.major_axis[len(wp.major_axis) // 2]
- crit1 = ('major_axis','>=',date)
+ crit1 = ('major_axis', '>=', date)
crit2 = ('minor_axis', '=', ['A', 'D'])
result = self.store.select('wp', [crit1, crit2])
expected = wp.truncate(before=date).reindex(minor=['A', 'D'])
tm.assert_panel_equal(result, expected)
- result = self.store.select('wp', [ 'major_axis>=20000124', ('minor_axis', '=', ['A','B']) ])
+ result = self.store.select(
+ 'wp', ['major_axis>=20000124', ('minor_axis', '=', ['A', 'B'])])
expected = wp.truncate(before='20000124').reindex(minor=['A', 'B'])
tm.assert_panel_equal(result, expected)
@@ -1399,9 +1443,9 @@ def test_frame_select(self):
self.store.put('frame', df, table=True)
date = df.index[len(df) // 2]
- crit1 = ('index','>=',date)
- crit2 = ('columns',['A', 'D'])
- crit3 = ('columns','A')
+ crit1 = ('index', '>=', date)
+ crit2 = ('columns', ['A', 'D'])
+ crit3 = ('columns', 'A')
result = self.store.select('frame', [crit1, crit2])
expected = df.ix[date:, ['A', 'D']]
@@ -1414,28 +1458,31 @@ def test_frame_select(self):
# other indicies for a frame
# integer
- df = DataFrame(dict(A = np.random.rand(20), B = np.random.rand(20)))
+ df = DataFrame(dict(A=np.random.rand(20), B=np.random.rand(20)))
self.store.append('df_int', df)
- self.store.select('df_int', [ Term("index<10"), Term("columns", "=", ["A"]) ])
+ self.store.select(
+ 'df_int', [Term("index<10"), Term("columns", "=", ["A"])])
- df = DataFrame(dict(A = np.random.rand(20), B = np.random.rand(20), index = np.arange(20,dtype='f8')))
+ df = DataFrame(dict(A=np.random.rand(
+ 20), B=np.random.rand(20), index=np.arange(20, dtype='f8')))
self.store.append('df_float', df)
- self.store.select('df_float', [ Term("index<10.0"), Term("columns", "=", ["A"]) ])
+ self.store.select(
+ 'df_float', [Term("index<10.0"), Term("columns", "=", ["A"])])
# invalid terms
df = tm.makeTimeDataFrame()
self.store.append('df_time', df)
- self.assertRaises(Exception, self.store.select, 'df_time', [ Term("index>0") ])
+ self.assertRaises(
+ Exception, self.store.select, 'df_time', [Term("index>0")])
# can't select if not written as table
- #self.store['frame'] = df
- #self.assertRaises(Exception, self.store.select,
+ # self.store['frame'] = df
+ # self.assertRaises(Exception, self.store.select,
# 'frame', [crit1, crit2])
def test_unique(self):
df = tm.makeTimeDataFrame()
-
def check(x, y):
self.assert_((np.unique(x) == np.unique(y)).all() == True)
@@ -1443,29 +1490,30 @@ def check(x, y):
self.store.append('df', df)
# error
- self.assertRaises(KeyError, self.store.unique, 'df','foo')
+ self.assertRaises(KeyError, self.store.unique, 'df', 'foo')
# valid
- result = self.store.unique('df','index')
- check(result.values,df.index.values)
+ result = self.store.unique('df', 'index')
+ check(result.values, df.index.values)
# not a data indexable column
- self.assertRaises(ValueError, self.store.unique, 'df','values_block_0')
+ self.assertRaises(
+ ValueError, self.store.unique, 'df', 'values_block_0')
# a data column
df2 = df.copy()
df2['string'] = 'foo'
- self.store.append('df2',df2,data_columns = ['string'])
- result = self.store.unique('df2','string')
- check(result.values,df2['string'].unique())
+ self.store.append('df2', df2, data_columns=['string'])
+ result = self.store.unique('df2', 'string')
+ check(result.values, df2['string'].unique())
# a data column with NaNs, result excludes the NaNs
df3 = df.copy()
df3['string'] = 'foo'
- df3.ix[4:6,'string'] = np.nan
- self.store.append('df3',df3,data_columns = ['string'])
- result = self.store.unique('df3','string')
- check(result.values,df3['string'].valid().unique())
+ df3.ix[4:6, 'string'] = np.nan
+ self.store.append('df3', df3, data_columns=['string'])
+ result = self.store.unique('df3', 'string')
+ check(result.values, df3['string'].valid().unique())
def test_coordinates(self):
df = tm.makeTimeDataFrame()
@@ -1480,100 +1528,113 @@ def test_coordinates(self):
# get coordinates back & test vs frame
self.store.remove('df')
- df = DataFrame(dict(A = range(5), B = range(5)))
+ df = DataFrame(dict(A=range(5), B=range(5)))
self.store.append('df', df)
- c = self.store.select_as_coordinates('df',[ 'index<3' ])
+ c = self.store.select_as_coordinates('df', ['index<3'])
assert((c.values == np.arange(3)).all() == True)
- result = self.store.select('df', where = c)
- expected = df.ix[0:2,:]
- tm.assert_frame_equal(result,expected)
+ result = self.store.select('df', where=c)
+ expected = df.ix[0:2, :]
+ tm.assert_frame_equal(result, expected)
- c = self.store.select_as_coordinates('df', [ 'index>=3', 'index<=4' ])
- assert((c.values == np.arange(2)+3).all() == True)
- result = self.store.select('df', where = c)
- expected = df.ix[3:4,:]
- tm.assert_frame_equal(result,expected)
+ c = self.store.select_as_coordinates('df', ['index>=3', 'index<=4'])
+ assert((c.values == np.arange(2) + 3).all() == True)
+ result = self.store.select('df', where=c)
+ expected = df.ix[3:4, :]
+ tm.assert_frame_equal(result, expected)
# multiple tables
self.store.remove('df1')
self.store.remove('df2')
df1 = tm.makeTimeDataFrame()
- df2 = tm.makeTimeDataFrame().rename(columns = lambda x: "%s_2" % x)
- self.store.append('df1',df1, data_columns = ['A','B'])
- self.store.append('df2',df2)
+ df2 = tm.makeTimeDataFrame().rename(columns=lambda x: "%s_2" % x)
+ self.store.append('df1', df1, data_columns=['A', 'B'])
+ self.store.append('df2', df2)
- c = self.store.select_as_coordinates('df1', [ 'A>0','B>0' ])
- df1_result = self.store.select('df1',c)
- df2_result = self.store.select('df2',c)
- result = concat([ df1_result, df2_result ], axis=1)
+ c = self.store.select_as_coordinates('df1', ['A>0', 'B>0'])
+ df1_result = self.store.select('df1', c)
+ df2_result = self.store.select('df2', c)
+ result = concat([df1_result, df2_result], axis=1)
- expected = concat([ df1, df2 ], axis=1)
+ expected = concat([df1, df2], axis=1)
expected = expected[(expected.A > 0) & (expected.B > 0)]
tm.assert_frame_equal(result, expected)
def test_append_to_multiple(self):
df1 = tm.makeTimeDataFrame()
- df2 = tm.makeTimeDataFrame().rename(columns = lambda x: "%s_2" % x)
+ df2 = tm.makeTimeDataFrame().rename(columns=lambda x: "%s_2" % x)
df2['foo'] = 'bar'
- df = concat([ df1, df2 ], axis=1)
+ df = concat([df1, df2], axis=1)
# exceptions
- self.assertRaises(Exception, self.store.append_to_multiple, { 'df1' : ['A','B'], 'df2' : None }, df, selector = 'df3')
- self.assertRaises(Exception, self.store.append_to_multiple, { 'df1' : None, 'df2' : None }, df, selector = 'df3')
- self.assertRaises(Exception, self.store.append_to_multiple, 'df1', df, 'df1')
+ self.assertRaises(Exception, self.store.append_to_multiple, {'df1':
+ ['A', 'B'], 'df2': None}, df, selector='df3')
+ self.assertRaises(Exception, self.store.append_to_multiple,
+ {'df1': None, 'df2': None}, df, selector='df3')
+ self.assertRaises(
+ Exception, self.store.append_to_multiple, 'df1', df, 'df1')
# regular operation
- self.store.append_to_multiple({ 'df1' : ['A','B'], 'df2' : None }, df, selector = 'df1')
- result = self.store.select_as_multiple(['df1','df2'], where = [ 'A>0','B>0' ], selector = 'df1')
+ self.store.append_to_multiple(
+ {'df1': ['A', 'B'], 'df2': None}, df, selector='df1')
+ result = self.store.select_as_multiple(
+ ['df1', 'df2'], where=['A>0', 'B>0'], selector='df1')
expected = df[(df.A > 0) & (df.B > 0)]
tm.assert_frame_equal(result, expected)
-
def test_select_as_multiple(self):
df1 = tm.makeTimeDataFrame()
- df2 = tm.makeTimeDataFrame().rename(columns = lambda x: "%s_2" % x)
+ df2 = tm.makeTimeDataFrame().rename(columns=lambda x: "%s_2" % x)
df2['foo'] = 'bar'
- self.store.append('df1',df1, data_columns = ['A','B'])
- self.store.append('df2',df2)
+ self.store.append('df1', df1, data_columns=['A', 'B'])
+ self.store.append('df2', df2)
# exceptions
- self.assertRaises(Exception, self.store.select_as_multiple, None, where = [ 'A>0','B>0' ], selector = 'df1')
- self.assertRaises(Exception, self.store.select_as_multiple, [ None ], where = [ 'A>0','B>0' ], selector = 'df1')
+ self.assertRaises(Exception, self.store.select_as_multiple,
+ None, where=['A>0', 'B>0'], selector='df1')
+ self.assertRaises(Exception, self.store.select_as_multiple,
+ [None], where=['A>0', 'B>0'], selector='df1')
# default select
- result = self.store.select('df1', ['A>0','B>0'])
- expected = self.store.select_as_multiple([ 'df1' ], where = [ 'A>0','B>0' ], selector = 'df1')
+ result = self.store.select('df1', ['A>0', 'B>0'])
+ expected = self.store.select_as_multiple(
+ ['df1'], where=['A>0', 'B>0'], selector='df1')
tm.assert_frame_equal(result, expected)
- expected = self.store.select_as_multiple( 'df1' , where = [ 'A>0','B>0' ], selector = 'df1')
+ expected = self.store.select_as_multiple(
+ 'df1', where=['A>0', 'B>0'], selector='df1')
tm.assert_frame_equal(result, expected)
# multiple
- result = self.store.select_as_multiple(['df1','df2'], where = [ 'A>0','B>0' ], selector = 'df1')
- expected = concat([ df1, df2 ], axis=1)
+ result = self.store.select_as_multiple(
+ ['df1', 'df2'], where=['A>0', 'B>0'], selector='df1')
+ expected = concat([df1, df2], axis=1)
expected = expected[(expected.A > 0) & (expected.B > 0)]
tm.assert_frame_equal(result, expected)
# multiple (diff selector)
- result = self.store.select_as_multiple(['df1','df2'], where = [ Term('index', '>', df2.index[4]) ], selector = 'df2')
- expected = concat([ df1, df2 ], axis=1)
+ result = self.store.select_as_multiple(['df1', 'df2'], where=[Term(
+ 'index', '>', df2.index[4])], selector='df2')
+ expected = concat([df1, df2], axis=1)
expected = expected[5:]
tm.assert_frame_equal(result, expected)
        # test exception for diff rows
- self.store.append('df3',tm.makeTimeDataFrame(nper=50))
- self.assertRaises(Exception, self.store.select_as_multiple, ['df1','df3'], where = [ 'A>0','B>0' ], selector = 'df1')
+ self.store.append('df3', tm.makeTimeDataFrame(nper=50))
+ self.assertRaises(Exception, self.store.select_as_multiple, ['df1',
+ 'df3'], where=['A>0', 'B>0'], selector='df1')
def test_start_stop(self):
-
- df = DataFrame(dict(A = np.random.rand(20), B = np.random.rand(20)))
+
+ df = DataFrame(dict(A=np.random.rand(20), B=np.random.rand(20)))
self.store.append('df', df)
- result = self.store.select('df', [ Term("columns", "=", ["A"]) ], start=0, stop=5)
- expected = df.ix[0:4,['A']]
+ result = self.store.select(
+ 'df', [Term("columns", "=", ["A"])], start=0, stop=5)
+ expected = df.ix[0:4, ['A']]
tm.assert_frame_equal(result, expected)
# out of range
- result = self.store.select('df', [ Term("columns", "=", ["A"]) ], start=30, stop=40)
+ result = self.store.select(
+ 'df', [Term("columns", "=", ["A"])], start=30, stop=40)
assert(len(result) == 0)
assert(type(result) == DataFrame)
@@ -1652,12 +1713,13 @@ def test_legacy_table_read(self):
store.select('wp1')
# force the frame
- store.select('df2', typ = 'legacy_frame')
+ store.select('df2', typ='legacy_frame')
# old version warning
import warnings
warnings.filterwarnings('ignore', category=IncompatibilityWarning)
- self.assertRaises(Exception, store.select, 'wp1', Term('minor_axis','=','B'))
+ self.assertRaises(
+ Exception, store.select, 'wp1', Term('minor_axis', '=', 'B'))
df2 = store.select('df2')
store.select('df2', Term('index', '>', df2.index[2]))
@@ -1681,10 +1743,10 @@ def test_legacy_table_write(self):
wp = tm.makePanel()
store = HDFStore(os.path.join(pth, 'legacy_table.h5'), 'a')
-
+
self.assertRaises(Exception, store.append, 'df1', df)
self.assertRaises(Exception, store.append, 'wp1', wp)
-
+
store.close()
def test_store_datetime_fractional_secs(self):
@@ -1738,12 +1800,13 @@ def test_unicode_index(self):
self._check_roundtrip(s, tm.assert_series_equal)
def test_store_datetime_mixed(self):
- df = DataFrame({'a': [1,2,3], 'b': [1.,2.,3.], 'c': ['a', 'b', 'c']})
+ df = DataFrame(
+ {'a': [1, 2, 3], 'b': [1., 2., 3.], 'c': ['a', 'b', 'c']})
ts = tm.makeTimeSeries()
df['d'] = ts.index[:3]
self._check_roundtrip(df, tm.assert_frame_equal)
- #def test_cant_write_multiindex_table(self):
+ # def test_cant_write_multiindex_table(self):
# # for now, #1848
# df = DataFrame(np.random.randn(10, 4),
# index=[np.arange(5).repeat(2),
@@ -1751,10 +1814,12 @@ def test_store_datetime_mixed(self):
# self.assertRaises(Exception, self.store.put, 'foo', df, table=True)
+
def curpath():
pth, _ = os.path.split(os.path.abspath(__file__))
return pth
+
def _test_sort(obj):
if isinstance(obj, DataFrame):
return obj.reindex(sorted(obj.index))
@@ -1765,5 +1830,5 @@ def _test_sort(obj):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/io/tests/test_sql.py b/pandas/io/tests/test_sql.py
index a828c946686c5..b2348ec390648 100644
--- a/pandas/io/tests/test_sql.py
+++ b/pandas/io/tests/test_sql.py
@@ -9,8 +9,10 @@
import pandas.util.testing as tm
from pandas import Series, Index, DataFrame
+
class TestSQLite(unittest.TestCase):
_multiprocess_can_split_ = True
+
def setUp(self):
self.db = sqlite3.connect(':memory:')
@@ -133,8 +135,6 @@ def _check_roundtrip(self, frame):
expected.index = Index(range(len(frame2))) + 10
tm.assert_frame_equal(expected, result)
-
-
def test_tquery(self):
frame = tm.makeTimeDataFrame()
sql.write_frame(frame, name='test_table', con=self.db)
@@ -174,14 +174,14 @@ def test_uquery(self):
def test_keyword_as_column_names(self):
'''
'''
- df = DataFrame({'From':np.ones(5)})
- #print sql.get_sqlite_schema(df, 'testkeywords')
- sql.write_frame(df, con = self.db, name = 'testkeywords')
+ df = DataFrame({'From': np.ones(5)})
+ # print sql.get_sqlite_schema(df, 'testkeywords')
+ sql.write_frame(df, con=self.db, name='testkeywords')
if __name__ == '__main__':
# unittest.main()
import nose
# nose.runmodule(argv=[__file__,'-vvs','-x', '--pdb-failure'],
# exit=False)
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/io/tests/test_wb.py b/pandas/io/tests/test_wb.py
index 9d854e63fee0e..46eeabaf1e209 100644
--- a/pandas/io/tests/test_wb.py
+++ b/pandas/io/tests/test_wb.py
@@ -6,35 +6,37 @@
from numpy.testing.decorators import slow
from pandas.io.wb import search, download
+
@slow
@network
def test_wdi_search():
raise nose.SkipTest
expected = {u'id': {2634: u'GDPPCKD',
- 4649: u'NY.GDP.PCAP.KD',
- 4651: u'NY.GDP.PCAP.KN',
- 4653: u'NY.GDP.PCAP.PP.KD'},
- u'name': {2634: u'GDP per Capita, constant US$, millions',
- 4649: u'GDP per capita (constant 2000 US$)',
- 4651: u'GDP per capita (constant LCU)',
- 4653: u'GDP per capita, PPP (constant 2005 international $)'}}
- result = search('gdp.*capita.*constant').ix[:,:2]
+ 4649: u'NY.GDP.PCAP.KD',
+ 4651: u'NY.GDP.PCAP.KN',
+ 4653: u'NY.GDP.PCAP.PP.KD'},
+ u'name': {2634: u'GDP per Capita, constant US$, millions',
+ 4649: u'GDP per capita (constant 2000 US$)',
+ 4651: u'GDP per capita (constant LCU)',
+ 4653: u'GDP per capita, PPP (constant 2005 international $)'}}
+ result = search('gdp.*capita.*constant').ix[:, :2]
expected = pandas.DataFrame(expected)
expected.index = result.index
assert_frame_equal(result, expected)
+
@slow
@network
def test_wdi_download():
raise nose.SkipTest
expected = {'GDPPCKN': {(u'United States', u'2003'): u'40800.0735367688', (u'Canada', u'2004'): u'37857.1261134552', (u'United States', u'2005'): u'42714.8594790102', (u'Canada', u'2003'): u'37081.4575704003', (u'United States', u'2004'): u'41826.1728310667', (u'Mexico', u'2003'): u'72720.0691255285', (u'Mexico', u'2004'): u'74751.6003347038', (u'Mexico', u'2005'): u'76200.2154469437', (u'Canada', u'2005'): u'38617.4563629611'}, 'GDPPCKD': {(u'United States', u'2003'): u'40800.0735367688', (u'Canada', u'2004'): u'34397.055116118', (u'United States', u'2005'): u'42714.8594790102', (u'Canada', u'2003'): u'33692.2812368928', (u'United States', u'2004'): u'41826.1728310667', (u'Mexico', u'2003'): u'7608.43848670658', (u'Mexico', u'2004'): u'7820.99026814334', (u'Mexico', u'2005'): u'7972.55364129367', (u'Canada', u'2005'): u'35087.8925933298'}}
expected = pandas.DataFrame(expected)
- result = download(country=['CA','MX','US', 'junk'], indicator=['GDPPCKD',
- 'GDPPCKN', 'junk'], start=2003, end=2005)
+ result = download(country=['CA', 'MX', 'US', 'junk'], indicator=['GDPPCKD',
+ 'GDPPCKN', 'junk'], start=2003, end=2005)
expected.index = result.index
assert_frame_equal(result, pandas.DataFrame(expected))
if __name__ == '__main__':
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/io/tests/test_yahoo.py b/pandas/io/tests/test_yahoo.py
index 9b15e19dd5410..89c650316468c 100644
--- a/pandas/io/tests/test_yahoo.py
+++ b/pandas/io/tests/test_yahoo.py
@@ -11,6 +11,7 @@
from numpy.testing.decorators import slow
import urllib2
+
class TestYahoo(unittest.TestCase):
@slow
@@ -19,8 +20,8 @@ def test_yahoo(self):
# asserts that yahoo is minimally working and that it throws
         # an exception when DataReader can't get a 200 response from
# yahoo
- start = datetime(2010,1,1)
- end = datetime(2012,1,24)
+ start = datetime(2010, 1, 1)
+ end = datetime(2012, 1, 24)
try:
self.assertEquals(
@@ -41,5 +42,5 @@ def test_yahoo(self):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/io/wb.py b/pandas/io/wb.py
index 1270d9b0dd28f..1a2108d069589 100644
--- a/pandas/io/wb.py
+++ b/pandas/io/wb.py
@@ -4,46 +4,47 @@
import pandas
import numpy as np
-def download(country=['MX','CA','US'], indicator=['GDPPCKD','GDPPCKN'],
- start=2003, end=2005):
+
+def download(country=['MX', 'CA', 'US'], indicator=['GDPPCKD', 'GDPPCKN'],
+ start=2003, end=2005):
"""
Download data series from the World Bank's World Development Indicators
Parameters
----------
-
- indicator: string or list of strings
+
+ indicator: string or list of strings
taken from the ``id`` field in ``WDIsearch()``
- country: string or list of strings.
- ``all`` downloads data for all countries
+ country: string or list of strings.
+ ``all`` downloads data for all countries
ISO-2 character codes select individual countries (e.g.``US``,``CA``)
- start: int
+ start: int
First year of the data series
- end: int
+ end: int
Last year of the data series (inclusive)
-
+
Returns
-------
- ``pandas`` DataFrame with columns: country, iso2c, year, indicator value.
+ ``pandas`` DataFrame with columns: country, iso2c, year, indicator value.
"""
# Are ISO-2 country codes valid?
valid_countries = ["AG", "AL", "AM", "AO", "AR", "AT", "AU", "AZ", "BB",
- "BD", "BE", "BF", "BG", "BH", "BI", "BJ", "BO", "BR", "BS", "BW",
- "BY", "BZ", "CA", "CD", "CF", "CG", "CH", "CI", "CL", "CM", "CN",
- "CO", "CR", "CV", "CY", "CZ", "DE", "DK", "DM", "DO", "DZ", "EC",
- "EE", "EG", "ER", "ES", "ET", "FI", "FJ", "FR", "GA", "GB", "GE",
- "GH", "GM", "GN", "GQ", "GR", "GT", "GW", "GY", "HK", "HN", "HR",
- "HT", "HU", "ID", "IE", "IL", "IN", "IR", "IS", "IT", "JM", "JO",
- "JP", "KE", "KG", "KH", "KM", "KR", "KW", "KZ", "LA", "LB", "LC",
- "LK", "LS", "LT", "LU", "LV", "MA", "MD", "MG", "MK", "ML", "MN",
- "MR", "MU", "MW", "MX", "MY", "MZ", "NA", "NE", "NG", "NI", "NL",
- "NO", "NP", "NZ", "OM", "PA", "PE", "PG", "PH", "PK", "PL", "PT",
- "PY", "RO", "RU", "RW", "SA", "SB", "SC", "SD", "SE", "SG", "SI",
- "SK", "SL", "SN", "SR", "SV", "SY", "SZ", "TD", "TG", "TH", "TN",
- "TR", "TT", "TW", "TZ", "UA", "UG", "US", "UY", "UZ", "VC", "VE",
- "VN", "VU", "YE", "ZA", "ZM", "ZW", "all"]
+ "BD", "BE", "BF", "BG", "BH", "BI", "BJ", "BO", "BR", "BS", "BW",
+ "BY", "BZ", "CA", "CD", "CF", "CG", "CH", "CI", "CL", "CM", "CN",
+ "CO", "CR", "CV", "CY", "CZ", "DE", "DK", "DM", "DO", "DZ", "EC",
+ "EE", "EG", "ER", "ES", "ET", "FI", "FJ", "FR", "GA", "GB", "GE",
+ "GH", "GM", "GN", "GQ", "GR", "GT", "GW", "GY", "HK", "HN", "HR",
+ "HT", "HU", "ID", "IE", "IL", "IN", "IR", "IS", "IT", "JM", "JO",
+ "JP", "KE", "KG", "KH", "KM", "KR", "KW", "KZ", "LA", "LB", "LC",
+ "LK", "LS", "LT", "LU", "LV", "MA", "MD", "MG", "MK", "ML", "MN",
+ "MR", "MU", "MW", "MX", "MY", "MZ", "NA", "NE", "NG", "NI", "NL",
+ "NO", "NP", "NZ", "OM", "PA", "PE", "PG", "PH", "PK", "PL", "PT",
+ "PY", "RO", "RU", "RW", "SA", "SB", "SC", "SD", "SE", "SG", "SI",
+ "SK", "SL", "SN", "SR", "SV", "SY", "SZ", "TD", "TG", "TH", "TN",
+ "TR", "TT", "TW", "TZ", "UA", "UG", "US", "UY", "UZ", "VC", "VE",
+ "VN", "VU", "YE", "ZA", "ZM", "ZW", "all"]
if type(country) == str:
country = [country]
bad_countries = np.setdiff1d(country, valid_countries)
@@ -51,17 +52,17 @@ def download(country=['MX','CA','US'], indicator=['GDPPCKD','GDPPCKN'],
country = ';'.join(country)
# Work with a list of indicators
if type(indicator) == str:
- indicator = [indicator]
+ indicator = [indicator]
# Download
- data = []
+ data = []
bad_indicators = []
for ind in indicator:
- try:
+ try:
tmp = _get_data(ind, country, start, end)
tmp.columns = ['country', 'iso2c', 'year', ind]
data.append(tmp)
except:
- bad_indicators.append(ind)
+ bad_indicators.append(ind)
# Warn
if len(bad_indicators) > 0:
print 'Failed to obtain indicator(s): ' + '; '.join(bad_indicators)
@@ -70,15 +71,15 @@ def download(country=['MX','CA','US'], indicator=['GDPPCKD','GDPPCKN'],
print 'Invalid ISO-2 codes: ' + ' '.join(bad_countries)
# Merge WDI series
if len(data) > 0:
- out = reduce(lambda x,y: x.merge(y, how='outer'), data)
+ out = reduce(lambda x, y: x.merge(y, how='outer'), data)
# Clean
out = out.drop('iso2c', axis=1)
out = out.set_index(['country', 'year'])
return out
-def _get_data(indicator = "NY.GNS.ICTR.GN.ZS", country = 'US',
- start = 2002, end = 2005):
+def _get_data(indicator="NY.GNS.ICTR.GN.ZS", country='US',
+ start=2002, end=2005):
# Build URL for api call
url = "http://api.worldbank.org/countries/" + country + "/indicators/" + \
indicator + "?date=" + str(start) + ":" + str(end) + "&per_page=25000" + \
@@ -95,7 +96,7 @@ def _get_data(indicator = "NY.GNS.ICTR.GN.ZS", country = 'US',
# Prepare output
out = pandas.DataFrame([country, iso2c, year, value]).T
return out
-
+
def get_countries():
'''Query information about countries
@@ -109,7 +110,7 @@ def get_countries():
data.incomeLevel = map(lambda x: x['value'], data.incomeLevel)
data.lendingType = map(lambda x: x['value'], data.lendingType)
data.region = map(lambda x: x['value'], data.region)
- data = data.rename(columns={'id':'iso3c', 'iso2Code':'iso2c'})
+ data = data.rename(columns={'id': 'iso3c', 'iso2Code': 'iso2c'})
return data
@@ -126,8 +127,9 @@ def get_indicators():
fun = lambda x: x.encode('ascii', 'ignore')
data.sourceOrganization = data.sourceOrganization.apply(fun)
# Clean topic field
+
def get_value(x):
- try:
+ try:
return x['value']
except:
return ''
@@ -141,6 +143,8 @@ def get_value(x):
_cached_series = None
+
+
def search(string='gdp.*capi', field='name', case=False):
"""
Search available data series from the world bank
@@ -150,7 +154,7 @@ def search(string='gdp.*capi', field='name', case=False):
string: string
regular expression
- field: string
+ field: string
id, name, source, sourceNote, sourceOrganization, topics
See notes below
case: bool
@@ -158,20 +162,20 @@ def search(string='gdp.*capi', field='name', case=False):
Notes
-----
-
+
The first time this function is run it will download and cache the full
list of available series. Depending on the speed of your network
connection, this can take time. Subsequent searches will use the cached
- copy, so they should be much faster.
+ copy, so they should be much faster.
id : Data series indicator (for use with the ``indicator`` argument of
``WDI()``) e.g. NY.GNS.ICTR.GN.ZS"
- name: Short description of the data series
- source: Data collection project
- sourceOrganization: Data collection organization
- note:
- sourceNote:
- topics:
+ name: Short description of the data series
+ source: Data collection project
+ sourceOrganization: Data collection organization
+ note:
+ sourceNote:
+ topics:
"""
# Create cached list of series if it does not exist
global _cached_series
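The outer-merge fold in `download` above (`reduce(lambda x, y: x.merge(y, how='outer'), data)`) combines one frame per indicator into a single table keyed by country and year. A minimal stdlib sketch of that same fold, using plain dicts keyed by `(country, year)` in place of DataFrames (all names and values here are illustrative, not pandas internals):

```python
from functools import reduce

def outer_merge(left, right):
    """Outer-join two {key: {column: value}} tables: keep every key
    from either side and combine their column dicts."""
    merged = {}
    for key in set(left) | set(right):
        row = {}
        row.update(left.get(key, {}))
        row.update(right.get(key, {}))
        merged[key] = row
    return merged

# One table per indicator, keyed by (country, year) -- mirrors the
# per-indicator frames that download() accumulates in `data`.
gdppckd = {("US", 2003): {"GDPPCKD": 40800.07},
           ("CA", 2003): {"GDPPCKD": 33692.28}}
gdppckn = {("US", 2003): {"GDPPCKN": 40800.07},
           ("MX", 2003): {"GDPPCKN": 72720.07}}

out = reduce(outer_merge, [gdppckd, gdppckn])
```

As in the real `how='outer'` merge, keys present in only one indicator table survive with that indicator's columns alone.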
diff --git a/pandas/rpy/base.py b/pandas/rpy/base.py
index 0c80448684697..4cd86d3c3f4e3 100644
--- a/pandas/rpy/base.py
+++ b/pandas/rpy/base.py
@@ -10,4 +10,3 @@ class lm(object):
"""
def __init__(self, formula, data):
pass
-
diff --git a/pandas/rpy/common.py b/pandas/rpy/common.py
index d2a9eaefffd1b..acc562925c925 100644
--- a/pandas/rpy/common.py
+++ b/pandas/rpy/common.py
@@ -219,10 +219,11 @@ def convert_to_r_posixct(obj):
vals = robj.vectors.FloatSexpVector(obj.values.view('i8') / 1E9)
as_posixct = robj.baseenv.get('as.POSIXct')
origin = StrSexpVector([time.strftime("%Y-%m-%d",
- time.gmtime(0)),])
+ time.gmtime(0)), ])
# We will be sending ints as UTC
- tz = obj.tz.zone if hasattr(obj, 'tz') and hasattr(obj.tz, 'zone') else 'UTC'
+ tz = obj.tz.zone if hasattr(
+ obj, 'tz') and hasattr(obj.tz, 'zone') else 'UTC'
tz = StrSexpVector([tz])
utc_tz = StrSexpVector(['UTC'])
@@ -232,14 +233,14 @@ def convert_to_r_posixct(obj):
VECTOR_TYPES = {np.float64: robj.FloatVector,
- np.float32: robj.FloatVector,
- np.float: robj.FloatVector,
- np.int: robj.IntVector,
- np.int32: robj.IntVector,
- np.int64: robj.IntVector,
- np.object_: robj.StrVector,
- np.str: robj.StrVector,
- np.bool: robj.BoolVector}
+ np.float32: robj.FloatVector,
+ np.float: robj.FloatVector,
+ np.int: robj.IntVector,
+ np.int32: robj.IntVector,
+ np.int64: robj.IntVector,
+ np.object_: robj.StrVector,
+ np.str: robj.StrVector,
+ np.bool: robj.BoolVector}
NA_TYPES = {np.float64: robj.NA_Real,
np.float32: robj.NA_Real,
@@ -271,7 +272,7 @@ def convert_to_r_dataframe(df, strings_as_factors=False):
columns = rlc.OrdDict()
- #FIXME: This doesn't handle MultiIndex
+ # FIXME: This doesn't handle MultiIndex
for column in df:
value = df[column]
@@ -379,7 +380,7 @@ def test_convert_r_dataframe():
seriesd = _test.getSeriesData()
frame = pd.DataFrame(seriesd, columns=['D', 'C', 'B', 'A'])
- #Null data
+ # Null data
frame["E"] = [np.nan for item in frame["A"]]
# Some mixed type data
frame["F"] = ["text" if item % 2 == 0 else np.nan for item in range(30)]
@@ -411,7 +412,7 @@ def test_convert_r_matrix():
seriesd = _test.getSeriesData()
frame = pd.DataFrame(seriesd, columns=['D', 'C', 'B', 'A'])
- #Null data
+ # Null data
frame["E"] = [np.nan for item in frame["A"]]
r_dataframe = convert_to_r_matrix(frame)
@@ -429,7 +430,7 @@ def test_convert_r_matrix():
# Pandas bug 1282
frame["F"] = ["text" if item % 2 == 0 else np.nan for item in range(30)]
- #FIXME: Ugly, this whole module needs to be ported to nose/unittest
+ # FIXME: Ugly, this whole module needs to be ported to nose/unittest
try:
wrong_matrix = convert_to_r_matrix(frame)
except TypeError:
diff --git a/pandas/sandbox/qtpandas.py b/pandas/sandbox/qtpandas.py
index 50d183dbed5fd..35aa28fea1678 100644
--- a/pandas/sandbox/qtpandas.py
+++ b/pandas/sandbox/qtpandas.py
@@ -3,20 +3,21 @@
@author: Jev Kuznetsov
'''
-from PyQt4.QtCore import (QAbstractTableModel,Qt,QVariant,QModelIndex, SIGNAL)
-from PyQt4.QtGui import (QApplication,QDialog,QVBoxLayout, QTableView, QWidget)
+from PyQt4.QtCore import (
+ QAbstractTableModel, Qt, QVariant, QModelIndex, SIGNAL)
+from PyQt4.QtGui import (
+ QApplication, QDialog, QVBoxLayout, QTableView, QWidget)
from pandas import DataFrame, Index
-
class DataFrameModel(QAbstractTableModel):
''' data model for a DataFrame class '''
def __init__(self):
- super(DataFrameModel,self).__init__()
+ super(DataFrameModel, self).__init__()
self.df = DataFrame()
- def setDataFrame(self,dataFrame):
+ def setDataFrame(self, dataFrame):
self.df = dataFrame
def signalUpdate(self):
@@ -25,7 +26,7 @@ def signalUpdate(self):
self.layoutChanged.emit()
#------------- table display functions -----------------
- def headerData(self,section,orientation,role=Qt.DisplayRole):
+ def headerData(self, section, orientation, role=Qt.DisplayRole):
if role != Qt.DisplayRole:
return QVariant()
@@ -36,7 +37,7 @@ def headerData(self,section,orientation,role=Qt.DisplayRole):
return QVariant()
elif orientation == Qt.Vertical:
try:
- #return self.df.index.tolist()
+ # return self.df.index.tolist()
return self.df.index.tolist()[section]
except (IndexError, ):
return QVariant()
@@ -48,7 +49,7 @@ def data(self, index, role=Qt.DisplayRole):
if not index.isValid():
return QVariant()
- return QVariant(str(self.df.ix[index.row(),index.column()]))
+ return QVariant(str(self.df.ix[index.row(), index.column()]))
def flags(self, index):
flags = super(DataFrameModel, self).flags(index)
@@ -59,7 +60,7 @@ def setData(self, index, value, role):
self.df.set_value(self.df.index[index.row()],
self.df.columns[index.column()],
value.toPyObject())
- return True
+ return True
def rowCount(self, index=QModelIndex()):
return self.df.shape[0]
@@ -70,8 +71,8 @@ def columnCount(self, index=QModelIndex()):
class DataFrameWidget(QWidget):
''' a simple widget for using DataFrames in a gui '''
- def __init__(self,dataFrame, parent=None):
- super(DataFrameWidget,self).__init__(parent)
+ def __init__(self, dataFrame, parent=None):
+ super(DataFrameWidget, self).__init__(parent)
self.dataModel = DataFrameModel()
self.dataModel.setDataFrame(dataFrame)
@@ -84,26 +85,25 @@ def __init__(self,dataFrame, parent=None):
layout.addWidget(self.dataTable)
self.setLayout(layout)
-
-
def resizeColumnsToContents(self):
self.dataTable.resizeColumnsToContents()
#-----------------stand alone test code
+
def testDf():
''' creates test dataframe '''
- data = {'int':[1,2,3], 'float':[1.5,2.5,3.5],
- 'string':['a','b','c'], 'nan':[np.nan,np.nan,np.nan]}
- return DataFrame(data, index=Index(['AAA','BBB','CCC']),
- columns=['int','float','string','nan'])
+ data = {'int': [1, 2, 3], 'float': [1.5, 2.5, 3.5],
+ 'string': ['a', 'b', 'c'], 'nan': [np.nan, np.nan, np.nan]}
+ return DataFrame(data, index=Index(['AAA', 'BBB', 'CCC']),
+ columns=['int', 'float', 'string', 'nan'])
class Form(QDialog):
- def __init__(self,parent=None):
- super(Form,self).__init__(parent)
+ def __init__(self, parent=None):
+ super(Form, self).__init__(parent)
- df = testDf() # make up some data
+ df = testDf() # make up some data
widget = DataFrameWidget(df)
widget.resizeColumnsToContents()
@@ -111,7 +111,7 @@ def __init__(self,parent=None):
layout.addWidget(widget)
self.setLayout(layout)
-if __name__=='__main__':
+if __name__ == '__main__':
import sys
import numpy as np
@@ -119,9 +119,3 @@ def __init__(self,parent=None):
form = Form()
form.show()
app.exec_()
-
-
-
-
-
-
diff --git a/pandas/sandbox/stats/rls.py b/pandas/sandbox/stats/rls.py
index b873225ccc715..51166500c484f 100644
--- a/pandas/sandbox/stats/rls.py
+++ b/pandas/sandbox/stats/rls.py
@@ -3,6 +3,7 @@
import numpy as np
from scikits.statsmodels.regression import WLS, GLS, RegressionResults
+
class RLS(GLS):
"""
Restricted general least squares model that handles linear constraints
@@ -57,10 +58,12 @@ def __init__(self, endog, exog, constr, param=0., sigma=None):
self.cholsigmainv = np.diag(np.sqrt(sigma))
else:
self.sigma = sigma
- self.cholsigmainv = np.linalg.cholesky(np.linalg.pinv(self.sigma)).T
+ self.cholsigmainv = np.linalg.cholesky(
+ np.linalg.pinv(self.sigma)).T
super(GLS, self).__init__(endog, exog)
_rwexog = None
+
@property
def rwexog(self):
"""Whitened exogenous variables augmented with restrictions"""
@@ -68,15 +71,16 @@ def rwexog(self):
P = self.ncoeffs
K = self.nconstraint
design = np.zeros((P + K, P + K))
- design[:P, :P] = np.dot(self.wexog.T, self.wexog) #top left
+ design[:P, :P] = np.dot(self.wexog.T, self.wexog) # top left
constr = np.reshape(self.constraint, (K, P))
- design[:P, P:] = constr.T #top right partition
- design[P:, :P] = constr #bottom left partition
- design[P:, P:] = np.zeros((K, K)) #bottom right partition
+ design[:P, P:] = constr.T # top right partition
+ design[P:, :P] = constr # bottom left partition
+ design[P:, P:] = np.zeros((K, K)) # bottom right partition
self._rwexog = design
return self._rwexog
_inv_rwexog = None
+
@property
def inv_rwexog(self):
"""Inverse of self.rwexog"""
@@ -85,6 +89,7 @@ def inv_rwexog(self):
return self._inv_rwexog
_rwendog = None
+
@property
def rwendog(self):
"""Whitened endogenous variable augmented with restriction parameters"""
@@ -98,6 +103,7 @@ def rwendog(self):
return self._rwendog
_ncp = None
+
@property
def rnorm_cov_params(self):
"""Parameter covariance under restrictions"""
@@ -107,6 +113,7 @@ def rnorm_cov_params(self):
return self._ncp
_wncp = None
+
@property
def wrnorm_cov_params(self):
"""
@@ -123,6 +130,7 @@ def wrnorm_cov_params(self):
return self._wncp
_coeffs = None
+
@property
def coeffs(self):
"""Estimated parameters"""
diff --git a/pandas/sparse/series.py b/pandas/sparse/series.py
index 5a612498967e4..7d1327955caa4 100644
--- a/pandas/sparse/series.py
+++ b/pandas/sparse/series.py
@@ -119,7 +119,7 @@ def __new__(cls, data, index=None, sparse_index=None, kind='block',
length = len(index)
if data == fill_value or (isnull(data)
- and isnull(fill_value)):
+ and isnull(fill_value)):
if kind == 'block':
sparse_index = BlockIndex(length, [], [])
else:
@@ -160,11 +160,11 @@ def _make_time_series(self):
self.__class__ = SparseTimeSeries
@classmethod
- def from_array(cls, arr, index=None, name=None, copy=False,fill_value=None):
+ def from_array(cls, arr, index=None, name=None, copy=False, fill_value=None):
"""
Simplified alternate constructor
"""
- return SparseSeries(arr, index=index, name=name, copy=copy,fill_value=fill_value)
+ return SparseSeries(arr, index=index, name=name, copy=copy, fill_value=fill_value)
def __init__(self, data, index=None, sparse_index=None, kind='block',
fill_value=None, name=None, copy=False):
diff --git a/pandas/sparse/tests/test_array.py b/pandas/sparse/tests/test_array.py
index 48bd492574f75..cf2cd2f687e8d 100644
--- a/pandas/sparse/tests/test_array.py
+++ b/pandas/sparse/tests/test_array.py
@@ -10,6 +10,7 @@
from pandas.sparse.api import SparseArray
from pandas.util.testing import assert_almost_equal
+
def assert_sp_array_equal(left, right):
assert_almost_equal(left.sp_values, right.sp_values)
assert(left.sp_index.equals(right.sp_index))
@@ -21,6 +22,7 @@ def assert_sp_array_equal(left, right):
class TestSparseArray(unittest.TestCase):
_multiprocess_can_split_ = True
+
def setUp(self):
self.arr_data = np.array([nan, nan, 1, 2, 3, nan, 4, 5, nan, 6])
self.arr = SparseArray(self.arr_data)
@@ -150,5 +152,5 @@ def _check_roundtrip(obj):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/sparse/tests/test_libsparse.py b/pandas/sparse/tests/test_libsparse.py
index 2ff417537c3fa..d31f919e2e84b 100644
--- a/pandas/sparse/tests/test_libsparse.py
+++ b/pandas/sparse/tests/test_libsparse.py
@@ -16,37 +16,38 @@
TEST_LENGTH = 20
-plain_case = dict(xloc = [0, 7, 15],
- xlen = [3, 5, 5],
- yloc = [2, 9, 14],
- ylen = [2, 3, 5],
- intersect_loc = [2, 9, 15],
- intersect_len = [1, 3, 4])
-delete_blocks = dict(xloc = [0, 5],
- xlen = [4, 4],
- yloc = [1],
- ylen = [4],
- intersect_loc = [1],
- intersect_len = [3])
-split_blocks = dict(xloc = [0],
- xlen = [10],
- yloc = [0, 5],
- ylen = [3, 7],
- intersect_loc = [0, 5],
- intersect_len = [3, 5])
-skip_block = dict(xloc = [10],
- xlen = [5],
- yloc = [0, 12],
- ylen = [5, 3],
- intersect_loc = [12],
- intersect_len = [3])
-
-no_intersect = dict(xloc = [0, 10],
- xlen = [4, 6],
- yloc = [5, 17],
- ylen = [4, 2],
- intersect_loc = [],
- intersect_len = [])
+plain_case = dict(xloc=[0, 7, 15],
+ xlen=[3, 5, 5],
+ yloc=[2, 9, 14],
+ ylen=[2, 3, 5],
+ intersect_loc=[2, 9, 15],
+ intersect_len=[1, 3, 4])
+delete_blocks = dict(xloc=[0, 5],
+ xlen=[4, 4],
+ yloc=[1],
+ ylen=[4],
+ intersect_loc=[1],
+ intersect_len=[3])
+split_blocks = dict(xloc=[0],
+ xlen=[10],
+ yloc=[0, 5],
+ ylen=[3, 7],
+ intersect_loc=[0, 5],
+ intersect_len=[3, 5])
+skip_block = dict(xloc=[10],
+ xlen=[5],
+ yloc=[0, 12],
+ ylen=[5, 3],
+ intersect_loc=[12],
+ intersect_len=[3])
+
+no_intersect = dict(xloc=[0, 10],
+ xlen=[4, 6],
+ yloc=[5, 17],
+ ylen=[4, 2],
+ intersect_loc=[],
+ intersect_len=[])
+
def check_cases(_check_case):
def _check_case_dict(case):
@@ -63,6 +64,7 @@ def _check_case_dict(case):
_check_case([0], [5], [], [], [], [])
_check_case([], [], [], [], [], [])
+
def test_index_make_union():
def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
xindex = BlockIndex(TEST_LENGTH, xloc, xlen)
@@ -83,18 +85,24 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
y: ----
r: --------
"""
- xloc = [0]; xlen = [5]
- yloc = [5]; ylen = [4]
- eloc = [0]; elen = [9]
+ xloc = [0]
+ xlen = [5]
+ yloc = [5]
+ ylen = [4]
+ eloc = [0]
+ elen = [9]
_check_case(xloc, xlen, yloc, ylen, eloc, elen)
"""
x: ----- -----
y: ----- --
"""
- xloc = [0, 10]; xlen = [5, 5]
- yloc = [2, 17]; ylen = [5, 2]
- eloc = [0, 10, 17]; elen = [7, 5, 2]
+ xloc = [0, 10]
+ xlen = [5, 5]
+ yloc = [2, 17]
+ ylen = [5, 2]
+ eloc = [0, 10, 17]
+ elen = [7, 5, 2]
_check_case(xloc, xlen, yloc, ylen, eloc, elen)
"""
@@ -102,9 +110,12 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
y: -------
r: ----------
"""
- xloc = [1]; xlen = [5]
- yloc = [3]; ylen = [5]
- eloc = [1]; elen = [7]
+ xloc = [1]
+ xlen = [5]
+ yloc = [3]
+ ylen = [5]
+ eloc = [1]
+ elen = [7]
_check_case(xloc, xlen, yloc, ylen, eloc, elen)
"""
@@ -112,9 +123,12 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
y: -------
r: -------------
"""
- xloc = [2, 10]; xlen = [4, 4]
- yloc = [4]; ylen = [8]
- eloc = [2]; elen = [12]
+ xloc = [2, 10]
+ xlen = [4, 4]
+ yloc = [4]
+ ylen = [8]
+ eloc = [2]
+ elen = [12]
_check_case(xloc, xlen, yloc, ylen, eloc, elen)
"""
@@ -122,9 +136,12 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
y: -------
r: -------------
"""
- xloc = [0, 5]; xlen = [3, 5]
- yloc = [0]; ylen = [7]
- eloc = [0]; elen = [10]
+ xloc = [0, 5]
+ xlen = [3, 5]
+ yloc = [0]
+ ylen = [7]
+ eloc = [0]
+ elen = [10]
_check_case(xloc, xlen, yloc, ylen, eloc, elen)
"""
@@ -132,9 +149,12 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
y: ------- ---
r: -------------
"""
- xloc = [2, 10]; xlen = [4, 4]
- yloc = [4, 13]; ylen = [8, 4]
- eloc = [2]; elen = [15]
+ xloc = [2, 10]
+ xlen = [4, 4]
+ yloc = [4, 13]
+ ylen = [8, 4]
+ eloc = [2]
+ elen = [15]
_check_case(xloc, xlen, yloc, ylen, eloc, elen)
"""
@@ -142,22 +162,29 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
y: ---- ---- ---
r: ----------------------
"""
- xloc = [2]; xlen = [15]
- yloc = [4, 9, 14]; ylen = [3, 2, 2]
- eloc = [2]; elen = [15]
+ xloc = [2]
+ xlen = [15]
+ yloc = [4, 9, 14]
+ ylen = [3, 2, 2]
+ eloc = [2]
+ elen = [15]
_check_case(xloc, xlen, yloc, ylen, eloc, elen)
"""
x: ---- ---
y: --- ---
"""
- xloc = [0, 10]; xlen = [3, 3]
- yloc = [5, 15]; ylen = [2, 2]
- eloc = [0, 5, 10, 15]; elen = [3, 2, 3, 2]
+ xloc = [0, 10]
+ xlen = [3, 3]
+ yloc = [5, 15]
+ ylen = [2, 2]
+ eloc = [0, 5, 10, 15]
+ elen = [3, 2, 3, 2]
_check_case(xloc, xlen, yloc, ylen, eloc, elen)
# TODO: different-length index objects
+
def test_lookup():
def _check(index):
@@ -180,6 +207,7 @@ def _check(index):
# corner cases
+
def test_intersect():
def _check_correct(a, b, expected):
result = a.intersect(b)
@@ -205,6 +233,7 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
check_cases(_check_case)
+
class TestBlockIndex(TestCase):
def test_equals(self):
@@ -243,6 +272,7 @@ def test_to_block_index(self):
index = BlockIndex(10, [0, 5], [4, 5])
self.assert_(index.to_block_index() is index)
+
class TestIntIndex(TestCase):
def test_equals(self):
@@ -267,6 +297,7 @@ def test_to_int_index(self):
index = IntIndex(10, [2, 3, 4, 5, 6])
self.assert_(index.to_int_index() is index)
+
class TestSparseOperators(TestCase):
def _nan_op_tests(self, sparse_op, python_op):
@@ -309,7 +340,8 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
xfill = 0
yfill = 2
- result_block_vals, rb_index = sparse_op(x, xindex, xfill, y, yindex, yfill)
+ result_block_vals, rb_index = sparse_op(
+ x, xindex, xfill, y, yindex, yfill)
result_int_vals, ri_index = sparse_op(x, xdindex, xfill,
y, ydindex, yfill)
@@ -334,6 +366,8 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
# too cute? oh but how I abhor code duplication
check_ops = ['add', 'sub', 'mul', 'truediv', 'floordiv']
+
+
def make_nanoptestf(op):
def f(self):
sparse_op = getattr(splib, 'sparse_nan%s' % op)
@@ -342,6 +376,7 @@ def f(self):
f.__name__ = 'test_nan%s' % op
return f
+
def make_optestf(op):
def f(self):
sparse_op = getattr(splib, 'sparse_%s' % op)
@@ -360,6 +395,5 @@ def f(self):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
-
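The `make_union` cases above describe sparse blocks as parallel `loc`/`len` arrays (run starts and run lengths). A stdlib sketch of the union operation those ASCII diagrams test — fusing overlapping or touching runs — with function names that are illustrative, not the pandas `BlockIndex` internals:

```python
def union_blocks(xloc, xlen, yloc, ylen):
    """Union of two block sets given as (start, length) runs;
    returns merged (locs, lens) with overlapping/adjacent runs fused."""
    runs = sorted((loc, loc + n) for loc, n in
                  list(zip(xloc, xlen)) + list(zip(yloc, ylen)))
    merged = []
    for start, end in runs:
        if merged and start <= merged[-1][1]:   # overlaps or touches
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [s for s, _ in merged], [e - s for s, e in merged]
```

For instance, the first case in the diff (`x: [0]/[5]`, `y: [5]/[4]`) fuses into a single run of length 9, matching the expected `eloc = [0]`, `elen = [9]`.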
diff --git a/pandas/sparse/tests/test_list.py b/pandas/sparse/tests/test_list.py
index 18a0a55c2db2a..a69385dd9a436 100644
--- a/pandas/sparse/tests/test_list.py
+++ b/pandas/sparse/tests/test_list.py
@@ -12,6 +12,7 @@
def assert_sp_list_equal(left, right):
assert_sp_array_equal(left.to_array(), right.to_array())
+
class TestSparseList(unittest.TestCase):
_multiprocess_can_split_ = True
@@ -101,5 +102,5 @@ def test_getitem(self):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/sparse/tests/test_sparse.py b/pandas/sparse/tests/test_sparse.py
index eeadcbea08466..d8ec567b2bca2 100644
--- a/pandas/sparse/tests/test_sparse.py
+++ b/pandas/sparse/tests/test_sparse.py
@@ -37,6 +37,7 @@
from test_array import assert_sp_array_equal
+
def _test_data1():
# nan-based
arr = np.arange(20, dtype=float)
@@ -47,6 +48,7 @@ def _test_data1():
return arr, index
+
def _test_data2():
# nan-based
arr = np.arange(15, dtype=float)
@@ -55,18 +57,21 @@ def _test_data2():
arr[-1:] = nan
return arr, index
+
def _test_data1_zero():
# zero-based
arr, index = _test_data1()
arr[np.isnan(arr)] = 0
return arr, index
+
def _test_data2_zero():
# zero-based
arr, index = _test_data2()
arr[np.isnan(arr)] = 0
return arr, index
+
def assert_sp_series_equal(a, b):
assert(a.index.equals(b.index))
assert_sp_array_equal(a, b)
@@ -95,6 +100,7 @@ def assert_sp_frame_equal(left, right, exact_indices=True):
for col in right:
assert(col in left)
+
def assert_sp_panel_equal(left, right, exact_indices=True):
for item, frame in left.iterkv():
assert(item in right)
@@ -108,9 +114,11 @@ def assert_sp_panel_equal(left, right, exact_indices=True):
for item in right:
assert(item in left)
+
class TestSparseSeries(TestCase,
test_series.CheckNameIntegration):
_multiprocess_can_split_ = True
+
def setUp(self):
arr, index = _test_data1()
@@ -143,7 +151,7 @@ def setUp(self):
def test_construct_DataFrame_with_sp_series(self):
# it works!
- df = DataFrame({'col' : self.bseries})
+ df = DataFrame({'col': self.bseries})
def test_sparse_to_dense(self):
arr, index = _test_data1()
@@ -558,6 +566,7 @@ def _compare_with_dense(obj, op):
self.assertEquals(sparse_result, dense_result)
to_compare = ['count', 'sum', 'mean', 'std', 'var', 'skew']
+
def _compare_all(obj):
for op in to_compare:
_compare_with_dense(obj, op)
@@ -608,19 +617,19 @@ def _check_matches(indices, expected):
assert(v.sp_index.equals(expected))
indices1 = [BlockIndex(10, [2], [7]),
- BlockIndex(10, [1, 6], [3, 4]),
- BlockIndex(10, [0], [10])]
+ BlockIndex(10, [1, 6], [3, 4]),
+ BlockIndex(10, [0], [10])]
expected1 = BlockIndex(10, [2, 6], [2, 3])
_check_matches(indices1, expected1)
indices2 = [BlockIndex(10, [2], [7]),
- BlockIndex(10, [2], [7])]
+ BlockIndex(10, [2], [7])]
expected2 = indices2[0]
_check_matches(indices2, expected2)
# must have NaN fill value
- data = {'a' : SparseSeries(np.arange(7), sparse_index=expected2,
- fill_value=0)}
+ data = {'a': SparseSeries(np.arange(7), sparse_index=expected2,
+ fill_value=0)}
nose.tools.assert_raises(Exception, spf.homogenize, data)
def test_fill_value_corner(self):
@@ -681,17 +690,20 @@ def test_combine_first(self):
assert_sp_series_equal(result, result2)
assert_sp_series_equal(result, expected)
+
class TestSparseTimeSeries(TestCase):
pass
+
class TestSparseDataFrame(TestCase, test_frame.SafeForSparse):
klass = SparseDataFrame
_multiprocess_can_split_ = True
+
def setUp(self):
- self.data = {'A' : [nan, nan, nan, 0, 1, 2, 3, 4, 5, 6],
- 'B' : [0, 1, 2, nan, nan, nan, 3, 4, 5, 6],
- 'C' : np.arange(10),
- 'D' : [0, 1, 2, 3, 4, 5, nan, nan, nan, nan]}
+ self.data = {'A': [nan, nan, nan, 0, 1, 2, 3, 4, 5, 6],
+ 'B': [0, 1, 2, nan, nan, nan, 3, 4, 5, 6],
+ 'C': np.arange(10),
+ 'D': [0, 1, 2, 3, 4, 5, nan, nan, nan, nan]}
self.dates = bdate_range('1/1/2011', periods=10)
@@ -821,8 +833,8 @@ def _test_roundtrip(frame):
self._check_all(_test_roundtrip)
def test_dense_to_sparse(self):
- df = DataFrame({'A' : [nan, nan, nan, 1, 2],
- 'B' : [1, 2, nan, nan, nan]})
+ df = DataFrame({'A': [nan, nan, nan, 1, 2],
+ 'B': [1, 2, nan, nan, nan]})
sdf = df.to_sparse()
self.assert_(isinstance(sdf, SparseDataFrame))
self.assert_(np.isnan(sdf.default_fill_value))
@@ -832,8 +844,8 @@ def test_dense_to_sparse(self):
sdf = df.to_sparse(kind='integer')
self.assert_(isinstance(sdf['A'].sp_index, IntIndex))
- df = DataFrame({'A' : [0, 0, 0, 1, 2],
- 'B' : [1, 2, 0, 0, 0]}, dtype=float)
+ df = DataFrame({'A': [0, 0, 0, 1, 2],
+ 'B': [1, 2, 0, 0, 0]}, dtype=float)
sdf = df.to_sparse(fill_value=0)
self.assertEquals(sdf.default_fill_value, 0)
tm.assert_frame_equal(sdf.to_dense(), df)
@@ -957,7 +969,7 @@ def test_scalar_ops(self):
def test_getitem(self):
# #1585 select multiple columns
- sdf = SparseDataFrame(index=[0, 1, 2], columns=['a', 'b','c'])
+ sdf = SparseDataFrame(index=[0, 1, 2], columns=['a', 'b', 'c'])
result = sdf[['a', 'b']]
exp = sdf.reindex(columns=['a', 'b'])
@@ -972,7 +984,7 @@ def test_icol(self):
assert_sp_series_equal(result, self.frame['A'])
# preserve sparse index type. #2251
- data = {'A' : [0,1 ]}
+ data = {'A': [0, 1]}
iframe = SparseDataFrame(data, default_kind='integer')
self.assertEquals(type(iframe['A'].sp_index),
type(iframe.icol(0).sp_index))
@@ -1062,7 +1074,6 @@ def _check_frame(frame):
frame['K'] = frame.default_fill_value
self.assertEquals(len(frame['K'].sp_values), 0)
-
self._check_all(_check_frame)
def test_setitem_corner(self):
@@ -1134,17 +1145,18 @@ def test_apply(self):
self.assert_(self.empty.apply(np.sqrt) is self.empty)
def test_apply_nonuq(self):
- df_orig = DataFrame([[1,2,3], [4,5,6], [7,8,9]], index=['a','a','c'])
+ df_orig = DataFrame(
+ [[1, 2, 3], [4, 5, 6], [7, 8, 9]], index=['a', 'a', 'c'])
df = df_orig.to_sparse()
rs = df.apply(lambda s: s[0], axis=1)
xp = Series([1., 4., 7.], ['a', 'a', 'c'])
assert_series_equal(rs, xp)
- #df.T breaks
+ # df.T breaks
df = df_orig.T.to_sparse()
rs = df.apply(lambda s: s[0], axis=0)
- #no non-unique columns supported in sparse yet
- #assert_series_equal(rs, xp)
+ # no non-unique columns supported in sparse yet
+ # assert_series_equal(rs, xp)
def test_applymap(self):
# just test that it works
@@ -1261,10 +1273,10 @@ def test_take(self):
assert_sp_frame_equal(result, expected)
def test_density(self):
- df = SparseDataFrame({'A' : [nan, nan, nan, 0, 1, 2, 3, 4, 5, 6],
- 'B' : [0, 1, 2, nan, nan, nan, 3, 4, 5, 6],
- 'C' : np.arange(10),
- 'D' : [0, 1, 2, 3, 4, 5, nan, nan, nan, nan]})
+ df = SparseDataFrame({'A': [nan, nan, nan, 0, 1, 2, 3, 4, 5, 6],
+ 'B': [0, 1, 2, nan, nan, nan, 3, 4, 5, 6],
+ 'C': np.arange(10),
+ 'D': [0, 1, 2, 3, 4, 5, nan, nan, nan, nan]})
self.assertEquals(df.density, 0.75)
@@ -1279,7 +1291,7 @@ def test_stack_sparse_frame(self):
def _check(frame):
dense_frame = frame.to_dense()
- wp = Panel.from_dict({'foo' : frame})
+ wp = Panel.from_dict({'foo': frame})
from_dense_lp = wp.to_frame()
from_sparse_lp = spf.stack_sparse_frame(frame)
@@ -1287,7 +1299,6 @@ def _check(frame):
self.assert_(np.array_equal(from_dense_lp.values,
from_sparse_lp.values))
-
_check(self.frame)
_check(self.iframe)
@@ -1375,15 +1386,15 @@ def test_isin(self):
def test_sparse_pow_issue(self):
# #2220
- df = SparseDataFrame({'A' : [1.1,3.3],'B' : [2.5,-3.9]})
+ df = SparseDataFrame({'A': [1.1, 3.3], 'B': [2.5, -3.9]})
# note : no error without nan
- df = SparseDataFrame({'A' : [nan, 0, 1] })
+ df = SparseDataFrame({'A': [nan, 0, 1]})
# note that 2 ** df works fine, also df ** 1
result = 1 ** df
- r1 = result.take([0],1)['A']
+ r1 = result.take([0], 1)['A']
r2 = result['A']
self.assertEqual(len(r2.sp_values), len(r1.sp_values))
@@ -1395,58 +1406,62 @@ def _dense_series_compare(s, f):
dense_result = f(s.to_dense())
assert_series_equal(result.to_dense(), dense_result)
+
def _dense_frame_compare(frame, f):
result = f(frame)
assert(isinstance(frame, SparseDataFrame))
dense_result = f(frame.to_dense())
assert_frame_equal(result.to_dense(), dense_result)
+
def panel_data1():
index = bdate_range('1/1/2011', periods=8)
return DataFrame({
- 'A' : [nan, nan, nan, 0, 1, 2, 3, 4],
- 'B' : [0, 1, 2, 3, 4, nan, nan, nan],
- 'C' : [0, 1, 2, nan, nan, nan, 3, 4],
- 'D' : [nan, 0, 1, nan, 2, 3, 4, nan]
- }, index=index)
+ 'A': [nan, nan, nan, 0, 1, 2, 3, 4],
+ 'B': [0, 1, 2, 3, 4, nan, nan, nan],
+ 'C': [0, 1, 2, nan, nan, nan, 3, 4],
+ 'D': [nan, 0, 1, nan, 2, 3, 4, nan]
+ }, index=index)
def panel_data2():
index = bdate_range('1/1/2011', periods=9)
return DataFrame({
- 'A' : [nan, nan, nan, 0, 1, 2, 3, 4, 5],
- 'B' : [0, 1, 2, 3, 4, 5, nan, nan, nan],
- 'C' : [0, 1, 2, nan, nan, nan, 3, 4, 5],
- 'D' : [nan, 0, 1, nan, 2, 3, 4, 5, nan]
- }, index=index)
+ 'A': [nan, nan, nan, 0, 1, 2, 3, 4, 5],
+ 'B': [0, 1, 2, 3, 4, 5, nan, nan, nan],
+ 'C': [0, 1, 2, nan, nan, nan, 3, 4, 5],
+ 'D': [nan, 0, 1, nan, 2, 3, 4, 5, nan]
+ }, index=index)
def panel_data3():
index = bdate_range('1/1/2011', periods=10).shift(-2)
return DataFrame({
- 'A' : [nan, nan, nan, 0, 1, 2, 3, 4, 5, 6],
- 'B' : [0, 1, 2, 3, 4, 5, 6, nan, nan, nan],
- 'C' : [0, 1, 2, nan, nan, nan, 3, 4, 5, 6],
- 'D' : [nan, 0, 1, nan, 2, 3, 4, 5, 6, nan]
- }, index=index)
+ 'A': [nan, nan, nan, 0, 1, 2, 3, 4, 5, 6],
+ 'B': [0, 1, 2, 3, 4, 5, 6, nan, nan, nan],
+ 'C': [0, 1, 2, nan, nan, nan, 3, 4, 5, 6],
+ 'D': [nan, 0, 1, nan, 2, 3, 4, 5, 6, nan]
+ }, index=index)
+
class TestSparsePanel(TestCase,
test_panel.SafeForLongAndSparse,
test_panel.SafeForSparse):
_multiprocess_can_split_ = True
+
@classmethod
def assert_panel_equal(cls, x, y):
assert_sp_panel_equal(x, y)
def setUp(self):
self.data_dict = {
- 'ItemA' : panel_data1(),
- 'ItemB' : panel_data2(),
- 'ItemC' : panel_data3(),
- 'ItemD' : panel_data1(),
+ 'ItemA': panel_data1(),
+ 'ItemB': panel_data2(),
+ 'ItemC': panel_data3(),
+ 'ItemD': panel_data1(),
}
self.panel = SparsePanel(self.data_dict)
@@ -1628,7 +1643,7 @@ def _dense_comp(sparse):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
# nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure',
diff --git a/pandas/stats/fama_macbeth.py b/pandas/stats/fama_macbeth.py
index 2c8a3a65bd5ac..b75029c615735 100644
--- a/pandas/stats/fama_macbeth.py
+++ b/pandas/stats/fama_macbeth.py
@@ -90,7 +90,7 @@ def _results(self):
def _coef_table(self):
buffer = StringIO()
buffer.write('%13s %13s %13s %13s %13s %13s\n' %
- ('Variable', 'Beta', 'Std Err', 't-stat', 'CI 2.5%', 'CI 97.5%'))
+ ('Variable', 'Beta', 'Std Err', 't-stat', 'CI 2.5%', 'CI 97.5%'))
template = '%13s %13.4f %13.4f %13.2f %13.4f %13.4f\n'
for i, name in enumerate(self._cols):
@@ -178,7 +178,7 @@ def _calc_stats(self):
else:
begin = 0
- B = betas[max(obs_total[begin] - 1, 0) : obs_total[i]]
+ B = betas[max(obs_total[begin] - 1, 0): obs_total[i]]
mean_beta, std_beta, t_stat = _calc_t_stat(B, self._nw_lags_beta)
mean_betas.append(mean_beta)
std_betas.append(std_beta)
diff --git a/pandas/stats/interface.py b/pandas/stats/interface.py
index ff87aa1c9af26..d93eb83820822 100644
--- a/pandas/stats/interface.py
+++ b/pandas/stats/interface.py
@@ -117,7 +117,7 @@ def ols(**kwargs):
del kwargs[rolling_field]
if panel:
- if pool == False:
+ if pool is False:
klass = NonPooledPanelOLS
else:
klass = PanelOLS
@@ -125,7 +125,7 @@ def ols(**kwargs):
klass = OLS
else:
if panel:
- if pool == False:
+ if pool is False:
klass = NonPooledPanelOLS
else:
klass = MovingPanelOLS
diff --git a/pandas/stats/moments.py b/pandas/stats/moments.py
index b4e367ac1598b..7f44af28314b3 100644
--- a/pandas/stats/moments.py
+++ b/pandas/stats/moments.py
@@ -149,7 +149,7 @@ def rolling_count(arg, window, freq=None, center=False, time_rule=None):
converted = np.isfinite(values).astype(float)
result = rolling_sum(converted, window, min_periods=1,
- center=center) # already converted
+ center=center) # already converted
# putmask here?
result[np.isnan(result)] = 0
@@ -164,6 +164,7 @@ def rolling_cov(arg1, arg2, window, min_periods=None, freq=None,
arg1 = _conv_timerule(arg1, freq, time_rule)
arg2 = _conv_timerule(arg2, freq, time_rule)
window = min(window, len(arg1), len(arg2))
+
def _get_cov(X, Y):
mean = lambda x: rolling_mean(x, window, min_periods)
count = rolling_count(X + Y, window)
@@ -179,6 +180,7 @@ def _get_cov(X, Y):
rs[-offset:] = np.nan
return rs
+
@Substitution("Moving sample correlation", _binary_arg_flex, _flex_retval)
@Appender(_doc_template)
def rolling_corr(arg1, arg2, window, min_periods=None, freq=None,
@@ -188,8 +190,8 @@ def _get_corr(a, b):
center=center, time_rule=time_rule)
den = (rolling_std(a, window, min_periods, freq=freq,
center=center, time_rule=time_rule) *
- rolling_std(b, window, min_periods, freq=freq,
- center=center, time_rule=time_rule))
+ rolling_std(b, window, min_periods, freq=freq,
+ center=center, time_rule=time_rule))
return num / den
return _flex_binary_moment(arg1, arg2, _get_corr)
@@ -288,6 +290,7 @@ def _rolling_moment(arg, window, func, minp, axis=0, freq=None,
rs = _center_window(rs, window, axis)
return rs
+
def _center_window(rs, window, axis):
offset = int((window - 1) / 2.)
if isinstance(rs, (Series, DataFrame, Panel)):
@@ -306,6 +309,7 @@ def _center_window(rs, window, axis):
rs[tuple(na_indexer)] = np.nan
return rs
+
def _process_data_structure(arg, kill_inf=True):
if isinstance(arg, DataFrame):
return_hook = lambda v: type(arg)(v, index=arg.index,
@@ -355,7 +359,7 @@ def ewma(arg, com=None, span=None, min_periods=0, freq=None, time_rule=None,
def _ewma(v):
result = algos.ewma(v, com, int(adjust))
first_index = _first_valid_index(v)
- result[first_index : first_index + min_periods] = NaN
+ result[first_index: first_index + min_periods] = NaN
return result
return_hook, values = _process_data_structure(arg)
@@ -461,7 +465,7 @@ def _conv_timerule(arg, freq, time_rule):
if time_rule is not None:
import warnings
warnings.warn("time_rule argument is deprecated, replace with freq",
- FutureWarning)
+ FutureWarning)
freq = time_rule
@@ -576,6 +580,7 @@ def call_cython(arg, window, minp):
return _rolling_moment(arg, window, call_cython, min_periods,
freq=freq, center=center, time_rule=time_rule)
+
def rolling_window(arg, window=None, win_type=None, min_periods=None,
freq=None, center=False, mean=True, time_rule=None,
axis=0, **kwargs):
@@ -628,14 +633,14 @@ def rolling_window(arg, window=None, win_type=None, min_periods=None,
raise ValueError(('Do not specify window type if using custom '
'weights'))
window = com._asarray_tuplesafe(window).astype(float)
- elif com.is_integer(window): #window size
+ elif com.is_integer(window): # window size
if win_type is None:
raise ValueError('Must specify window type')
try:
import scipy.signal as sig
except ImportError:
raise ImportError('Please install scipy to generate window weight')
- win_type = _validate_win_type(win_type, kwargs) # may pop from kwargs
+ win_type = _validate_win_type(win_type, kwargs) # may pop from kwargs
window = sig.get_window(win_type, window).astype(float)
else:
raise ValueError('Invalid window %s' % str(window))
@@ -653,17 +658,19 @@ def rolling_window(arg, window=None, win_type=None, min_periods=None,
rs = _center_window(rs, len(window), axis)
return rs
+
def _validate_win_type(win_type, kwargs):
# may pop from kwargs
- arg_map = {'kaiser' : ['beta'],
- 'gaussian' : ['std'],
- 'general_gaussian' : ['power', 'width'],
- 'slepian' : ['width']}
+ arg_map = {'kaiser': ['beta'],
+ 'gaussian': ['std'],
+ 'general_gaussian': ['power', 'width'],
+ 'slepian': ['width']}
if win_type in arg_map:
return tuple([win_type] +
_pop_args(win_type, arg_map[win_type], kwargs))
return win_type
+
def _pop_args(win_type, arg_names, kwargs):
msg = '%s window requires %%s' % win_type
all_args = []
@@ -695,17 +702,20 @@ def call_cython(arg, window, minp, **kwds):
expanding_min = _expanding_func(algos.roll_min2, 'Expanding minimum')
expanding_sum = _expanding_func(algos.roll_sum, 'Expanding sum')
expanding_mean = _expanding_func(algos.roll_mean, 'Expanding mean')
-expanding_median = _expanding_func(algos.roll_median_cython, 'Expanding median')
+expanding_median = _expanding_func(
+ algos.roll_median_cython, 'Expanding median')
expanding_std = _expanding_func(_ts_std,
'Unbiased expanding standard deviation',
check_minp=_require_min_periods(2))
expanding_var = _expanding_func(algos.roll_var, 'Unbiased expanding variance',
- check_minp=_require_min_periods(2))
-expanding_skew = _expanding_func(algos.roll_skew, 'Unbiased expanding skewness',
- check_minp=_require_min_periods(3))
-expanding_kurt = _expanding_func(algos.roll_kurt, 'Unbiased expanding kurtosis',
- check_minp=_require_min_periods(4))
+ check_minp=_require_min_periods(2))
+expanding_skew = _expanding_func(
+ algos.roll_skew, 'Unbiased expanding skewness',
+ check_minp=_require_min_periods(3))
+expanding_kurt = _expanding_func(
+ algos.roll_kurt, 'Unbiased expanding kurtosis',
+ check_minp=_require_min_periods(4))
def expanding_count(arg, freq=None, center=False, time_rule=None):
diff --git a/pandas/stats/ols.py b/pandas/stats/ols.py
index d19898990022d..9ecf5c6ab715f 100644
--- a/pandas/stats/ols.py
+++ b/pandas/stats/ols.py
@@ -181,8 +181,8 @@ def _f_stat_raw(self):
try:
intercept = cols.get_loc('intercept')
- R = np.concatenate((R[0 : intercept], R[intercept + 1:]))
- r = np.concatenate((r[0 : intercept], r[intercept + 1:]))
+ R = np.concatenate((R[0: intercept], R[intercept + 1:]))
+ r = np.concatenate((r[0: intercept], r[intercept + 1:]))
except KeyError:
# no intercept
pass
@@ -485,7 +485,7 @@ def _coef_table(self):
p_value = results['p_value'][name]
line = coef_template % (name,
- beta[name], std_err, t_stat, p_value, CI1, CI2)
+ beta[name], std_err, t_stat, p_value, CI1, CI2)
buf.write(line)
@@ -941,8 +941,8 @@ def get_result_simple(Fst, d):
try:
intercept = items.get_loc('intercept')
- R = np.concatenate((R[0 : intercept], R[intercept + 1:]))
- r = np.concatenate((r[0 : intercept], r[intercept + 1:]))
+ R = np.concatenate((R[0: intercept], R[intercept + 1:]))
+ r = np.concatenate((r[0: intercept], r[intercept + 1:]))
except KeyError:
# no intercept
pass
@@ -1313,6 +1313,8 @@ def _combine_rhs(rhs):
# A little kludge so we can use this method for both
# MovingOLS and MovingPanelOLS
+
+
def _y_converter(y):
y = y.values.squeeze()
if y.ndim == 0: # pragma: no cover
diff --git a/pandas/stats/plm.py b/pandas/stats/plm.py
index 7dde37822c02b..3173e05ae8e9d 100644
--- a/pandas/stats/plm.py
+++ b/pandas/stats/plm.py
@@ -182,7 +182,7 @@ def _convert_x(self, x):
cat_mapping[key] = dict(enumerate(distinct_values))
new_values = np.searchsorted(distinct_values, values)
x_converted[key] = DataFrame(new_values, index=df.index,
- columns=df.columns)
+ columns=df.columns)
if len(cat_mapping) == 0:
x_converted = x
@@ -262,7 +262,8 @@ def _add_categorical_dummies(self, panel, cat_mappings):
if dropped_dummy or not self._use_all_dummies:
if effect in self._dropped_dummies:
- to_exclude = mapped_name = self._dropped_dummies.get(effect)
+ to_exclude = mapped_name = self._dropped_dummies.get(
+ effect)
if val_map:
mapped_name = val_map[to_exclude]
@@ -273,7 +274,8 @@ def _add_categorical_dummies(self, panel, cat_mappings):
raise Exception('%s not in %s' % (to_exclude,
dummies.columns))
- self.log('-- Excluding dummy for %s: %s' % (effect, to_exclude))
+ self.log(
+ '-- Excluding dummy for %s: %s' % (effect, to_exclude))
dummies = dummies.filter(dummies.columns - [mapped_name])
dropped_dummy = True
@@ -604,8 +606,8 @@ def _var_beta_raw(self):
xx = xx - cum_xx[i - window]
result = _var_beta_panel(y_slice, x_slice, beta[n], xx, rmse[n],
- cluster_axis, self._nw_lags,
- nobs[n], df[n], self._nw_overlap)
+ cluster_axis, self._nw_lags,
+ nobs[n], df[n], self._nw_overlap)
results.append(result)
@@ -745,7 +747,7 @@ def __init__(self, y, x, window_type='full_sample', window=None,
def _var_beta_panel(y, x, beta, xx, rmse, cluster_axis,
- nw_lags, nobs, df, nw_overlap):
+ nw_lags, nobs, df, nw_overlap):
from pandas.core.frame import group_agg
xx_inv = math.inv(xx)
@@ -777,7 +779,7 @@ def _var_beta_panel(y, x, beta, xx, rmse, cluster_axis,
xox = 0
for i in range(len(x.index.levels[0])):
- xox += math.newey_west(m[i : i + 1], nw_lags,
+ xox += math.newey_west(m[i: i + 1], nw_lags,
nobs, df, nw_overlap)
return np.dot(xx_inv, np.dot(xox, xx_inv))
diff --git a/pandas/stats/tests/__init__.py b/pandas/stats/tests/__init__.py
index 8b137891791fe..e69de29bb2d1d 100644
--- a/pandas/stats/tests/__init__.py
+++ b/pandas/stats/tests/__init__.py
@@ -1 +0,0 @@
-
diff --git a/pandas/stats/tests/common.py b/pandas/stats/tests/common.py
index b2060d30ecd30..2866a36bc435a 100644
--- a/pandas/stats/tests/common.py
+++ b/pandas/stats/tests/common.py
@@ -8,7 +8,7 @@
import numpy as np
from pandas import DataFrame, bdate_range
-from pandas.util.testing import assert_almost_equal # imported in other tests
+from pandas.util.testing import assert_almost_equal # imported in other tests
N = 100
K = 4
@@ -17,13 +17,15 @@
COLS = ['Col' + c for c in string.ascii_uppercase[:K]]
+
def makeDataFrame():
data = DataFrame(np.random.randn(N, K),
- columns=COLS,
- index=DATE_RANGE)
+ columns=COLS,
+ index=DATE_RANGE)
return data
+
def getBasicDatasets():
A = makeDataFrame()
B = makeDataFrame()
@@ -31,12 +33,14 @@ def getBasicDatasets():
return A, B, C
+
def check_for_scipy():
try:
import scipy
except ImportError:
raise nose.SkipTest('no scipy')
+
def check_for_statsmodels():
_have_statsmodels = True
try:
@@ -53,7 +57,6 @@ def setUp(self):
check_for_scipy()
check_for_statsmodels()
-
self.A, self.B, self.C = getBasicDatasets()
self.createData1()
@@ -80,14 +83,14 @@ def createData1(self):
C = C[:30]
self.panel_y = A
- self.panel_x = {'B' : B, 'C' : C}
+ self.panel_x = {'B': B, 'C': C}
self.series_panel_y = A.filter(['ColA'])
- self.series_panel_x = {'B' : B.filter(['ColA']),
- 'C' : C.filter(['ColA'])}
+ self.series_panel_x = {'B': B.filter(['ColA']),
+ 'C': C.filter(['ColA'])}
self.series_y = A['ColA']
- self.series_x = {'B' : B['ColA'],
- 'C' : C['ColA']}
+ self.series_x = {'B': B['ColA'],
+ 'C': C['ColA']}
def createData2(self):
y_data = [[1, np.NaN],
@@ -98,7 +101,7 @@ def createData2(self):
datetime(2000, 1, 3)]
y_cols = ['A', 'B']
self.panel_y2 = DataFrame(np.array(y_data), index=y_index,
- columns=y_cols)
+ columns=y_cols)
x1_data = [[6, np.NaN],
[7, 8],
@@ -110,7 +113,7 @@ def createData2(self):
datetime(2000, 1, 4)]
x1_cols = ['A', 'B']
x1 = DataFrame(np.array(x1_data), index=x1_index,
- columns=x1_cols)
+ columns=x1_cols)
x2_data = [[13, 14, np.NaN],
[15, np.NaN, np.NaN],
@@ -124,9 +127,9 @@ def createData2(self):
datetime(2000, 1, 5)]
x2_cols = ['C', 'A', 'B']
x2 = DataFrame(np.array(x2_data), index=x2_index,
- columns=x2_cols)
+ columns=x2_cols)
- self.panel_x2 = {'x1' : x1, 'x2' : x2}
+ self.panel_x2 = {'x1': x1, 'x2': x2}
def createData3(self):
y_data = [[1, 2],
@@ -135,7 +138,7 @@ def createData3(self):
datetime(2000, 1, 2)]
y_cols = ['A', 'B']
self.panel_y3 = DataFrame(np.array(y_data), index=y_index,
- columns=y_cols)
+ columns=y_cols)
x1_data = [['A', 'B'],
['C', 'A']]
@@ -143,7 +146,7 @@ def createData3(self):
datetime(2000, 1, 2)]
x1_cols = ['A', 'B']
x1 = DataFrame(np.array(x1_data), index=x1_index,
- columns=x1_cols)
+ columns=x1_cols)
x2_data = [['foo', 'bar'],
['baz', 'foo']]
@@ -151,6 +154,6 @@ def createData3(self):
datetime(2000, 1, 2)]
x2_cols = ['A', 'B']
x2 = DataFrame(np.array(x2_data), index=x2_index,
- columns=x2_cols)
+ columns=x2_cols)
- self.panel_x3 = {'x1' : x1, 'x2' : x2}
+ self.panel_x3 = {'x1': x1, 'x2': x2}
diff --git a/pandas/stats/tests/test_fama_macbeth.py b/pandas/stats/tests/test_fama_macbeth.py
index f2ebef5fd53e5..ef262cfaf44bb 100644
--- a/pandas/stats/tests/test_fama_macbeth.py
+++ b/pandas/stats/tests/test_fama_macbeth.py
@@ -4,6 +4,7 @@
import numpy as np
+
class TestFamaMacBeth(BaseTest):
def testFamaMacBethRolling(self):
# self.checkFamaMacBethExtended('rolling', self.panel_x, self.panel_y,
@@ -24,7 +25,7 @@ def checkFamaMacBethExtended(self, window_type, x, y, **kwds):
**kwds)
self._check_stuff_works(result)
- index = result._index
+ index = result._index
time = len(index)
for i in xrange(time - window + 1):
@@ -57,5 +58,5 @@ def _check_stuff_works(self, result):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/stats/tests/test_math.py b/pandas/stats/tests/test_math.py
index 6553023f4f392..92dedb35f4512 100644
--- a/pandas/stats/tests/test_math.py
+++ b/pandas/stats/tests/test_math.py
@@ -25,6 +25,7 @@
except ImportError:
_have_statsmodels = False
+
class TestMath(unittest.TestCase):
_nan_locs = np.arange(20, 40)
@@ -63,5 +64,5 @@ def test_inv_illformed(self):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/stats/tests/test_moments.py b/pandas/stats/tests/test_moments.py
index 7d25fe342867b..a155b9ef060ef 100644
--- a/pandas/stats/tests/test_moments.py
+++ b/pandas/stats/tests/test_moments.py
@@ -10,7 +10,7 @@
from pandas import Series, DataFrame, bdate_range, isnull, notnull
from pandas.util.testing import (
assert_almost_equal, assert_series_equal, assert_frame_equal
- )
+)
from pandas.util.py3compat import PY3
import pandas.core.datetools as datetools
import pandas.stats.moments as mom
@@ -18,6 +18,7 @@
N, K = 100, 10
+
class TestMoments(unittest.TestCase):
_multiprocess_can_split_ = True
@@ -157,8 +158,8 @@ def test_cmov_window_special(self):
raise nose.SkipTest
win_types = ['kaiser', 'gaussian', 'general_gaussian', 'slepian']
- kwds = [{'beta' : 1.}, {'std' : 1.}, {'power' : 2., 'width' : 2.},
- {'width' : 0.5}]
+ kwds = [{'beta': 1.}, {'std': 1.}, {'power': 2., 'width': 2.},
+ {'width': 0.5}]
for wt, k in zip(win_types, kwds):
vals = np.random.randn(10)
@@ -174,28 +175,30 @@ def test_rolling_median(self):
def test_rolling_min(self):
self._check_moment_func(mom.rolling_min, np.min)
- a = np.array([1,2,3,4,5])
+ a = np.array([1, 2, 3, 4, 5])
b = mom.rolling_min(a, window=100, min_periods=1)
assert_almost_equal(b, np.ones(len(a)))
- self.assertRaises(ValueError, mom.rolling_min, np.array([1,2,3]), window=3, min_periods=5)
+ self.assertRaises(ValueError, mom.rolling_min, np.array([1, 2, 3]),
+ window=3, min_periods=5)
def test_rolling_max(self):
self._check_moment_func(mom.rolling_max, np.max)
- a = np.array([1,2,3,4,5])
+ a = np.array([1, 2, 3, 4, 5])
b = mom.rolling_max(a, window=100, min_periods=1)
assert_almost_equal(a, b)
- self.assertRaises(ValueError, mom.rolling_max, np.array([1,2,3]), window=3, min_periods=5)
+ self.assertRaises(ValueError, mom.rolling_max, np.array([1, 2, 3]),
+ window=3, min_periods=5)
def test_rolling_quantile(self):
qs = [.1, .5, .9]
def scoreatpercentile(a, per):
- values = np.sort(a,axis=0)
+ values = np.sort(a, axis=0)
- idx = per /1. * (values.shape[0] - 1)
+ idx = per / 1. * (values.shape[0] - 1)
return values[int(idx)]
for q in qs:
@@ -204,6 +207,7 @@ def f(x, window, min_periods=None, freq=None, center=False):
min_periods=min_periods,
freq=freq,
center=center)
+
def alt(x):
return scoreatpercentile(x, q)
@@ -211,7 +215,8 @@ def alt(x):
def test_rolling_apply(self):
ser = Series([])
- assert_series_equal(ser, mom.rolling_apply(ser, 10, lambda x:x.mean()))
+ assert_series_equal(
+ ser, mom.rolling_apply(ser, 10, lambda x: x.mean()))
def roll_mean(x, window, min_periods=None, freq=None, center=False):
return mom.rolling_apply(x, window,
@@ -239,13 +244,13 @@ def test_rolling_std(self):
lambda x: np.std(x, ddof=0))
def test_rolling_std_1obs(self):
- result = mom.rolling_std(np.array([1.,2.,3.,4.,5.]),
+ result = mom.rolling_std(np.array([1., 2., 3., 4., 5.]),
1, min_periods=1)
expected = np.zeros(5)
assert_almost_equal(result, expected)
- result = mom.rolling_std(np.array([np.nan,np.nan,3.,4.,5.]),
+ result = mom.rolling_std(np.array([np.nan, np.nan, 3., 4., 5.]),
3, min_periods=2)
self.assert_(np.isnan(result[2]))
@@ -378,7 +383,6 @@ def _check_ndarray(self, func, static_comp, window=50,
result = func(arr, 50)
assert_almost_equal(result[-1], static_comp(arr[10:-10]))
-
if has_center:
if has_min_periods:
result = func(arr, 20, min_periods=15, center=True)
@@ -458,7 +462,6 @@ def _check_structures(self, func, static_comp,
assert_series_equal(series_xp, series_rs)
assert_frame_equal(frame_xp, frame_rs)
-
def test_legacy_time_rule_arg(self):
from StringIO import StringIO
# suppress deprecation warnings
@@ -624,9 +627,9 @@ def test_expanding_apply(self):
def expanding_mean(x, min_periods=1, freq=None):
return mom.expanding_apply(x,
- lambda x: x.mean(),
- min_periods=min_periods,
- freq=freq)
+ lambda x: x.mean(),
+ min_periods=min_periods,
+ freq=freq)
self._check_expanding(expanding_mean, np.mean)
def test_expanding_corr(self):
@@ -728,5 +731,5 @@ def _check_expanding(self, func, static_comp, has_min_periods=True,
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/stats/tests/test_ols.py b/pandas/stats/tests/test_ols.py
index 60ec676439db6..ebdb8e178d03b 100644
--- a/pandas/stats/tests/test_ols.py
+++ b/pandas/stats/tests/test_ols.py
@@ -33,10 +33,12 @@
except ImportError:
_have_statsmodels = False
+
def _check_repr(obj):
repr(obj)
str(obj)
+
def _compare_ols_results(model1, model2):
assert(type(model1) == type(model2))
@@ -45,12 +47,15 @@ def _compare_ols_results(model1, model2):
else:
_compare_fullsample_ols(model1, model2)
+
def _compare_fullsample_ols(model1, model2):
assert_series_equal(model1.beta, model2.beta)
+
def _compare_moving_ols(model1, model2):
assert_frame_equal(model1.beta, model2.beta)
+
class TestOLS(BaseTest):
_multiprocess_can_split_ = True
@@ -87,7 +92,8 @@ def testOLSWithDatasets_scotland(self):
self.checkDataSet(sm.datasets.scotland.load())
# degenerate case fails on some platforms
- # self.checkDataSet(datasets.ccard.load(), 39, 49) # one col in X all 0s
+ # self.checkDataSet(datasets.ccard.load(), 39, 49)  # one col in X all 0s
def testWLS(self):
# WLS centered SS changed (fixed) in 0.5.0
@@ -105,7 +111,7 @@ def testWLS(self):
self._check_wls(X, Y, weights)
def _check_wls(self, x, y, weights):
- result = ols(y=y, x=x, weights=1/weights)
+ result = ols(y=y, x=x, weights=1 / weights)
combined = x.copy()
combined['__y__'] = y
@@ -116,7 +122,7 @@ def _check_wls(self, x, y, weights):
aweights = combined.pop('__weights__').values
exog = sm.add_constant(combined.values, prepend=False)
- sm_result = sm.WLS(endog, exog, weights=1/aweights).fit()
+ sm_result = sm.WLS(endog, exog, weights=1 / aweights).fit()
assert_almost_equal(sm_result.params, result._beta_raw)
assert_almost_equal(sm_result.resid, result._resid_raw)
@@ -125,8 +131,8 @@ def _check_wls(self, x, y, weights):
self.checkMovingOLS('expanding', x, y, weights=weights)
def checkDataSet(self, dataset, start=None, end=None, skip_moving=False):
- exog = dataset.exog[start : end]
- endog = dataset.endog[start : end]
+ exog = dataset.exog[start: end]
+ endog = dataset.endog[start: end]
x = DataFrame(exog, index=np.arange(exog.shape[0]),
columns=np.arange(exog.shape[1]))
y = Series(endog, index=np.arange(len(endog)))
@@ -241,6 +247,7 @@ def test_ols_object_dtype(self):
model = ols(y=df[0], x=df[1])
summary = repr(model)
+
class TestOLSMisc(unittest.TestCase):
_multiprocess_can_split_ = True
@@ -323,7 +330,7 @@ def test_predict(self):
x3 = x2 + 10
pred3 = model1.predict(x=x3)
x3['intercept'] = 1.
- x3 = x3.reindex(columns = model1.beta.index)
+ x3 = x3.reindex(columns=model1.beta.index)
expected = Series(np.dot(x3.values, model1.beta.values), x3.index)
assert_series_equal(expected, pred3)
@@ -332,13 +339,13 @@ def test_predict(self):
assert_series_equal(Series(0., pred4.index), pred4)
def test_predict_longer_exog(self):
- exogenous = {"1998": "4760","1999": "5904","2000": "4504",
- "2001": "9808","2002": "4241","2003": "4086",
- "2004": "4687","2005": "7686","2006": "3740",
- "2007": "3075","2008": "3753","2009": "4679",
- "2010": "5468","2011": "7154","2012": "4292",
- "2013": "4283","2014": "4595","2015": "9194",
- "2016": "4221","2017": "4520"}
+ exogenous = {"1998": "4760", "1999": "5904", "2000": "4504",
+ "2001": "9808", "2002": "4241", "2003": "4086",
+ "2004": "4687", "2005": "7686", "2006": "3740",
+ "2007": "3075", "2008": "3753", "2009": "4679",
+ "2010": "5468", "2011": "7154", "2012": "4292",
+ "2013": "4283", "2014": "4595", "2015": "9194",
+ "2016": "4221", "2017": "4520"}
endogenous = {"1998": "691", "1999": "1580", "2000": "80",
"2001": "1450", "2002": "555", "2003": "956",
"2004": "877", "2005": "614", "2006": "468",
@@ -365,7 +372,7 @@ def test_series_rhs(self):
y = tm.makeTimeSeries()
x = tm.makeTimeSeries()
model = ols(y=y, x=x)
- expected = ols(y=y, x={'x' : x})
+ expected = ols(y=y, x={'x': x})
assert_series_equal(model.beta, expected.beta)
def test_various_attributes(self):
@@ -389,13 +396,13 @@ def test_catch_regressor_overlap(self):
df2 = tm.makeTimeDataFrame().ix[:, ['B', 'C', 'D']]
y = tm.makeTimeSeries()
- data = {'foo' : df1, 'bar' : df2}
+ data = {'foo': df1, 'bar': df2}
self.assertRaises(Exception, ols, y=y, x=data)
def test_plm_ctor(self):
y = tm.makeTimeDataFrame()
- x = {'a' : tm.makeTimeDataFrame(),
- 'b' : tm.makeTimeDataFrame()}
+ x = {'a': tm.makeTimeDataFrame(),
+ 'b': tm.makeTimeDataFrame()}
model = ols(y=y, x=x, intercept=False)
model.summary
@@ -405,8 +412,8 @@ def test_plm_ctor(self):
def test_plm_attrs(self):
y = tm.makeTimeDataFrame()
- x = {'a' : tm.makeTimeDataFrame(),
- 'b' : tm.makeTimeDataFrame()}
+ x = {'a': tm.makeTimeDataFrame(),
+ 'b': tm.makeTimeDataFrame()}
rmodel = ols(y=y, x=x, window=10)
model = ols(y=y, x=x)
@@ -415,16 +422,16 @@ def test_plm_attrs(self):
def test_plm_lagged_y_predict(self):
y = tm.makeTimeDataFrame()
- x = {'a' : tm.makeTimeDataFrame(),
- 'b' : tm.makeTimeDataFrame()}
+ x = {'a': tm.makeTimeDataFrame(),
+ 'b': tm.makeTimeDataFrame()}
model = ols(y=y, x=x, window=10)
result = model.lagged_y_predict(2)
def test_plm_f_test(self):
y = tm.makeTimeDataFrame()
- x = {'a' : tm.makeTimeDataFrame(),
- 'b' : tm.makeTimeDataFrame()}
+ x = {'a': tm.makeTimeDataFrame(),
+ 'b': tm.makeTimeDataFrame()}
model = ols(y=y, x=x)
@@ -438,14 +445,15 @@ def test_plm_f_test(self):
def test_plm_exclude_dummy_corner(self):
y = tm.makeTimeDataFrame()
- x = {'a' : tm.makeTimeDataFrame(),
- 'b' : tm.makeTimeDataFrame()}
+ x = {'a': tm.makeTimeDataFrame(),
+ 'b': tm.makeTimeDataFrame()}
- model = ols(y=y, x=x, entity_effects=True, dropped_dummies={'entity' : 'D'})
+ model = ols(
+ y=y, x=x, entity_effects=True, dropped_dummies={'entity': 'D'})
model.summary
self.assertRaises(Exception, ols, y=y, x=x, entity_effects=True,
- dropped_dummies={'entity' : 'E'})
+ dropped_dummies={'entity': 'E'})
def test_columns_tuples_summary(self):
# #1837
@@ -456,6 +464,7 @@ def test_columns_tuples_summary(self):
model = ols(y=Y, x=X)
model.summary
+
class TestPanelOLS(BaseTest):
_multiprocess_can_split_ = True
@@ -473,7 +482,8 @@ def testFiltering(self):
index = x.index.get_level_values(0)
index = Index(sorted(set(index)))
exp_index = Index([datetime(2000, 1, 1), datetime(2000, 1, 3)])
- self.assertTrue;(exp_index.equals(index))
+ self.assertTrue(exp_index.equals(index))
index = x.index.get_level_values(1)
index = Index(sorted(set(index)))
@@ -507,8 +517,8 @@ def testFiltering(self):
def test_wls_panel(self):
y = tm.makeTimeDataFrame()
- x = Panel({'x1' : tm.makeTimeDataFrame(),
- 'x2' : tm.makeTimeDataFrame()})
+ x = Panel({'x1': tm.makeTimeDataFrame(),
+ 'x2': tm.makeTimeDataFrame()})
y.ix[[1, 7], 'A'] = np.nan
y.ix[[6, 15], 'B'] = np.nan
@@ -517,7 +527,7 @@ def test_wls_panel(self):
stack_y = y.stack()
stack_x = DataFrame(dict((k, v.stack())
- for k, v in x.iterkv()))
+ for k, v in x.iterkv()))
weights = x.std('items')
stack_weights = weights.stack()
@@ -526,8 +536,8 @@ def test_wls_panel(self):
stack_x.index = stack_x.index._tuple_index
stack_weights.index = stack_weights.index._tuple_index
- result = ols(y=y, x=x, weights=1/weights)
- expected = ols(y=stack_y, x=stack_x, weights=1/stack_weights)
+ result = ols(y=y, x=x, weights=1 / weights)
+ expected = ols(y=stack_y, x=stack_x, weights=1 / stack_weights)
assert_almost_equal(result.beta, expected.beta)
@@ -560,7 +570,7 @@ def testWithEntityEffects(self):
def testWithEntityEffectsAndDroppedDummies(self):
result = ols(y=self.panel_y2, x=self.panel_x2, entity_effects=True,
- dropped_dummies={'entity' : 'B'})
+ dropped_dummies={'entity': 'B'})
assert_almost_equal(result._y.values.flat, [1, 4, 5])
exp_x = DataFrame([[1., 6., 14., 1.], [1, 9, 17, 1], [0, 30, 48, 1]],
@@ -583,7 +593,7 @@ def testWithXEffects(self):
def testWithXEffectsAndDroppedDummies(self):
result = ols(y=self.panel_y2, x=self.panel_x2, x_effects=['x1'],
- dropped_dummies={'x1' : 30})
+ dropped_dummies={'x1': 30})
res = result._x
assert_almost_equal(result._y.values.flat, [1, 4, 5])
@@ -608,7 +618,7 @@ def testWithXEffectsAndConversion(self):
def testWithXEffectsAndConversionAndDroppedDummies(self):
result = ols(y=self.panel_y3, x=self.panel_x3, x_effects=['x1', 'x2'],
- dropped_dummies={'x2' : 'foo'})
+ dropped_dummies={'x2': 'foo'})
assert_almost_equal(result._y.values.flat, [1, 2, 3, 4])
exp_x = [[0, 0, 0, 0, 1], [1, 0, 1, 0, 1], [0, 1, 0, 1, 1],
@@ -631,7 +641,6 @@ def testForSeries(self):
self.series_x, self.series_y, nw_lags=1,
nw_overlap=True)
-
def testRolling(self):
self.checkMovingOLS(self.panel_x, self.panel_y)
@@ -671,7 +680,8 @@ def testRollingWithNeweyWestAndTimeEffectsAndEntityCluster(self):
time_effects=True)
def testExpanding(self):
- self.checkMovingOLS(self.panel_x, self.panel_y, window_type='expanding')
+ self.checkMovingOLS(
+ self.panel_x, self.panel_y, window_type='expanding')
def testNonPooled(self):
self.checkNonPooled(y=self.panel_y, x=self.panel_x)
@@ -762,6 +772,7 @@ def test_auto_rolling_window_type(self):
assert_frame_equal(window_model.beta, rolling_model.beta)
+
def _check_non_raw_results(model):
_check_repr(model)
_check_repr(model.resid)
@@ -769,6 +780,7 @@ def _check_non_raw_results(model):
_check_repr(model.y_fitted)
_check_repr(model.y_predict)
+
def _period_slice(panelModel, i):
index = panelModel._x_trans.index
period = index.levels[0][i]
@@ -777,6 +789,7 @@ def _period_slice(panelModel, i):
return slice(L, R)
+
class TestOLSFilter(unittest.TestCase):
_multiprocess_can_split_ = True
@@ -802,29 +815,29 @@ def setUp(self):
ts = Series([np.nan, 5, 8, 9, 7], index=date_index)
self.TS4 = ts
- data = {'x1' : self.TS2, 'x2' : self.TS4}
+ data = {'x1': self.TS2, 'x2': self.TS4}
self.DF1 = DataFrame(data=data)
- data = {'x1' : self.TS2, 'x2' : self.TS4}
+ data = {'x1': self.TS2, 'x2': self.TS4}
self.DICT1 = data
def testFilterWithSeriesRHS(self):
(lhs, rhs, weights, rhs_pre,
- index, valid) = _filter_data(self.TS1, {'x1' : self.TS2}, None)
+ index, valid) = _filter_data(self.TS1, {'x1': self.TS2}, None)
self.tsAssertEqual(self.TS1, lhs)
self.tsAssertEqual(self.TS2[:3], rhs['x1'])
self.tsAssertEqual(self.TS2, rhs_pre['x1'])
def testFilterWithSeriesRHS2(self):
(lhs, rhs, weights, rhs_pre,
- index, valid) = _filter_data(self.TS2, {'x1' : self.TS1}, None)
+ index, valid) = _filter_data(self.TS2, {'x1': self.TS1}, None)
self.tsAssertEqual(self.TS2[:3], lhs)
self.tsAssertEqual(self.TS1, rhs['x1'])
self.tsAssertEqual(self.TS1, rhs_pre['x1'])
def testFilterWithSeriesRHS3(self):
(lhs, rhs, weights, rhs_pre,
- index, valid) = _filter_data(self.TS3, {'x1' : self.TS4}, None)
+ index, valid) = _filter_data(self.TS3, {'x1': self.TS4}, None)
exp_lhs = self.TS3[2:3]
exp_rhs = self.TS4[2:3]
exp_rhs_pre = self.TS4[1:]
@@ -834,7 +847,7 @@ def testFilterWithSeriesRHS3(self):
def testFilterWithDataFrameRHS(self):
(lhs, rhs, weights, rhs_pre,
- index, valid) = _filter_data(self.TS1, self.DF1, None)
+ index, valid) = _filter_data(self.TS1, self.DF1, None)
exp_lhs = self.TS1[1:]
exp_rhs1 = self.TS2[1:3]
exp_rhs2 = self.TS4[1:3]
@@ -844,7 +857,7 @@ def testFilterWithDataFrameRHS(self):
def testFilterWithDictRHS(self):
(lhs, rhs, weights, rhs_pre,
- index, valid) = _filter_data(self.TS1, self.DICT1, None)
+ index, valid) = _filter_data(self.TS1, self.DICT1, None)
exp_lhs = self.TS1[1:]
exp_rhs1 = self.TS2[1:3]
exp_rhs2 = self.TS4[1:3]
@@ -858,5 +871,5 @@ def tsAssertEqual(self, ts1, ts2):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/stats/tests/test_var.py b/pandas/stats/tests/test_var.py
index b48fc5bb0cdc4..282a794980979 100644
--- a/pandas/stats/tests/test_var.py
+++ b/pandas/stats/tests/test_var.py
@@ -35,6 +35,7 @@
DECIMAL_3 = 3
DECIMAL_2 = 2
+
class CheckVAR(object):
def test_params(self):
assert_almost_equal(self.res1.params, self.res2.params, DECIMAL_3)
@@ -51,21 +52,21 @@ def test_df_eq(self):
def test_rmse(self):
results = self.res1.results
for i in range(len(results)):
- assert_almost_equal(results[i].mse_resid**.5,
- eval('self.res2.rmse_'+str(i+1)), DECIMAL_6)
+ assert_almost_equal(results[i].mse_resid ** .5,
+ eval('self.res2.rmse_' + str(i + 1)), DECIMAL_6)
def test_rsquared(self):
results = self.res1.results
for i in range(len(results)):
assert_almost_equal(results[i].rsquared,
- eval('self.res2.rsquared_'+str(i+1)), DECIMAL_3)
+ eval('self.res2.rsquared_' + str(i + 1)), DECIMAL_3)
def test_llf(self):
results = self.res1.results
assert_almost_equal(self.res1.llf, self.res2.llf, DECIMAL_2)
for i in range(len(results)):
assert_almost_equal(results[i].llf,
- eval('self.res2.llf_'+str(i+1)), DECIMAL_2)
+ eval('self.res2.llf_' + str(i + 1)), DECIMAL_2)
def test_aic(self):
assert_almost_equal(self.res1.aic, self.res2.aic)
@@ -89,8 +90,8 @@ def test_bse(self):
class Foo(object):
def __init__(self):
data = sm.datasets.macrodata.load()
- data = data.data[['realinv','realgdp','realcons']].view((float,3))
- data = diff(log(data),axis=0)
+ data = data.data[['realinv', 'realgdp', 'realcons']].view((float, 3))
+ data = diff(log(data), axis=0)
self.res1 = VAR2(endog=data).fit(maxlag=2)
from results import results_var
self.res2 = results_var.MacrodataResults()
@@ -137,14 +138,15 @@ def plot(self, names=None):
def serial_test(self, lags_pt=16, type='PT.asymptotic'):
f = r['serial.test']
- test = f(self._estimate, **{'lags.pt' : lags_pt,
- 'type' : type})
+ test = f(self._estimate, **{'lags.pt': lags_pt,
+ 'type': type})
return test
def data_summary(self):
print r.summary(self.rdata)
+
class TestVAR(TestCase):
def setUp(self):
diff --git a/pandas/stats/var.py b/pandas/stats/var.py
index a4eb8920a3b40..9390eef95700a 100644
--- a/pandas/stats/var.py
+++ b/pandas/stats/var.py
@@ -129,7 +129,8 @@ def granger_causality(self):
for col in self._columns:
d[col] = {}
for i in xrange(1, 1 + self._p):
- lagged_data = self._lagged_data[i].filter(self._columns - [col])
+ lagged_data = self._lagged_data[i].filter(
+ self._columns - [col])
for key, value in lagged_data.iteritems():
d[col][_make_param_name(i, key)] = value
@@ -312,9 +313,9 @@ def _data_xs(self, i):
def _forecast_cov_raw(self, n):
resid = self._forecast_cov_resid_raw(n)
- #beta = self._forecast_cov_beta_raw(n)
+ # beta = self._forecast_cov_beta_raw(n)
- #return [a + b for a, b in izip(resid, beta)]
+ # return [a + b for a, b in izip(resid, beta)]
# TODO: ignore the beta forecast std err until it's verified
return resid
@@ -428,7 +429,7 @@ def _lag_betas(self):
"""
k = self._k
b = self._beta_raw
- return [b[k * i : k * (i + 1)].T for i in xrange(self._p)]
+ return [b[k * i: k * (i + 1)].T for i in xrange(self._p)]
@cache_readonly
def _lagged_data(self):
diff --git a/pandas/tests/__init__.py b/pandas/tests/__init__.py
index 8b137891791fe..e69de29bb2d1d 100644
--- a/pandas/tests/__init__.py
+++ b/pandas/tests/__init__.py
@@ -1 +0,0 @@
-
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 3bf64e5d3d8ce..8706bb9cf7f4f 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -11,6 +11,7 @@
class TestMatch(unittest.TestCase):
_multiprocess_can_split_ = True
+
def test_ints(self):
values = np.array([0, 2, 1])
to_match = np.array([0, 1, 2, 2, 0, 1, 3, 0])
@@ -30,6 +31,7 @@ def test_strings(self):
class TestUnique(unittest.TestCase):
_multiprocess_can_split_ = True
+
def test_ints(self):
arr = np.random.randint(0, 100, size=50)
@@ -44,7 +46,8 @@ def test_objects(self):
def test_object_refcount_bug(self):
lst = ['A', 'B', 'C', 'D', 'E']
- for i in xrange(1000): len(algos.unique(lst))
+ for i in xrange(1000):
+ len(algos.unique(lst))
def test_on_index_object(self):
mindex = pd.MultiIndex.from_arrays([np.arange(5).repeat(5),
@@ -59,6 +62,7 @@ def test_on_index_object(self):
tm.assert_almost_equal(result, expected)
+
def test_quantile():
s = Series(np.random.randn(100))
@@ -68,5 +72,5 @@ def test_quantile():
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py
index 1562941ec474e..5b1d6c31403cb 100644
--- a/pandas/tests/test_common.py
+++ b/pandas/tests/test_common.py
@@ -17,20 +17,22 @@
_multiprocess_can_split_ = True
+
def test_is_sequence():
- is_seq=com._is_sequence
- assert(is_seq((1,2)))
- assert(is_seq([1,2]))
+ is_seq = com._is_sequence
+ assert(is_seq((1, 2)))
+ assert(is_seq([1, 2]))
assert(not is_seq("abcd"))
assert(not is_seq(u"abcd"))
assert(not is_seq(np.int64))
+
def test_notnull():
assert notnull(1.)
assert not notnull(None)
assert not notnull(np.NaN)
- with cf.option_context("mode.use_inf_as_null",False):
+ with cf.option_context("mode.use_inf_as_null", False):
assert notnull(np.inf)
assert notnull(-np.inf)
@@ -38,7 +40,7 @@ def test_notnull():
result = notnull(arr)
assert result.all()
- with cf.option_context("mode.use_inf_as_null",True):
+ with cf.option_context("mode.use_inf_as_null", True):
assert not notnull(np.inf)
assert not notnull(-np.inf)
@@ -46,12 +48,13 @@ def test_notnull():
result = notnull(arr)
assert result.sum() == 2
- with cf.option_context("mode.use_inf_as_null",False):
+ with cf.option_context("mode.use_inf_as_null", False):
float_series = Series(np.random.randn(5))
obj_series = Series(np.random.randn(5), dtype=object)
assert(isinstance(notnull(float_series), Series))
assert(isinstance(notnull(obj_series), Series))
+
def test_isnull():
assert not isnull(1.)
assert isnull(None)
@@ -77,7 +80,7 @@ def test_isnull_lists():
exp = np.array([[False]])
assert(np.array_equal(result, exp))
- result = isnull([[1],[2]])
+ result = isnull([[1], [2]])
exp = np.array([[False], [False]])
assert(np.array_equal(result, exp))
@@ -103,19 +106,23 @@ def test_isnull_datetime():
assert(mask[0])
assert(not mask[1:].any())
+
def test_any_none():
assert(com._any_none(1, 2, 3, None))
assert(not com._any_none(1, 2, 3, 4))
+
def test_all_not_none():
assert(com._all_not_none(1, 2, 3, 4))
assert(not com._all_not_none(1, 2, 3, None))
assert(not com._all_not_none(None, None, None, None))
+
def test_rands():
r = com.rands(10)
assert(len(r) == 10)
+
def test_adjoin():
data = [['a', 'b', 'c'],
['dd', 'ee', 'ff'],
@@ -126,6 +133,7 @@ def test_adjoin():
assert(adjoined == expected)
+
def test_iterpairs():
data = [1, 2, 3, 4]
expected = [(1, 2),
@@ -136,17 +144,18 @@ def test_iterpairs():
assert(result == expected)
+
def test_split_ranges():
def _bin(x, width):
"return int(x) as a base2 string of given width"
- return ''.join(str((x>>i)&1) for i in xrange(width-1,-1,-1))
+ return ''.join(str((x >> i) & 1) for i in xrange(width - 1, -1, -1))
def test_locs(mask):
nfalse = sum(np.array(mask) == 0)
- remaining=0
+ remaining = 0
for s, e in com.split_ranges(mask):
- remaining += e-s
+ remaining += e - s
assert 0 not in mask[s:e]
@@ -154,10 +163,10 @@ def test_locs(mask):
assert remaining + nfalse == len(mask)
# exhaustively test all possible mask sequences of length 8
- ncols=8
- for i in range(2**ncols):
- cols=map(int,list(_bin(i,ncols))) # count up in base2
- mask=[cols[i] == 1 for i in range(len(cols))]
+ ncols = 8
+ for i in range(2 ** ncols):
+ cols = map(int, list(_bin(i, ncols))) # count up in base2
+ mask = [cols[i] == 1 for i in range(len(cols))]
test_locs(mask)
# base cases
@@ -165,24 +174,28 @@ def test_locs(mask):
test_locs([0])
test_locs([1])
+
def test_indent():
s = 'a b c\nd e f'
result = com.indent(s, spaces=6)
assert(result == ' a b c\n d e f')
+
def test_banner():
ban = com.banner('hi')
assert(ban == ('%s\nhi\n%s' % ('=' * 80, '=' * 80)))
+
def test_map_indices_py():
data = [4, 3, 2, 1]
- expected = {4 : 0, 3 : 1, 2 : 2, 1 : 3}
+ expected = {4: 0, 3: 1, 2: 2, 1: 3}
result = com.map_indices_py(data)
assert(result == expected)
+
def test_union():
a = [1, 2, 3]
b = [4, 5, 6]
@@ -191,6 +204,7 @@ def test_union():
assert((a + b) == union)
+
def test_difference():
a = [1, 2, 3]
b = [1, 2, 3, 4, 5, 6]
@@ -199,6 +213,7 @@ def test_difference():
assert([4, 5, 6] == inter)
+
def test_intersection():
a = [1, 2, 3]
b = [1, 2, 3, 4, 5, 6]
@@ -207,17 +222,19 @@ def test_intersection():
assert(a == inter)
+
def test_groupby():
values = ['foo', 'bar', 'baz', 'baz2', 'qux', 'foo3']
- expected = {'f' : ['foo', 'foo3'],
- 'b' : ['bar', 'baz', 'baz2'],
- 'q' : ['qux']}
+ expected = {'f': ['foo', 'foo3'],
+ 'b': ['bar', 'baz', 'baz2'],
+ 'q': ['qux']}
grouped = com.groupby(values, lambda x: x[0])
for k, v in grouped:
assert v == expected[k]
+
def test_ensure_int32():
values = np.arange(10, dtype=np.int32)
result = com._ensure_int32(values)
@@ -242,26 +259,26 @@ def test_ensure_int32():
# expected = u"\u05d0".encode('utf-8')
# assert (result == expected)
+
def test_pprint_thing():
if py3compat.PY3:
raise nose.SkipTest
- pp_t=com.pprint_thing
+ pp_t = com.pprint_thing
- assert(pp_t('a')==u'a')
- assert(pp_t(u'a')==u'a')
- assert(pp_t(None)=='')
- assert(pp_t(u'\u05d0')==u'\u05d0')
- assert(pp_t((u'\u05d0',u'\u05d1'))==u'(\u05d0, \u05d1)')
- assert(pp_t((u'\u05d0',(u'\u05d1',u'\u05d2')))==
+ assert(pp_t('a') == u'a')
+ assert(pp_t(u'a') == u'a')
+ assert(pp_t(None) == '')
+ assert(pp_t(u'\u05d0') == u'\u05d0')
+ assert(pp_t((u'\u05d0', u'\u05d1')) == u'(\u05d0, \u05d1)')
+ assert(pp_t((u'\u05d0', (u'\u05d1', u'\u05d2'))) ==
u'(\u05d0, (\u05d1, \u05d2))')
- assert(pp_t(('foo',u'\u05d0',(u'\u05d0',u'\u05d0')))==
+ assert(pp_t(('foo', u'\u05d0', (u'\u05d0', u'\u05d0'))) ==
u'(foo, \u05d0, (\u05d0, \u05d0))')
-
# escape embedded tabs in string
# GH #2038
- assert not "\t" in pp_t("a\tb",escape_chars=("\t",))
+ assert not "\t" in pp_t("a\tb", escape_chars=("\t",))
class TestTake(unittest.TestCase):
@@ -391,7 +408,7 @@ def test_2d_float32(self):
# test with float64 out buffer
out = np.empty((len(indexer), arr.shape[1]), dtype='f8')
- com.take_2d(arr, indexer, out=out) # it works!
+ com.take_2d(arr, indexer, out=out) # it works!
# axis=1
result = com.take_2d(arr, indexer, axis=1)
@@ -404,5 +421,5 @@ def test_2d_float32(self):
tm.assert_almost_equal(result, expected)
if __name__ == '__main__':
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tests/test_config.py b/pandas/tests/test_config.py
index 716402689b70f..11800cfc4e38f 100644
--- a/pandas/tests/test_config.py
+++ b/pandas/tests/test_config.py
@@ -6,31 +6,34 @@
import warnings
import nose
+
class TestConfig(unittest.TestCase):
_multiprocess_can_split_ = True
- def __init__(self,*args):
- super(TestConfig,self).__init__(*args)
+
+ def __init__(self, *args):
+ super(TestConfig, self).__init__(*args)
from copy import deepcopy
self.cf = pd.core.config
- self.gc=deepcopy(getattr(self.cf, '_global_config'))
- self.do=deepcopy(getattr(self.cf, '_deprecated_options'))
- self.ro=deepcopy(getattr(self.cf, '_registered_options'))
+ self.gc = deepcopy(getattr(self.cf, '_global_config'))
+ self.do = deepcopy(getattr(self.cf, '_deprecated_options'))
+ self.ro = deepcopy(getattr(self.cf, '_registered_options'))
def setUp(self):
setattr(self.cf, '_global_config', {})
- setattr(self.cf, 'options', self.cf.DictWrapper(self.cf._global_config))
+ setattr(
+ self.cf, 'options', self.cf.DictWrapper(self.cf._global_config))
setattr(self.cf, '_deprecated_options', {})
setattr(self.cf, '_registered_options', {})
def tearDown(self):
- setattr(self.cf, '_global_config',self.gc)
+ setattr(self.cf, '_global_config', self.gc)
setattr(self.cf, '_deprecated_options', self.do)
setattr(self.cf, '_registered_options', self.ro)
def test_api(self):
- #the pandas object exposes the user API
+ # the pandas object exposes the user API
self.assertTrue(hasattr(pd, 'get_option'))
self.assertTrue(hasattr(pd, 'set_option'))
self.assertTrue(hasattr(pd, 'reset_option'))
@@ -49,11 +52,10 @@ def test_register_option(self):
'doc')
# no python keywords
- self.assertRaises(ValueError, self.cf.register_option, 'for',0)
+ self.assertRaises(ValueError, self.cf.register_option, 'for', 0)
# must be valid identifier (ensure attribute access works)
self.assertRaises(ValueError, self.cf.register_option,
- 'Oh my Goddess!',0)
-
+ 'Oh my Goddess!', 0)
# we can register options several levels deep
# without predefining the intermediate steps
@@ -71,35 +73,44 @@ def test_describe_option(self):
self.cf.register_option('c.d.e2', 1, 'doc4')
self.cf.register_option('f', 1)
self.cf.register_option('g.h', 1)
- self.cf.deprecate_option('g.h',rkey="blah")
+ self.cf.deprecate_option('g.h', rkey="blah")
# non-existent keys raise KeyError
self.assertRaises(KeyError, self.cf.describe_option, 'no.such.key')
# we can get the description for any key we registered
- self.assertTrue('doc' in self.cf.describe_option('a',_print_desc=False))
- self.assertTrue('doc2' in self.cf.describe_option('b',_print_desc=False))
- self.assertTrue('precated' in self.cf.describe_option('b',_print_desc=False))
-
- self.assertTrue('doc3' in self.cf.describe_option('c.d.e1',_print_desc=False))
- self.assertTrue('doc4' in self.cf.describe_option('c.d.e2',_print_desc=False))
+ self.assertTrue(
+ 'doc' in self.cf.describe_option('a', _print_desc=False))
+ self.assertTrue(
+ 'doc2' in self.cf.describe_option('b', _print_desc=False))
+ self.assertTrue(
+ 'precated' in self.cf.describe_option('b', _print_desc=False))
+
+ self.assertTrue(
+ 'doc3' in self.cf.describe_option('c.d.e1', _print_desc=False))
+ self.assertTrue(
+ 'doc4' in self.cf.describe_option('c.d.e2', _print_desc=False))
# if no doc is specified we get a default message
# saying "description not available"
- self.assertTrue('vailable' in self.cf.describe_option('f',_print_desc=False))
- self.assertTrue('vailable' in self.cf.describe_option('g.h',_print_desc=False))
- self.assertTrue('precated' in self.cf.describe_option('g.h',_print_desc=False))
- self.assertTrue('blah' in self.cf.describe_option('g.h',_print_desc=False))
+ self.assertTrue(
+ 'vailable' in self.cf.describe_option('f', _print_desc=False))
+ self.assertTrue(
+ 'vailable' in self.cf.describe_option('g.h', _print_desc=False))
+ self.assertTrue(
+ 'precated' in self.cf.describe_option('g.h', _print_desc=False))
+ self.assertTrue(
+ 'blah' in self.cf.describe_option('g.h', _print_desc=False))
def test_case_insensitive(self):
self.cf.register_option('KanBAN', 1, 'doc')
- self.assertTrue('doc' in self.cf.describe_option('kanbaN',_print_desc=False))
+ self.assertTrue(
+ 'doc' in self.cf.describe_option('kanbaN', _print_desc=False))
self.assertEqual(self.cf.get_option('kanBaN'), 1)
- self.cf.set_option('KanBan',2)
+ self.cf.set_option('KanBan', 2)
self.assertEqual(self.cf.get_option('kAnBaN'), 2)
-
# gets of non-existent keys fail
self.assertRaises(KeyError, self.cf.get_option, 'no_such_option')
self.cf.deprecate_option('KanBan')
@@ -149,7 +160,8 @@ def test_validation(self):
self.cf.set_option('a', 2) # int is_int
self.cf.set_option('b.c', 'wurld') # str is_str
- self.assertRaises(ValueError, self.cf.set_option, 'a', None) # None not is_int
+ self.assertRaises(
+ ValueError, self.cf.set_option, 'a', None) # None not is_int
self.assertRaises(ValueError, self.cf.set_option, 'a', 'ab')
self.assertRaises(ValueError, self.cf.set_option, 'b.c', 1)
@@ -188,13 +200,13 @@ def test_reset_option_all(self):
self.assertEqual(self.cf.get_option('a'), 1)
self.assertEqual(self.cf.get_option('b.c'), 'hullo')
-
def test_deprecate_option(self):
import sys
- self.cf.deprecate_option('foo') # we can deprecate non-existent options
+ self.cf.deprecate_option(
+ 'foo') # we can deprecate non-existent options
# testing warning with catch_warning was only added in 2.6
- if sys.version_info[:2]<(2,6):
+ if sys.version_info[:2] < (2, 6):
raise nose.SkipTest()
self.assertTrue(self.cf._is_deprecated('foo'))
@@ -208,7 +220,8 @@ def test_deprecate_option(self):
self.fail("Nonexistent option didn't raise KeyError")
self.assertEqual(len(w), 1) # should have raised one warning
- self.assertTrue('deprecated' in str(w[-1])) # we get the default message
+ self.assertTrue(
+ 'deprecated' in str(w[-1])) # we get the default message
self.cf.register_option('a', 1, 'doc', validator=self.cf.is_int)
self.cf.register_option('b.c', 'hullo', 'doc2')
@@ -220,10 +233,13 @@ def test_deprecate_option(self):
self.cf.get_option('a')
self.assertEqual(len(w), 1) # should have raised one warning
- self.assertTrue('eprecated' in str(w[-1])) # we get the default message
- self.assertTrue('nifty_ver' in str(w[-1])) # with the removal_ver quoted
+ self.assertTrue(
+ 'eprecated' in str(w[-1])) # we get the default message
+ self.assertTrue(
+ 'nifty_ver' in str(w[-1])) # with the removal_ver quoted
- self.assertRaises(KeyError, self.cf.deprecate_option, 'a') # can't depr. twice
+ self.assertRaises(
+ KeyError, self.cf.deprecate_option, 'a') # can't depr. twice
self.cf.deprecate_option('b.c', 'zounds!')
with warnings.catch_warnings(record=True) as w:
@@ -231,7 +247,8 @@ def test_deprecate_option(self):
self.cf.get_option('b.c')
self.assertEqual(len(w), 1) # should have raised one warning
- self.assertTrue('zounds!' in str(w[-1])) # we get the custom message
+ self.assertTrue(
+ 'zounds!' in str(w[-1])) # we get the custom message
# test rerouting keys
self.cf.register_option('d.a', 'foo', 'doc2')
@@ -245,38 +262,43 @@ def test_deprecate_option(self):
self.assertEqual(self.cf.get_option('d.dep'), 'foo')
self.assertEqual(len(w), 1) # should have raised one warning
- self.assertTrue('eprecated' in str(w[-1])) # we get the custom message
+ self.assertTrue(
+ 'eprecated' in str(w[-1])) # we get the custom message
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter('always')
self.cf.set_option('d.dep', 'baz') # should overwrite "d.a"
self.assertEqual(len(w), 1) # should have raised one warning
- self.assertTrue('eprecated' in str(w[-1])) # we get the custom message
+ self.assertTrue(
+ 'eprecated' in str(w[-1])) # we get the custom message
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter('always')
self.assertEqual(self.cf.get_option('d.dep'), 'baz')
self.assertEqual(len(w), 1) # should have raised one warning
- self.assertTrue('eprecated' in str(w[-1])) # we get the custom message
+ self.assertTrue(
+ 'eprecated' in str(w[-1])) # we get the custom message
def test_config_prefix(self):
with self.cf.config_prefix("base"):
- self.cf.register_option('a',1,"doc1")
- self.cf.register_option('b',2,"doc2")
+ self.cf.register_option('a', 1, "doc1")
+ self.cf.register_option('b', 2, "doc2")
self.assertEqual(self.cf.get_option('a'), 1)
self.assertEqual(self.cf.get_option('b'), 2)
- self.cf.set_option('a',3)
- self.cf.set_option('b',4)
+ self.cf.set_option('a', 3)
+ self.cf.set_option('b', 4)
self.assertEqual(self.cf.get_option('a'), 3)
self.assertEqual(self.cf.get_option('b'), 4)
self.assertEqual(self.cf.get_option('base.a'), 3)
self.assertEqual(self.cf.get_option('base.b'), 4)
- self.assertTrue('doc1' in self.cf.describe_option('base.a',_print_desc=False))
- self.assertTrue('doc2' in self.cf.describe_option('base.b',_print_desc=False))
+ self.assertTrue(
+ 'doc1' in self.cf.describe_option('base.a', _print_desc=False))
+ self.assertTrue(
+ 'doc2' in self.cf.describe_option('base.b', _print_desc=False))
self.cf.reset_option('base.a')
self.cf.reset_option('base.b')
@@ -286,75 +308,78 @@ def test_config_prefix(self):
self.assertEqual(self.cf.get_option('b'), 2)
def test_callback(self):
- k=[None]
- v=[None]
+ k = [None]
+ v = [None]
+
def callback(key):
k.append(key)
v.append(self.cf.get_option(key))
- self.cf.register_option('d.a', 'foo',cb=callback)
- self.cf.register_option('d.b', 'foo',cb=callback)
+ self.cf.register_option('d.a', 'foo', cb=callback)
+ self.cf.register_option('d.b', 'foo', cb=callback)
- del k[-1],v[-1]
- self.cf.set_option("d.a","fooz")
- self.assertEqual(k[-1],"d.a")
- self.assertEqual(v[-1],"fooz")
+ del k[-1], v[-1]
+ self.cf.set_option("d.a", "fooz")
+ self.assertEqual(k[-1], "d.a")
+ self.assertEqual(v[-1], "fooz")
- del k[-1],v[-1]
- self.cf.set_option("d.b","boo")
- self.assertEqual(k[-1],"d.b")
- self.assertEqual(v[-1],"boo")
+ del k[-1], v[-1]
+ self.cf.set_option("d.b", "boo")
+ self.assertEqual(k[-1], "d.b")
+ self.assertEqual(v[-1], "boo")
- del k[-1],v[-1]
+ del k[-1], v[-1]
self.cf.reset_option("d.b")
- self.assertEqual(k[-1],"d.b")
-
+ self.assertEqual(k[-1], "d.b")
def test_set_ContextManager(self):
def eq(val):
- self.assertEqual(self.cf.get_option("a"),val)
+ self.assertEqual(self.cf.get_option("a"), val)
- self.cf.register_option('a',0)
+ self.cf.register_option('a', 0)
eq(0)
- with self.cf.option_context("a",15):
+ with self.cf.option_context("a", 15):
eq(15)
- with self.cf.option_context("a",25):
+ with self.cf.option_context("a", 25):
eq(25)
eq(15)
eq(0)
- self.cf.set_option("a",17)
+ self.cf.set_option("a", 17)
eq(17)
def test_attribute_access(self):
holder = []
+
def f():
- options.b=1
+ options.b = 1
+
def f2():
- options.display=1
+ options.display = 1
+
def f3(key):
holder.append(True)
- self.cf.register_option('a',0)
- self.cf.register_option('c',0,cb=f3)
- options=self.cf.options
+ self.cf.register_option('a', 0)
+ self.cf.register_option('c', 0, cb=f3)
+ options = self.cf.options
- self.assertEqual(options.a,0)
- with self.cf.option_context("a",15):
- self.assertEqual(options.a,15)
+ self.assertEqual(options.a, 0)
+ with self.cf.option_context("a", 15):
+ self.assertEqual(options.a, 15)
- options.a=500
- self.assertEqual(self.cf.get_option("a"),500)
+ options.a = 500
+ self.assertEqual(self.cf.get_option("a"), 500)
self.cf.reset_option("a")
- self.assertEqual(options.a, self.cf.get_option("a",0))
+ self.assertEqual(options.a, self.cf.get_option("a", 0))
- self.assertRaises(KeyError,f)
- self.assertRaises(KeyError,f2)
+ self.assertRaises(KeyError, f)
+ self.assertRaises(KeyError, f2)
# make sure callback kicks when using this form of setting
options.c = 1
- self.assertEqual(len(holder),1)
+ self.assertEqual(len(holder), 1)
# fmt.reset_printoptions and fmt.set_printoptions were altered
# to use core.config, test_format exercises those paths.
diff --git a/pandas/tests/test_factor.py b/pandas/tests/test_factor.py
index 550225318e43f..de2fcaa94b59d 100644
--- a/pandas/tests/test_factor.py
+++ b/pandas/tests/test_factor.py
@@ -17,9 +17,10 @@
class TestCategorical(unittest.TestCase):
_multiprocess_can_split_ = True
+
def setUp(self):
self.factor = Categorical.from_array(['a', 'b', 'b', 'a',
- 'a', 'c', 'c', 'c'])
+ 'a', 'c', 'c', 'c'])
def test_getitem(self):
self.assertEqual(self.factor[0], 'a')
@@ -112,6 +113,6 @@ def test_na_flags_int_levels(self):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
- # '--with-coverage', '--cover-package=pandas.core'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
+ # '--with-coverage', '--cover-package=pandas.core'],
exit=False)
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 1de2becc709a5..1488f4a1f0f90 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -26,12 +26,15 @@
_frame = DataFrame(tm.getSeriesData())
+
def curpath():
pth, _ = os.path.split(os.path.abspath(__file__))
return pth
+
class TestDataFrameFormatting(unittest.TestCase):
_multiprocess_can_split_ = True
+
def setUp(self):
self.warn_filters = warnings.filters
warnings.filterwarnings('ignore',
@@ -70,7 +73,7 @@ def test_eng_float_formatter(self):
def test_repr_tuples(self):
buf = StringIO()
- df = DataFrame({'tups' : zip(range(10), range(10))})
+ df = DataFrame({'tups': zip(range(10), range(10))})
repr(df)
df.to_string(col_space=10, buf=buf)
@@ -78,8 +81,8 @@ def test_repr_truncation(self):
max_len = 20
with option_context("display.max_colwidth", max_len):
df = DataFrame({'A': np.random.randn(10),
- 'B': [tm.rands(np.random.randint(max_len - 1,
- max_len + 1)) for i in range(10)]})
+ 'B': [tm.rands(np.random.randint(max_len - 1,
+ max_len + 1)) for i in range(10)]})
r = repr(df)
r = r[r.find('\n') + 1:]
@@ -97,7 +100,7 @@ def test_repr_truncation(self):
with option_context("display.max_colwidth", max_len + 2):
self.assert_('...' not in repr(df))
- def test_repr_should_return_str (self):
+ def test_repr_should_return_str(self):
"""
http://docs.python.org/py3k/reference/datamodel.html#object.__repr__
http://docs.python.org/reference/datamodel.html#object.__repr__
@@ -106,24 +109,23 @@ def test_repr_should_return_str (self):
(str on py2.x, str (unicode) on py3)
"""
- data=[8,5,3,5]
- index1=[u"\u03c3",u"\u03c4",u"\u03c5",u"\u03c6"]
- cols=[u"\u03c8"]
- df=DataFrame(data,columns=cols,index=index1)
- self.assertTrue(type(df.__repr__() == str)) # both py2 / 3
+ data = [8, 5, 3, 5]
+ index1 = [u"\u03c3", u"\u03c4", u"\u03c5", u"\u03c6"]
+ cols = [u"\u03c8"]
+ df = DataFrame(data, columns=cols, index=index1)
+        self.assertTrue(type(df.__repr__()) == str)  # both py2 / 3
def test_repr_no_backslash(self):
with option_context('mode.sim_interactive', True):
df = DataFrame(np.random.randn(10, 4))
self.assertTrue('\\' not in repr(df))
-
def test_to_string_repr_unicode(self):
buf = StringIO()
unicode_values = [u'\u03c3'] * 10
unicode_values = np.array(unicode_values, dtype=object)
- df = DataFrame({'unicode' : unicode_values})
+ df = DataFrame({'unicode': unicode_values})
df.to_string(col_space=10, buf=buf)
# it works!
@@ -146,7 +148,7 @@ def test_to_string_repr_unicode(self):
sys.stdin = sys.__stdin__
def test_to_string_unicode_columns(self):
- df = DataFrame({u'\u03c3' : np.arange(10.)})
+ df = DataFrame({u'\u03c3': np.arange(10.)})
buf = StringIO()
df.to_string(buf=buf)
@@ -163,7 +165,7 @@ def test_to_string_utf8_columns(self):
n = u"\u05d0".encode('utf-8')
with option_context('display.max_rows', 1):
- df = pd.DataFrame([1,2], columns=[n])
+ df = pd.DataFrame([1, 2], columns=[n])
repr(df)
def test_to_string_unicode_two(self):
@@ -179,8 +181,8 @@ def test_to_string_unicode_three(self):
def test_to_string_with_formatters(self):
df = DataFrame({'int': [1, 2, 3],
'float': [1.0, 2.0, 3.0],
- 'object': [(1,2), True, False]},
- columns=['int', 'float', 'object'])
+ 'object': [(1, 2), True, False]},
+ columns=['int', 'float', 'object'])
formatters = [('int', lambda x: '0x%x' % x),
('float', lambda x: '[% 4.1f]' % x),
@@ -194,18 +196,18 @@ def test_to_string_with_formatters(self):
self.assertEqual(result, result2)
def test_to_string_with_formatters_unicode(self):
- df = DataFrame({u'c/\u03c3':[1,2,3]})
+ df = DataFrame({u'c/\u03c3': [1, 2, 3]})
result = df.to_string(formatters={u'c/\u03c3': lambda x: '%s' % x})
self.assertEqual(result, (u' c/\u03c3\n'
- '0 1\n'
- '1 2\n'
- '2 3'))
+ '0 1\n'
+ '1 2\n'
+ '2 3'))
def test_to_string_buffer_all_unicode(self):
buf = StringIO()
- empty = DataFrame({u'c/\u03c3':Series()})
- nonempty = DataFrame({u'c/\u03c3':Series([1,2,3])})
+ empty = DataFrame({u'c/\u03c3': Series()})
+ nonempty = DataFrame({u'c/\u03c3': Series([1, 2, 3])})
print >>buf, empty
print >>buf, nonempty
@@ -214,34 +216,34 @@ def test_to_string_buffer_all_unicode(self):
buf.getvalue()
def test_to_string_with_col_space(self):
- df = DataFrame(np.random.random(size=(1,3)))
- c10=len(df.to_string(col_space=10).split("\n")[1])
- c20=len(df.to_string(col_space=20).split("\n")[1])
- c30=len(df.to_string(col_space=30).split("\n")[1])
- self.assertTrue( c10 < c20 < c30 )
+ df = DataFrame(np.random.random(size=(1, 3)))
+ c10 = len(df.to_string(col_space=10).split("\n")[1])
+ c20 = len(df.to_string(col_space=20).split("\n")[1])
+ c30 = len(df.to_string(col_space=30).split("\n")[1])
+ self.assertTrue(c10 < c20 < c30)
def test_to_html_with_col_space(self):
- def check_with_width(df,col_space):
+ def check_with_width(df, col_space):
import re
# check that col_space affects HTML generation
# and be very brittle about it.
html = df.to_html(col_space=col_space)
- hdrs = [x for x in html.split("\n") if re.search("<th[>\s]",x)]
- self.assertTrue(len(hdrs) > 0 )
+ hdrs = [x for x in html.split("\n") if re.search("<th[>\s]", x)]
+ self.assertTrue(len(hdrs) > 0)
for h in hdrs:
- self.assertTrue("min-width" in h )
- self.assertTrue(str(col_space) in h )
+ self.assertTrue("min-width" in h)
+ self.assertTrue(str(col_space) in h)
- df = DataFrame(np.random.random(size=(1,3)))
+ df = DataFrame(np.random.random(size=(1, 3)))
- check_with_width(df,30)
- check_with_width(df,50)
+ check_with_width(df, 30)
+ check_with_width(df, 50)
def test_to_html_unicode(self):
# it works!
- df = DataFrame({u'\u03c3' : np.arange(10.)})
+ df = DataFrame({u'\u03c3': np.arange(10.)})
df.to_html()
- df = DataFrame({'A' : [u'\u03c3']})
+ df = DataFrame({'A': [u'\u03c3']})
df.to_html()
def test_to_html_multiindex_sparsify(self):
@@ -386,7 +388,6 @@ def test_to_html_index_formatter(self):
</table>"""
self.assertEquals(result, expected)
-
def test_nonunicode_nonascii_alignment(self):
df = DataFrame([["aa\xc3\xa4\xc3\xa4", 1], ["bbbb", 2]])
rep_str = df.to_string()
@@ -394,19 +395,19 @@ def test_nonunicode_nonascii_alignment(self):
self.assert_(len(lines[1]) == len(lines[2]))
def test_unicode_problem_decoding_as_ascii(self):
- dm = DataFrame({u'c/\u03c3': Series({'test':np.NaN})})
+ dm = DataFrame({u'c/\u03c3': Series({'test': np.NaN})})
unicode(dm.to_string())
def test_string_repr_encoding(self):
pth = curpath()
filepath = os.path.join(pth, 'data', 'unicode_series.csv')
- df = pandas.read_csv(filepath, header=None,encoding='latin1')
+ df = pandas.read_csv(filepath, header=None, encoding='latin1')
repr(df)
repr(df[1])
def test_repr_corner(self):
# representing infs poses no problems
- df = DataFrame({'foo' : np.inf * np.empty(10)})
+ df = DataFrame({'foo': np.inf * np.empty(10)})
foo = repr(df)
def test_frame_info_encoding(self):
@@ -524,10 +525,10 @@ def test_wide_repr_unicode(self):
reset_option('display.expand_frame_repr')
-
def test_wide_repr_wide_long_columns(self):
with option_context('mode.sim_interactive', True):
- df = DataFrame({'a': ['a'*30, 'b'*30], 'b': ['c'*70, 'd'*80]})
+ df = DataFrame(
+ {'a': ['a' * 30, 'b' * 30], 'b': ['c' * 70, 'd' * 80]})
result = repr(df)
self.assertTrue('ccccc' in result)
@@ -538,9 +539,9 @@ def test_to_string(self):
import re
# big mixed
- biggie = DataFrame({'A' : randn(200),
- 'B' : tm.makeStringIndex(200)},
- index=range(200))
+ biggie = DataFrame({'A': randn(200),
+ 'B': tm.makeStringIndex(200)},
+ index=range(200))
biggie['A'][:20] = nan
biggie['B'][:20] = nan
@@ -575,7 +576,7 @@ def test_to_string(self):
self.assertEqual(header, expected)
biggie.to_string(columns=['B', 'A'],
- formatters={'A' : lambda x: '%.1f' % x})
+ formatters={'A': lambda x: '%.1f' % x})
biggie.to_string(columns=['B', 'A'], float_format=str)
biggie.to_string(columns=['B', 'A'], col_space=12,
@@ -585,8 +586,8 @@ def test_to_string(self):
frame.to_string()
def test_to_string_no_header(self):
- df = DataFrame({'x' : [1, 2, 3],
- 'y' : [4, 5, 6]})
+ df = DataFrame({'x': [1, 2, 3],
+ 'y': [4, 5, 6]})
df_s = df.to_string(header=False)
expected = "0 1 4\n1 2 5\n2 3 6"
@@ -594,8 +595,8 @@ def test_to_string_no_header(self):
assert(df_s == expected)
def test_to_string_no_index(self):
- df = DataFrame({'x' : [1, 2, 3],
- 'y' : [4, 5, 6]})
+ df = DataFrame({'x': [1, 2, 3],
+ 'y': [4, 5, 6]})
df_s = df.to_string(index=False)
expected = " x y\n 1 4\n 2 5\n 3 6"
@@ -607,13 +608,13 @@ def test_to_string_float_formatting(self):
fmt.set_printoptions(precision=6, column_space=12,
notebook_repr_html=False)
- df = DataFrame({'x' : [0, 0.25, 3456.000, 12e+45, 1.64e+6,
- 1.7e+8, 1.253456, np.pi, -1e6]})
+ df = DataFrame({'x': [0, 0.25, 3456.000, 12e+45, 1.64e+6,
+ 1.7e+8, 1.253456, np.pi, -1e6]})
df_s = df.to_string()
# Python 2.5 just wants me to be sad. And debian 32-bit
- #sys.version_info[0] == 2 and sys.version_info[1] < 6:
+ # sys.version_info[0] == 2 and sys.version_info[1] < 6:
if _three_digit_exp():
expected = (' x\n0 0.00000e+000\n1 2.50000e-001\n'
'2 3.45600e+003\n3 1.20000e+046\n4 1.64000e+006\n'
@@ -626,7 +627,7 @@ def test_to_string_float_formatting(self):
'8 -1.00000e+06')
assert(df_s == expected)
- df = DataFrame({'x' : [3234, 0.253]})
+ df = DataFrame({'x': [3234, 0.253]})
df_s = df.to_string()
expected = (' x\n'
@@ -640,7 +641,7 @@ def test_to_string_float_formatting(self):
df = DataFrame({'x': [1e9, 0.2512]})
df_s = df.to_string()
# Python 2.5 just wants me to be sad. And debian 32-bit
- #sys.version_info[0] == 2 and sys.version_info[1] < 6:
+ # sys.version_info[0] == 2 and sys.version_info[1] < 6:
if _three_digit_exp():
expected = (' x\n'
'0 1.000000e+009\n'
@@ -701,7 +702,7 @@ def test_to_string_ascii_error(self):
repr(df)
def test_to_string_int_formatting(self):
- df = DataFrame({'x' : [-15, 20, 25, -35]})
+ df = DataFrame({'x': [-15, 20, 25, -35]})
self.assert_(issubclass(df['x'].dtype.type, np.integer))
output = df.to_string()
@@ -727,7 +728,7 @@ def test_to_string_index_formatter(self):
def test_to_string_left_justify_cols(self):
fmt.reset_printoptions()
- df = DataFrame({'x' : [3234, 0.253]})
+ df = DataFrame({'x': [3234, 0.253]})
df_s = df.to_string(justify='left')
expected = (' x \n'
'0 3234.000\n'
@@ -736,8 +737,8 @@ def test_to_string_left_justify_cols(self):
def test_to_string_format_na(self):
fmt.reset_printoptions()
- df = DataFrame({'A' : [np.nan, -1, -2.1234, 3, 4],
- 'B' : [np.nan, 'foo', 'foooo', 'fooooo', 'bar']})
+ df = DataFrame({'A': [np.nan, -1, -2.1234, 3, 4],
+ 'B': [np.nan, 'foo', 'foooo', 'fooooo', 'bar']})
result = df.to_string()
expected = (' A B\n'
@@ -748,8 +749,8 @@ def test_to_string_format_na(self):
'4 4.0000 bar')
self.assertEqual(result, expected)
- df = DataFrame({'A' : [np.nan, -1., -2., 3., 4.],
- 'B' : [np.nan, 'foo', 'foooo', 'fooooo', 'bar']})
+ df = DataFrame({'A': [np.nan, -1., -2., 3., 4.],
+ 'B': [np.nan, 'foo', 'foooo', 'fooooo', 'bar']})
result = df.to_string()
expected = (' A B\n'
@@ -762,9 +763,9 @@ def test_to_string_format_na(self):
def test_to_html(self):
# big mixed
- biggie = DataFrame({'A' : randn(200),
- 'B' : tm.makeStringIndex(200)},
- index=range(200))
+ biggie = DataFrame({'A': randn(200),
+ 'B': tm.makeStringIndex(200)},
+ index=range(200))
biggie['A'][:20] = nan
biggie['B'][:20] = nan
@@ -779,7 +780,7 @@ def test_to_html(self):
biggie.to_html(columns=['B', 'A'], col_space=17)
biggie.to_html(columns=['B', 'A'],
- formatters={'A' : lambda x: '%.1f' % x})
+ formatters={'A': lambda x: '%.1f' % x})
biggie.to_html(columns=['B', 'A'], float_format=str)
biggie.to_html(columns=['B', 'A'], col_space=12,
@@ -958,12 +959,12 @@ def test_to_html_index(self):
'B': [1.2, 3.4, 5.6],
'C': ['one', 'two', np.NaN]},
columns=['A', 'B', 'C'],
- index = index)
+ index=index)
result = df.to_html(index=False)
for i in index:
self.assert_(i not in result)
- tuples = [('foo', 'car'), ('foo', 'bike'), ('bar' ,'car')]
+ tuples = [('foo', 'car'), ('foo', 'bike'), ('bar', 'car')]
df.index = pandas.MultiIndex.from_tuples(tuples)
result = df.to_html(index=False)
for i in ['foo', 'bar', 'car', 'bike']:
@@ -982,9 +983,9 @@ def test_repr_html(self):
def test_fake_qtconsole_repr_html(self):
def get_ipython():
- return {'config' :
- {'KernelApp' :
- {'parent_appname' : 'ipython-qtconsole'}}}
+ return {'config':
+ {'KernelApp':
+ {'parent_appname': 'ipython-qtconsole'}}}
repstr = self.frame._repr_html_()
self.assert_(repstr is not None)
@@ -1027,7 +1028,7 @@ def test_float_trim_zeros(self):
skip = False
def test_dict_entries(self):
- df = DataFrame({'A': [{'a':1, 'b':2}]})
+ df = DataFrame({'A': [{'a': 1, 'b': 2}]})
val = df.to_string()
self.assertTrue("'a': 1" in val)
@@ -1037,8 +1038,10 @@ def test_to_latex(self):
# it works!
self.frame.to_latex()
+
class TestSeriesFormatting(unittest.TestCase):
_multiprocess_can_split_ = True
+
def setUp(self):
self.ts = tm.makeTimeSeries()
@@ -1126,9 +1129,9 @@ def test_to_string_float_na_spacing(self):
self.assertEqual(result, expected)
def test_unicode_name_in_footer(self):
- s=Series([1,2],name=u'\u05e2\u05d1\u05e8\u05d9\u05ea')
- sf=fmt.SeriesFormatter(s,name=u'\u05e2\u05d1\u05e8\u05d9\u05ea')
- sf._get_footer() # should not raise exception
+ s = Series([1, 2], name=u'\u05e2\u05d1\u05e8\u05d9\u05ea')
+ sf = fmt.SeriesFormatter(s, name=u'\u05e2\u05d1\u05e8\u05d9\u05ea')
+ sf._get_footer() # should not raise exception
def test_float_trim_zeros(self):
vals = [2.08430917305e+10, 3.52205017305e+10, 2.30674817305e+10,
@@ -1141,8 +1144,8 @@ def test_float_trim_zeros(self):
def test_timedelta64(self):
Series(np.array([1100, 20], dtype='timedelta64[s]')).to_string()
- #check this works
- #GH2146
+ # check this works
+ # GH2146
def test_mixed_datetime64(self):
df = DataFrame({'A': [1, 2],
@@ -1152,10 +1155,12 @@ def test_mixed_datetime64(self):
result = repr(df.ix[0])
self.assertTrue('2012-01-01' in result)
+
class TestEngFormatter(unittest.TestCase):
_multiprocess_can_split_ = True
+
def test_eng_float_formatter(self):
- df = DataFrame({'A' : [1.41, 141., 14100, 1410000.]})
+ df = DataFrame({'A': [1.41, 141., 14100, 1410000.]})
fmt.set_eng_float_format()
result = df.to_string()
@@ -1351,9 +1356,11 @@ def test_rounding(self):
result = formatter(0)
self.assertEqual(result, u' 0.000')
+
def _three_digit_exp():
return '%.4g' % 1.7e8 == '1.7e+008'
+
class TestFloatArrayFormatter(unittest.TestCase):
def test_misc(self):
@@ -1362,5 +1369,5 @@ def test_misc(self):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 81f4055abcf7b..80a55db84f0e8 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -35,6 +35,7 @@
from numpy.testing.decorators import slow
+
def _skip_if_no_scipy():
try:
import scipy.stats
@@ -46,6 +47,7 @@ def _skip_if_no_scipy():
JOIN_TYPES = ['inner', 'outer', 'left', 'right']
+
class CheckIndexing(object):
_multiprocess_can_split_ = True
@@ -69,7 +71,7 @@ def test_getitem(self):
self.assertRaises(Exception, self.frame.__getitem__, 'random')
def test_getitem_dupe_cols(self):
- df=DataFrame([[1,2,3],[4,5,6]],columns=['a','a','b'])
+ df = DataFrame([[1, 2, 3], [4, 5, 6]], columns=['a', 'a', 'b'])
try:
df[['baf']]
except KeyError:
@@ -159,13 +161,14 @@ def test_getitem_boolean(self):
# test df[df >0] works
bif = self.tsframe[self.tsframe > 0]
bifw = DataFrame(np.where(self.tsframe > 0, self.tsframe, np.nan),
- index=self.tsframe.index,columns=self.tsframe.columns)
- self.assert_(isinstance(bif,DataFrame))
+ index=self.tsframe.index, columns=self.tsframe.columns)
+ self.assert_(isinstance(bif, DataFrame))
self.assert_(bif.shape == self.tsframe.shape)
- assert_frame_equal(bif,bifw)
+ assert_frame_equal(bif, bifw)
def test_getitem_boolean_list(self):
- df = DataFrame(np.arange(12).reshape(3,4))
+ df = DataFrame(np.arange(12).reshape(3, 4))
+
def _checkit(lst):
result = df[lst]
expected = df.ix[df.index[lst]]
@@ -225,7 +228,7 @@ def test_getitem_setitem_ix_negative_integers(self):
self.assert_(isnull(df.ix[:, [-1]].values).all())
# #1942
- a = DataFrame(randn(20,2), index=[chr(x+65) for x in range(20)])
+ a = DataFrame(randn(20, 2), index=[chr(x + 65) for x in range(20)])
a.ix[-1] = a.ix[-2]
assert_series_equal(a.ix[-1], a.ix[-2])
@@ -236,7 +239,7 @@ def test_getattr(self):
'NONEXISTENT_NAME')
def test_setattr_column(self):
- df = DataFrame({'foobar' : 1}, index=range(10))
+ df = DataFrame({'foobar': 1}, index=range(10))
df.foobar = 5
self.assert_((df.foobar == 5).all())
@@ -247,12 +250,12 @@ def test_setitem(self):
self.frame['col5'] = series
self.assert_('col5' in self.frame)
tm.assert_dict_equal(series, self.frame['col5'],
- compare_keys=False)
+ compare_keys=False)
series = self.frame['A']
self.frame['col6'] = series
tm.assert_dict_equal(series, self.frame['col6'],
- compare_keys=False)
+ compare_keys=False)
self.assertRaises(Exception, self.frame.__setitem__,
randn(len(self.frame) + 1))
@@ -357,9 +360,9 @@ def test_setitem_boolean_column(self):
def test_setitem_corner(self):
# corner case
- df = DataFrame({'B' : [1., 2., 3.],
- 'C' : ['a', 'b', 'c']},
- index=np.arange(3))
+ df = DataFrame({'B': [1., 2., 3.],
+ 'C': ['a', 'b', 'c']},
+ index=np.arange(3))
del df['B']
df['B'] = [1., 2., 3.]
self.assert_('B' in df)
@@ -396,8 +399,8 @@ def test_setitem_corner(self):
self.assertEqual(dm['coercable'].dtype, np.object_)
def test_setitem_corner2(self):
- data = {"title" : ['foobar','bar','foobar'] + ['foobar'] * 17 ,
- "cruft" : np.random.random(20)}
+ data = {"title": ['foobar', 'bar', 'foobar'] + ['foobar'] * 17,
+ "cruft": np.random.random(20)}
df = DataFrame(data)
ix = df[df['title'] == 'bar'].index
@@ -405,8 +408,8 @@ def test_setitem_corner2(self):
df.ix[ix, ['title']] = 'foobar'
df.ix[ix, ['cruft']] = 0
- assert( df.ix[1, 'title'] == 'foobar' )
- assert( df.ix[1, 'cruft'] == 0 )
+ assert(df.ix[1, 'title'] == 'foobar')
+ assert(df.ix[1, 'cruft'] == 0)
def test_setitem_ambig(self):
# difficulties with mixed-type data
@@ -435,7 +438,7 @@ def test_setitem_ambig(self):
def test_setitem_clear_caches(self):
# GH #304
df = DataFrame({'x': [1.1, 2.1, 3.1, 4.1], 'y': [5.1, 6.1, 7.1, 8.1]},
- index=[0,1,2,3])
+ index=[0, 1, 2, 3])
df.insert(2, 'z', np.nan)
# cache it
@@ -628,15 +631,14 @@ def test_setitem_fancy_2d(self):
assert_frame_equal(frame, expected)
# new corner case of boolean slicing / setting
- frame = DataFrame(zip([2,3,9,6,7], [np.nan]*5),
- columns=['a','b'])
+ frame = DataFrame(zip([2, 3, 9, 6, 7], [np.nan] * 5),
+ columns=['a', 'b'])
lst = [100]
- lst.extend([np.nan]*4)
- expected = DataFrame(zip([100,3,9,6,7], lst), columns=['a','b'])
+ lst.extend([np.nan] * 4)
+ expected = DataFrame(zip([100, 3, 9, 6, 7], lst), columns=['a', 'b'])
frame[frame['a'] == 2] = 100
assert_frame_equal(frame, expected)
-
def test_fancy_getitem_slice_mixed(self):
sliced = self.mixed_frame.ix[:, -3:]
self.assert_(sliced['D'].dtype == np.float64)
@@ -804,7 +806,7 @@ def test_ix_assign_column_mixed(self):
def test_ix_multi_take(self):
df = DataFrame(np.random.randn(3, 2))
- rs = df.ix[df.index==0, :]
+ rs = df.ix[df.index == 0, :]
xp = df.reindex([0])
assert_frame_equal(rs, xp)
@@ -816,15 +818,15 @@ def test_ix_multi_take(self):
"""
def test_ix_multi_take_nonint_index(self):
- df = DataFrame(np.random.randn(3, 2), index=['x','y','z'],
- columns=['a','b'])
+ df = DataFrame(np.random.randn(3, 2), index=['x', 'y', 'z'],
+ columns=['a', 'b'])
rs = df.ix[[0], [0]]
xp = df.reindex(['x'], columns=['a'])
assert_frame_equal(rs, xp)
def test_ix_multi_take_multiindex(self):
- df = DataFrame(np.random.randn(3, 2), index=['x','y','z'],
- columns=[['a','b'], ['1','2']])
+ df = DataFrame(np.random.randn(3, 2), index=['x', 'y', 'z'],
+ columns=[['a', 'b'], ['1', '2']])
rs = df.ix[[0], [0]]
xp = df.reindex(['x'], columns=[('a', '1')])
assert_frame_equal(rs, xp)
@@ -842,7 +844,6 @@ def test_ix_dup(self):
sub = df.ix['b':'d']
assert_frame_equal(sub, df.ix[2:])
-
def test_getitem_fancy_1d(self):
f = self.frame
ix = f.ix
@@ -953,7 +954,7 @@ def test_setitem_fancy_scalar(self):
for idx in f.index[::5]:
i = f.index.get_loc(idx)
val = randn()
- expected.values[i,j] = val
+ expected.values[i, j] = val
ix[idx, col] = val
assert_frame_equal(f, expected)
@@ -998,8 +999,8 @@ def test_setitem_fancy_boolean(self):
assert_frame_equal(frame, expected)
def test_getitem_fancy_ints(self):
- result = self.frame.ix[[1,4,7]]
- expected = self.frame.ix[self.frame.index[[1,4,7]]]
+ result = self.frame.ix[[1, 4, 7]]
+ expected = self.frame.ix[self.frame.index[[1, 4, 7]]]
assert_frame_equal(result, expected)
result = self.frame.ix[:, [2, 0, 1]]
@@ -1081,33 +1082,35 @@ def test_setitem_single_column_mixed_datetime(self):
# check our dtypes
result = df.get_dtype_counts()
- expected = Series({ 'float64' : 3, 'datetime64[ns]' : 1})
+ expected = Series({'float64': 3, 'datetime64[ns]': 1})
assert_series_equal(result, expected)
# set an allowable datetime64 type
from pandas import tslib
- df.ix['b','timestamp'] = tslib.iNaT
- self.assert_(com.isnull(df.ix['b','timestamp']))
+ df.ix['b', 'timestamp'] = tslib.iNaT
+ self.assert_(com.isnull(df.ix['b', 'timestamp']))
# allow this syntax
- df.ix['c','timestamp'] = nan
- self.assert_(com.isnull(df.ix['c','timestamp']))
+ df.ix['c', 'timestamp'] = nan
+ self.assert_(com.isnull(df.ix['c', 'timestamp']))
# allow this syntax
- df.ix['d',:] = nan
- self.assert_(com.isnull(df.ix['c',:]).all() == False)
+ df.ix['d', :] = nan
+ self.assert_(com.isnull(df.ix['c', :]).all() == False)
# try to set with a list like item
- self.assertRaises(Exception, df.ix.__setitem__, ('d','timestamp'), [nan])
+ self.assertRaises(
+ Exception, df.ix.__setitem__, ('d', 'timestamp'), [nan])
# prior to 0.10.1 this failed
- #self.assertRaises(TypeError, df.ix.__setitem__, ('c','timestamp'), nan)
+ # self.assertRaises(TypeError, df.ix.__setitem__, ('c','timestamp'),
+ # nan)
def test_setitem_frame(self):
piece = self.frame.ix[:2, ['A', 'B']]
self.frame.ix[-2:, ['A', 'B']] = piece.values
assert_almost_equal(self.frame.ix[-2:, ['A', 'B']].values,
- piece.values)
+ piece.values)
piece = self.mixed_frame.ix[:2, ['A', 'B']]
f = self.mixed_frame.ix.__setitem__
@@ -1120,7 +1123,7 @@ def test_setitem_frame_align(self):
piece.columns = ['A', 'B']
self.frame.ix[-2:, ['A', 'B']] = piece
assert_almost_equal(self.frame.ix[-2:, ['A', 'B']].values,
- piece.values)
+ piece.values)
def test_setitem_fancy_exceptions(self):
pass
@@ -1177,7 +1180,7 @@ def test_getitem_setitem_ix_bool_keyerror(self):
def test_getitem_list_duplicates(self):
# #1943
- df = DataFrame(np.random.randn(4,4), columns=list('AABC'))
+ df = DataFrame(np.random.randn(4, 4), columns=list('AABC'))
df.columns.name = 'foo'
result = df[['B', 'C']]
@@ -1194,9 +1197,9 @@ def test_get_value(self):
assert_almost_equal(result, expected)
def test_iteritems(self):
- df=DataFrame([[1,2,3],[4,5,6]],columns=['a','a','b'])
- for k,v in df.iteritems():
- self.assertEqual(type(v),Series)
+ df = DataFrame([[1, 2, 3], [4, 5, 6]], columns=['a', 'a', 'b'])
+ for k, v in df.iteritems():
+ self.assertEqual(type(v), Series)
def test_lookup(self):
def alt(df, rows, cols):
@@ -1215,10 +1218,10 @@ def testit(df):
testit(self.mixed_frame)
testit(self.frame)
- df = DataFrame({'label' : ['a', 'b', 'a', 'c'],
- 'mask_a' : [True, True, False, True],
- 'mask_b' : [True, False, False, False],
- 'mask_c' : [False, True, False, True]})
+ df = DataFrame({'label': ['a', 'b', 'a', 'c'],
+ 'mask_a': [True, True, False, True],
+ 'mask_b': [True, False, False, False],
+ 'mask_c': [False, True, False, True]})
df['mask'] = df.lookup(df.index, 'mask_' + df['label'])
exp_mask = alt(df, df.index, 'mask_' + df['label'])
assert_almost_equal(df['mask'], exp_mask)
@@ -1260,7 +1263,7 @@ def test_set_value_resize(self):
self.assertRaises(ValueError, res3.set_value, 'foobar', 'baz', 'sam')
def test_set_value_with_index_dtype_change(self):
- df = DataFrame(randn(3,3), index=range(3), columns=list('ABC'))
+ df = DataFrame(randn(3, 3), index=range(3), columns=list('ABC'))
res = df.set_value('C', 2, 1.0)
self.assert_(list(res.index) == list(df.index) + ['C'])
self.assert_(list(res.columns) == list(df.columns) + [2])
@@ -1333,7 +1336,7 @@ def test_icol(self):
assert_frame_equal(result, expected)
def test_irow_icol_duplicates(self):
- df = DataFrame(np.random.rand(3,3), columns=list('ABC'),
+ df = DataFrame(np.random.rand(3, 3), columns=list('ABC'),
index=list('aab'))
result = df.irow(0)
@@ -1348,10 +1351,10 @@ def test_irow_icol_duplicates(self):
assert_almost_equal(result.values, df.values[0])
assert_series_equal(result, result2)
- #multiindex
+ # multiindex
df = DataFrame(np.random.randn(3, 3), columns=[['i', 'i', 'j'],
['A', 'A', 'B']],
- index = [['i', 'i', 'j'], ['X', 'X', 'Y']])
+ index=[['i', 'i', 'j'], ['X', 'X', 'Y']])
rs = df.irow(0)
xp = df.ix[0]
assert_series_equal(rs, xp)
@@ -1365,15 +1368,15 @@ def test_irow_icol_duplicates(self):
assert_frame_equal(rs, xp)
# #2259
- df = DataFrame([[1,2,3],[4,5,6]], columns=[1,1,2])
+ df = DataFrame([[1, 2, 3], [4, 5, 6]], columns=[1, 1, 2])
result = df.icol([0])
expected = df.take([0], axis=1)
assert_frame_equal(result, expected)
def test_icol_sparse_propegate_fill_value(self):
from pandas.sparse.api import SparseDataFrame
- df=SparseDataFrame({'A' : [999,1]},default_fill_value=999)
- self.assertTrue( len(df['A'].sp_values) == len(df.icol(0).sp_values))
+ df = SparseDataFrame({'A': [999, 1]}, default_fill_value=999)
+ self.assertTrue(len(df['A'].sp_values) == len(df.icol(0).sp_values))
def test_iget_value(self):
for i, row in enumerate(self.frame.index):
@@ -1387,15 +1390,16 @@ def test_nested_exception(self):
# (which may get fixed), it's just a way to trigger
# the issue or reraising an outer exception without
# a named argument
- df=DataFrame({"a":[1,2,3],"b":[4,5,6],"c":[7,8,9]}).set_index(["a","b"])
- l=list(df.index)
- l[0]=["a","b"]
- df.index=l
+ df = DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c": [7, 8,
+ 9]}).set_index(["a", "b"])
+ l = list(df.index)
+ l[0] = ["a", "b"]
+ df.index = l
try:
repr(df)
- except Exception,e:
- self.assertNotEqual(type(e),UnboundLocalError)
+ except Exception, e:
+ self.assertNotEqual(type(e), UnboundLocalError)
_seriesd = tm.getSeriesData()
_tsd = tm.getTimeSeriesData()
@@ -1410,6 +1414,7 @@ def test_nested_exception(self):
_mixed_frame = _frame.copy()
_mixed_frame['foo'] = 'bar'
+
class SafeForSparse(object):
_multiprocess_can_split_ = True
@@ -1552,10 +1557,10 @@ def setUp(self):
self.ts4 = tm.makeTimeSeries()[1:-1]
self.ts_dict = {
- 'col1' : self.ts1,
- 'col2' : self.ts2,
- 'col3' : self.ts3,
- 'col4' : self.ts4,
+ 'col1': self.ts1,
+ 'col2': self.ts2,
+ 'col3': self.ts3,
+ 'col4': self.ts4,
}
self.empty = DataFrame({})
@@ -1564,7 +1569,7 @@ def setUp(self):
[7., 8., 9.]])
self.simple = DataFrame(arr, columns=['one', 'two', 'three'],
- index=['a', 'b', 'c'])
+ index=['a', 'b', 'c'])
def test_get_axis(self):
self.assert_(DataFrame._get_axis_name(0) == 'index')
@@ -1590,16 +1595,16 @@ def test_set_index(self):
# cache it
_ = self.mixed_frame['foo']
self.mixed_frame.index = idx
- self.assert_(self.mixed_frame['foo'].index is idx)
+ self.assert_(self.mixed_frame['foo'].index is idx)
self.assertRaises(Exception, setattr, self.mixed_frame, 'index',
idx[::2])
def test_set_index2(self):
- df = DataFrame({'A' : ['foo', 'foo', 'foo', 'bar', 'bar'],
- 'B' : ['one', 'two', 'three', 'one', 'two'],
- 'C' : ['a', 'b', 'c', 'd', 'e'],
- 'D' : np.random.randn(5),
- 'E' : np.random.randn(5)})
+ df = DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar'],
+ 'B': ['one', 'two', 'three', 'one', 'two'],
+ 'C': ['a', 'b', 'c', 'd', 'e'],
+ 'D': np.random.randn(5),
+ 'E': np.random.randn(5)})
# new object, single-column
result = df.set_index('C')
@@ -1676,31 +1681,31 @@ def test_set_index2(self):
self.assertEqual(result.index.name, 'C')
def test_set_index_nonuniq(self):
- df = DataFrame({'A' : ['foo', 'foo', 'foo', 'bar', 'bar'],
- 'B' : ['one', 'two', 'three', 'one', 'two'],
- 'C' : ['a', 'b', 'c', 'd', 'e'],
- 'D' : np.random.randn(5),
- 'E' : np.random.randn(5)})
+ df = DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar'],
+ 'B': ['one', 'two', 'three', 'one', 'two'],
+ 'C': ['a', 'b', 'c', 'd', 'e'],
+ 'D': np.random.randn(5),
+ 'E': np.random.randn(5)})
self.assertRaises(Exception, df.set_index, 'A', verify_integrity=True,
inplace=True)
self.assert_('A' in df)
def test_set_index_bug(self):
- #GH1590
- df = DataFrame({'val' : [0, 1, 2], 'key': ['a', 'b', 'c']})
- df2 = df.select(lambda indx:indx>=1)
+ # GH1590
+ df = DataFrame({'val': [0, 1, 2], 'key': ['a', 'b', 'c']})
+ df2 = df.select(lambda indx: indx >= 1)
rs = df2.set_index('key')
xp = DataFrame({'val': [1, 2]},
Index(['b', 'c'], name='key'))
assert_frame_equal(rs, xp)
def test_set_index_pass_arrays(self):
- df = DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
- 'foo', 'bar', 'foo', 'foo'],
- 'B' : ['one', 'one', 'two', 'three',
- 'two', 'two', 'one', 'three'],
- 'C' : np.random.randn(8),
- 'D' : np.random.randn(8)})
+ df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
+ 'foo', 'bar', 'foo', 'foo'],
+ 'B': ['one', 'one', 'two', 'three',
+ 'two', 'two', 'one', 'three'],
+ 'C': np.random.randn(8),
+ 'D': np.random.randn(8)})
# multiple columns
result = df.set_index(['A', df['B'].values], drop=False)
@@ -1708,9 +1713,9 @@ def test_set_index_pass_arrays(self):
assert_frame_equal(result, expected)
def test_set_index_cast_datetimeindex(self):
- df = DataFrame({'A' : [datetime(2000, 1, 1) + timedelta(i)
- for i in range(1000)],
- 'B' : np.random.randn(1000)})
+ df = DataFrame({'A': [datetime(2000, 1, 1) + timedelta(i)
+ for i in range(1000)],
+ 'B': np.random.randn(1000)})
idf = df.set_index('A')
self.assert_(isinstance(idf.index, DatetimeIndex))
@@ -1727,11 +1732,11 @@ def test_set_index_multiindexcolumns(self):
def test_set_index_empty_column(self):
# #1971
df = DataFrame([
- dict(a=1, p=0),
- dict(a=2, m=10),
- dict(a=3, m=11, p=20),
- dict(a=4, m=12, p=21)
- ], columns=('a', 'm', 'p', 'x'))
+ dict(a=1, p=0),
+ dict(a=2, m=10),
+ dict(a=3, m=11, p=20),
+ dict(a=4, m=12, p=21)
+ ], columns=('a', 'm', 'p', 'x'))
# it works!
result = df.set_index(['a', 'x'])
@@ -1809,12 +1814,12 @@ def test_constructor_rec(self):
assert_frame_equal(df3, expected)
def test_constructor_bool(self):
- df = DataFrame({0 : np.ones(10, dtype=bool),
- 1 : np.zeros(10, dtype=bool)})
+ df = DataFrame({0: np.ones(10, dtype=bool),
+ 1: np.zeros(10, dtype=bool)})
self.assertEqual(df.values.dtype, np.bool_)
def test_constructor_overflow_int64(self):
- values = np.array([2**64 - i for i in range(1, 10)],
+ values = np.array([2 ** 64 - i for i in range(1, 10)],
dtype=np.uint64)
result = DataFrame({'a': values})
@@ -1825,12 +1830,11 @@ def test_constructor_overflow_int64(self):
(8921811264899370420, 45), (17019687244989530680L, 270),
(9930107427299601010L, 273)]
dtype = [('uid', 'u8'), ('score', 'u8')]
- data = np.zeros((len(data_scores),),dtype=dtype)
+ data = np.zeros((len(data_scores),), dtype=dtype)
data[:] = data_scores
df_crawls = DataFrame(data)
self.assert_(df_crawls['uid'].dtype == object)
-
def test_is_mixed_type(self):
self.assert_(not self.frame._is_mixed_type)
self.assert_(self.mixed_frame._is_mixed_type)
@@ -1840,20 +1844,20 @@ def test_constructor_ordereddict(self):
nitems = 100
nums = range(nitems)
random.shuffle(nums)
- expected=['A%d' %i for i in nums]
- df=DataFrame(OrderedDict(zip(expected,[[0]]*nitems)))
- self.assertEqual(expected,list(df.columns))
+ expected = ['A%d' % i for i in nums]
+ df = DataFrame(OrderedDict(zip(expected, [[0]] * nitems)))
+ self.assertEqual(expected, list(df.columns))
def test_constructor_dict(self):
- frame = DataFrame({'col1' : self.ts1,
- 'col2' : self.ts2})
+ frame = DataFrame({'col1': self.ts1,
+ 'col2': self.ts2})
tm.assert_dict_equal(self.ts1, frame['col1'], compare_keys=False)
tm.assert_dict_equal(self.ts2, frame['col2'], compare_keys=False)
- frame = DataFrame({'col1' : self.ts1,
- 'col2' : self.ts2},
- columns=['col2', 'col3', 'col4'])
+ frame = DataFrame({'col1': self.ts1,
+ 'col2': self.ts2},
+ columns=['col2', 'col3', 'col4'])
self.assertEqual(len(frame), len(self.ts2))
self.assert_('col1' not in frame)
@@ -1865,12 +1869,11 @@ def test_constructor_dict(self):
# mix dict and array, wrong size
self.assertRaises(Exception, DataFrame,
- {'A' : {'a' : 'a', 'b' : 'b'},
- 'B' : ['a', 'b', 'c']})
-
+ {'A': {'a': 'a', 'b': 'b'},
+ 'B': ['a', 'b', 'c']})
# Length-one dict micro-optimization
- frame = DataFrame({'A' : {'1' : 1, '2' : 2}})
+ frame = DataFrame({'A': {'1': 1, '2': 2}})
self.assert_(np.array_equal(frame.index, ['1', '2']))
# empty dict plus index
@@ -1886,7 +1889,7 @@ def test_constructor_dict(self):
self.assertEqual(len(frame._series), 3)
# with dict of empty list and Series
- frame = DataFrame({'A' : [], 'B' : []}, columns=['A', 'B'])
+ frame = DataFrame({'A': [], 'B': []}, columns=['A', 'B'])
self.assert_(frame.index.equals(Index([])))
def test_constructor_subclass_dict(self):
@@ -1915,15 +1918,15 @@ def test_constructor_subclass_dict(self):
def test_constructor_dict_block(self):
expected = [[4., 3., 2., 1.]]
- df = DataFrame({'d' : [4.],'c' : [3.],'b' : [2.],'a' : [1.]},
+ df = DataFrame({'d': [4.], 'c': [3.], 'b': [2.], 'a': [1.]},
columns=['d', 'c', 'b', 'a'])
assert_almost_equal(df.values, expected)
def test_constructor_dict_cast(self):
# cast float tests
test_data = {
- 'A' : {'1' : 1, '2' : 2},
- 'B' : {'1' : '1', '2' : '2', '3' : '3'},
+ 'A': {'1': 1, '2': 2},
+ 'B': {'1': '1', '2': '2', '3': '3'},
}
frame = DataFrame(test_data, dtype=float)
self.assertEqual(len(frame), 3)
@@ -1937,8 +1940,8 @@ def test_constructor_dict_cast(self):
# can't cast to float
test_data = {
- 'A' : dict(zip(range(20), tm.makeStringIndex(20))),
- 'B' : dict(zip(range(15), randn(15)))
+ 'A': dict(zip(range(20), tm.makeStringIndex(20))),
+ 'B': dict(zip(range(15), randn(15)))
}
frame = DataFrame(test_data, dtype=float)
self.assertEqual(len(frame), 20)
@@ -1950,7 +1953,7 @@ def test_constructor_dict_dont_upcast(self):
df = DataFrame(d)
self.assert_(isinstance(df['Col1']['Row2'], float))
- dm = DataFrame([[1,2],['a','b']], index=[1,2], columns=[1,2])
+ dm = DataFrame([[1, 2], ['a', 'b']], index=[1, 2], columns=[1, 2])
self.assert_(isinstance(dm[1][1], int))
def test_constructor_dict_of_tuples(self):
@@ -1972,7 +1975,7 @@ def test_constructor_ndarray(self):
# cast type
frame = DataFrame(mat, columns=['A', 'B', 'C'],
- index=[1, 2], dtype=int)
+ index=[1, 2], dtype=int)
self.assert_(frame.values.dtype == np.int64)
# 1-D input
@@ -2024,13 +2027,13 @@ def test_constructor_maskedarray(self):
# cast type
frame = DataFrame(mat, columns=['A', 'B', 'C'],
- index=[1, 2], dtype=int)
+ index=[1, 2], dtype=int)
self.assert_(frame.values.dtype == np.int64)
# Check non-masked values
mat2 = ma.copy(mat)
- mat2[0,0] = 1.0
- mat2[1,2] = 2.0
+ mat2[0, 0] = 1.0
+ mat2[1, 2] = 2.0
frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2])
self.assertEqual(1.0, frame['A'][1])
self.assertEqual(2.0, frame['C'][2])
@@ -2087,8 +2090,8 @@ def test_constructor_maskedarray_nonfloat(self):
# Check non-masked values
mat2 = ma.copy(mat)
- mat2[0,0] = 1
- mat2[1,2] = 2
+ mat2[0, 0] = 1
+ mat2[1, 2] = 2
frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2])
self.assertEqual(1, frame['A'][1])
self.assertEqual(2, frame['C'][2])
@@ -2104,13 +2107,13 @@ def test_constructor_maskedarray_nonfloat(self):
# cast type
frame = DataFrame(mat, columns=['A', 'B', 'C'],
- index=[1, 2], dtype=np.int64)
+ index=[1, 2], dtype=np.int64)
self.assert_(frame.values.dtype == np.int64)
# Check non-masked values
mat2 = ma.copy(mat)
- mat2[0,0] = 1
- mat2[1,2] = 2
+ mat2[0, 0] = 1
+ mat2[1, 2] = 2
frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2])
self.assertEqual(1, frame['A'].view('i8')[1])
self.assertEqual(2, frame['C'].view('i8')[2])
@@ -2126,13 +2129,13 @@ def test_constructor_maskedarray_nonfloat(self):
# cast type
frame = DataFrame(mat, columns=['A', 'B', 'C'],
- index=[1, 2], dtype=object)
+ index=[1, 2], dtype=object)
self.assert_(frame.values.dtype == object)
# Check non-masked values
mat2 = ma.copy(mat)
- mat2[0,0] = True
- mat2[1,2] = False
+ mat2[0, 0] = True
+ mat2[1, 2] = False
frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2])
self.assertEqual(True, frame['A'][1])
self.assertEqual(False, frame['C'][2])
@@ -2142,11 +2145,11 @@ def test_constructor_corner(self):
self.assertEqual(df.values.shape, (0, 0))
# empty but with specified dtype
- df = DataFrame(index=range(10), columns=['a','b'], dtype=object)
+ df = DataFrame(index=range(10), columns=['a', 'b'], dtype=object)
self.assert_(df.values.dtype == np.object_)
# does not error but ends up float
- df = DataFrame(index=range(10), columns=['a','b'], dtype=int)
+ df = DataFrame(index=range(10), columns=['a', 'b'], dtype=int)
self.assert_(df.values.dtype == np.object_)
# #1783 empty dtype object
@@ -2154,8 +2157,8 @@ def test_constructor_corner(self):
self.assert_(df.values.dtype == np.object_)
def test_constructor_scalar_inference(self):
- data = {'int' : 1, 'bool' : True,
- 'float' : 3., 'complex': 4j, 'object' : 'foo'}
+ data = {'int': 1, 'bool': True,
+ 'float': 3., 'complex': 4j, 'object': 'foo'}
df = DataFrame(data, index=np.arange(10))
self.assert_(df['int'].dtype == np.int64)
@@ -2212,9 +2215,9 @@ def test_constructor_more(self):
tm.assert_frame_equal(dm, self.frame)
# int cast
- dm = DataFrame({'A' : np.ones(10, dtype=int),
- 'B' : np.ones(10, dtype=float)},
- index=np.arange(10))
+ dm = DataFrame({'A': np.ones(10, dtype=int),
+ 'B': np.ones(10, dtype=float)},
+ index=np.arange(10))
self.assertEqual(len(dm.columns), 2)
self.assert_(dm.values.dtype == np.float64)
@@ -2232,12 +2235,12 @@ def test_constructor_list_of_lists(self):
self.assert_(df['str'].dtype == np.object_)
def test_constructor_list_of_dicts(self):
- data = [OrderedDict([['a', 1.5], ['b', 3], ['c',4], ['d',6]]),
- OrderedDict([['a', 1.5], ['b', 3], ['d',6]]),
- OrderedDict([['a', 1.5],[ 'd',6]]),
+ data = [OrderedDict([['a', 1.5], ['b', 3], ['c', 4], ['d', 6]]),
+ OrderedDict([['a', 1.5], ['b', 3], ['d', 6]]),
+ OrderedDict([['a', 1.5], ['d', 6]]),
OrderedDict(),
- OrderedDict([['a', 1.5], ['b', 3],[ 'c',4]]),
- OrderedDict([['b', 3], ['c',4], ['d',6]])]
+ OrderedDict([['a', 1.5], ['b', 3], ['c', 4]]),
+ OrderedDict([['b', 3], ['c', 4], ['d', 6]])]
result = DataFrame(data)
expected = DataFrame.from_dict(dict(zip(range(len(data)), data)),
@@ -2249,8 +2252,8 @@ def test_constructor_list_of_dicts(self):
assert_frame_equal(result, expected)
def test_constructor_list_of_series(self):
- data = [OrderedDict([['a', 1.5],[ 'b', 3.0],[ 'c',4.0]]),
- OrderedDict([['a', 1.5], ['b', 3.0],[ 'c',6.0]])]
+ data = [OrderedDict([['a', 1.5], ['b', 3.0], ['c', 4.0]]),
+ OrderedDict([['a', 1.5], ['b', 3.0], ['c', 6.0]])]
sdict = OrderedDict(zip(['x', 'y'], data))
idx = Index(['a', 'b', 'c'])
@@ -2271,12 +2274,12 @@ def test_constructor_list_of_series(self):
assert_frame_equal(result.sort_index(), expected)
# none named
- data = [OrderedDict([['a', 1.5], ['b', 3], ['c',4], ['d',6]]),
- OrderedDict([['a', 1.5], ['b', 3], ['d',6]]),
- OrderedDict([['a', 1.5],[ 'd',6]]),
+ data = [OrderedDict([['a', 1.5], ['b', 3], ['c', 4], ['d', 6]]),
+ OrderedDict([['a', 1.5], ['b', 3], ['d', 6]]),
+ OrderedDict([['a', 1.5], ['d', 6]]),
OrderedDict(),
- OrderedDict([['a', 1.5], ['b', 3],[ 'c',4]]),
- OrderedDict([['b', 3], ['c',4], ['d',6]])]
+ OrderedDict([['a', 1.5], ['b', 3], ['c', 4]]),
+ OrderedDict([['b', 3], ['c', 4], ['d', 6]])]
data = [Series(d) for d in data]
result = DataFrame(data)
@@ -2291,8 +2294,8 @@ def test_constructor_list_of_series(self):
expected = DataFrame(index=[0])
assert_frame_equal(result, expected)
- data = [OrderedDict([['a', 1.5],[ 'b', 3.0],[ 'c',4.0]]),
- OrderedDict([['a', 1.5], ['b', 3.0],[ 'c',6.0]])]
+ data = [OrderedDict([['a', 1.5], ['b', 3.0], ['c', 4.0]]),
+ OrderedDict([['a', 1.5], ['b', 3.0], ['c', 6.0]])]
sdict = OrderedDict(zip(range(len(data)), data))
idx = Index(['a', 'b', 'c'])
@@ -2315,14 +2318,14 @@ class CustomDict(dict):
assert_frame_equal(result, result_custom)
def test_constructor_ragged(self):
- data = {'A' : randn(10),
- 'B' : randn(8)}
+ data = {'A': randn(10),
+ 'B': randn(8)}
self.assertRaises(Exception, DataFrame, data)
def test_constructor_scalar(self):
idx = Index(range(3))
- df = DataFrame({"a" : 0}, index=idx)
- expected = DataFrame({"a" : [0, 0, 0]}, index=idx)
+ df = DataFrame({"a": 0}, index=idx)
+ expected = DataFrame({"a": [0, 0, 0]}, index=idx)
assert_frame_equal(df, expected)
def test_constructor_Series_copy_bug(self):
@@ -2331,7 +2334,7 @@ def test_constructor_Series_copy_bug(self):
def test_constructor_mixed_dict_and_Series(self):
data = {}
- data['A'] = {'foo' : 1, 'bar' : 2, 'baz' : 3}
+ data['A'] = {'foo': 1, 'bar': 2, 'baz': 3}
data['B'] = Series([4, 3, 2, 1], index=['bar', 'qux', 'baz', 'foo'])
result = DataFrame(data)
@@ -2339,12 +2342,12 @@ def test_constructor_mixed_dict_and_Series(self):
# ordering ambiguous, raise exception
self.assertRaises(Exception, DataFrame,
- {'A' : ['a', 'b'], 'B' : {'a' : 'a', 'b' : 'b'}})
+ {'A': ['a', 'b'], 'B': {'a': 'a', 'b': 'b'}})
# this is OK though
- result = DataFrame({'A' : ['a', 'b'],
- 'B' : Series(['a', 'b'], index=['a', 'b'])})
- expected = DataFrame({'A' : ['a', 'b'], 'B' : ['a', 'b']},
+ result = DataFrame({'A': ['a', 'b'],
+ 'B': Series(['a', 'b'], index=['a', 'b'])})
+ expected = DataFrame({'A': ['a', 'b'], 'B': ['a', 'b']},
index=['a', 'b'])
assert_frame_equal(result, expected)
@@ -2379,10 +2382,10 @@ def test_constructor_Series_named(self):
def test_constructor_Series_differently_indexed(self):
# name
- s1 = Series([1, 2, 3], index=['a','b','c'], name='x')
+ s1 = Series([1, 2, 3], index=['a', 'b', 'c'], name='x')
# no name
- s2 = Series([1, 2, 3], index=['a','b','c'])
+ s2 = Series([1, 2, 3], index=['a', 'b', 'c'])
other_index = Index(['a', 'b'])
@@ -2430,7 +2433,8 @@ def test_constructor_from_items(self):
orient='index')
# orient='index', but thar be tuples
- arr = lib.list_to_object_array([('bar', 'baz')] * len(self.mixed_frame))
+ arr = lib.list_to_object_array(
+ [('bar', 'baz')] * len(self.mixed_frame))
self.mixed_frame['foo'] = arr
row_items = [(idx, list(self.mixed_frame.xs(idx)))
for idx in self.mixed_frame.index]
@@ -2441,20 +2445,19 @@ def test_constructor_from_items(self):
self.assert_(isinstance(recons['foo'][0], tuple))
rs = DataFrame.from_items([('A', [1, 2, 3]), ('B', [4, 5, 6])],
- orient='index', columns=['one', 'two', 'three'])
+ orient='index', columns=['one', 'two', 'three'])
xp = DataFrame([[1, 2, 3], [4, 5, 6]], index=['A', 'B'],
columns=['one', 'two', 'three'])
assert_frame_equal(rs, xp)
-
def test_constructor_mix_series_nonseries(self):
- df = DataFrame({'A' : self.frame['A'],
- 'B' : list(self.frame['B'])}, columns=['A', 'B'])
+ df = DataFrame({'A': self.frame['A'],
+ 'B': list(self.frame['B'])}, columns=['A', 'B'])
assert_frame_equal(df, self.frame.ix[:, ['A', 'B']])
self.assertRaises(ValueError, DataFrame,
- {'A' : self.frame['A'],
- 'B' : list(self.frame['B'])[:-2]})
+ {'A': self.frame['A'],
+ 'B': list(self.frame['B'])[:-2]})
def test_constructor_miscast_na_int_dtype(self):
df = DataFrame([[np.nan, 1], [1, 0]], dtype=np.int64)
@@ -2469,28 +2472,30 @@ def test_constructor_column_duplicates(self):
assert_frame_equal(df, edf)
- idf = DataFrame.from_items([('a',[8]),('a',[5])], columns=['a','a'])
+ idf = DataFrame.from_items(
+ [('a', [8]), ('a', [5])], columns=['a', 'a'])
assert_frame_equal(idf, edf)
self.assertRaises(ValueError, DataFrame.from_items,
- [('a',[8]),('a',[5]), ('b', [6])],
- columns=['b', 'a','a'])
+ [('a', [8]), ('a', [5]), ('b', [6])],
+ columns=['b', 'a', 'a'])
def test_constructor_single_value(self):
- df = DataFrame(0., index=[1,2,3], columns=['a','b','c'])
+ df = DataFrame(0., index=[1, 2, 3], columns=['a', 'b', 'c'])
assert_frame_equal(df, DataFrame(np.zeros(df.shape), df.index,
df.columns))
- df = DataFrame('a', index=[1,2], columns=['a', 'c'])
+ df = DataFrame('a', index=[1, 2], columns=['a', 'c'])
assert_frame_equal(df, DataFrame(np.array([['a', 'a'],
['a', 'a']],
dtype=object),
- index=[1,2],
+ index=[1, 2],
columns=['a', 'c']))
- self.assertRaises(com.PandasError, DataFrame, 'a', [1,2])
- self.assertRaises(com.PandasError, DataFrame, 'a', columns=['a' ,'c'])
- self.assertRaises(com.PandasError, DataFrame, 'a', [1,2], ['a', 'c'], float)
+ self.assertRaises(com.PandasError, DataFrame, 'a', [1, 2])
+ self.assertRaises(com.PandasError, DataFrame, 'a', columns=['a', 'c'])
+ self.assertRaises(
+ com.PandasError, DataFrame, 'a', [1, 2], ['a', 'c'], float)
def test_new_empty_index(self):
df1 = DataFrame(randn(0, 3))
@@ -2537,8 +2542,8 @@ def test_pickle(self):
def test_to_dict(self):
test_data = {
- 'A' : {'1' : 1, '2' : 2},
- 'B' : {'1' : '1', '2' : '2', '3' : '3'},
+ 'A': {'1': 1, '2': 2},
+ 'B': {'1': '1', '2': '2', '3': '3'},
}
recons_data = DataFrame(test_data).to_dict()
@@ -2548,13 +2553,13 @@ def test_to_dict(self):
recons_data = DataFrame(test_data).to_dict("l")
- for k,v in test_data.iteritems():
+ for k, v in test_data.iteritems():
for k2, v2 in v.iteritems():
self.assertEqual(v2, recons_data[k][int(k2) - 1])
recons_data = DataFrame(test_data).to_dict("s")
- for k,v in test_data.iteritems():
+ for k, v in test_data.iteritems():
for k2, v2 in v.iteritems():
self.assertEqual(v2, recons_data[k][k2])
@@ -2709,8 +2714,8 @@ def test_to_records_dt64(self):
def test_from_records_to_records(self):
# from numpy documentation
- arr = np.zeros((2,),dtype=('i4,f4,a10'))
- arr[:] = [(1,2.,'Hello'),(2,3.,"World")]
+ arr = np.zeros((2,), dtype=('i4,f4,a10'))
+ arr[:] = [(1, 2., 'Hello'), (2, 3., "World")]
frame = DataFrame.from_records(arr)
@@ -2745,8 +2750,8 @@ def test_from_records_iterator(self):
arr = np.array([(1.0, 2), (3.0, 4), (5., 6), (7., 8)],
dtype=[('x', float), ('y', int)])
df = DataFrame.from_records(iter(arr), nrows=2)
- xp = DataFrame({'x' : np.array([1.0, 3.0], dtype=float),
- 'y' : np.array([2, 4], dtype=int)})
+ xp = DataFrame({'x': np.array([1.0, 3.0], dtype=float),
+ 'y': np.array([2, 4], dtype=int)})
assert_frame_equal(df, xp)
arr = [(1.0, 2), (3.0, 4), (5., 6), (7., 8)]
@@ -2777,10 +2782,10 @@ def test_from_records_decimal(self):
self.assert_(np.isnan(df['a'].values[-1]))
def test_from_records_duplicates(self):
- result = DataFrame.from_records([(1,2,3), (4,5,6)],
- columns=['a','b','a'])
+ result = DataFrame.from_records([(1, 2, 3), (4, 5, 6)],
+ columns=['a', 'b', 'a'])
- expected = DataFrame([(1,2,3), (4,5,6)],
+ expected = DataFrame([(1, 2, 3), (4, 5, 6)],
columns=['a', 'b', 'a'])
assert_frame_equal(result, expected)
@@ -2819,7 +2824,7 @@ def test_from_records_misc_brokenness(self):
assert_frame_equal(result, exp)
def test_to_records_floats(self):
- df = DataFrame(np.random.rand(10,10))
+ df = DataFrame(np.random.rand(10, 10))
df.to_records()
def test_to_recods_index_name(self):
@@ -2838,25 +2843,25 @@ def test_to_recods_index_name(self):
self.assert_('level_0' in rs.dtype.fields)
def test_join_str_datetime(self):
- str_dates = ['20120209' , '20120222']
- dt_dates = [datetime(2012,2,9), datetime(2012,2,22)]
+ str_dates = ['20120209', '20120222']
+ dt_dates = [datetime(2012, 2, 9), datetime(2012, 2, 22)]
A = DataFrame(str_dates, index=range(2), columns=['aa'])
- C = DataFrame([[1,2],[3,4]], index=str_dates, columns=dt_dates)
+ C = DataFrame([[1, 2], [3, 4]], index=str_dates, columns=dt_dates)
- tst = A.join(C, on = 'aa')
+ tst = A.join(C, on='aa')
self.assert_(len(tst.columns) == 3)
def test_from_records_sequencelike(self):
- df = DataFrame({'A' : np.random.randn(6),
- 'B' : np.arange(6),
- 'C' : ['foo'] * 6,
- 'D' : np.array([True, False] * 3, dtype=bool)})
+ df = DataFrame({'A': np.random.randn(6),
+ 'B': np.arange(6),
+ 'C': ['foo'] * 6,
+ 'D': np.array([True, False] * 3, dtype=bool)})
tuples = [tuple(x) for x in df.values]
lists = [list(x) for x in tuples]
- asdict = dict((x,y) for x, y in df.iteritems())
+ asdict = dict((x, y) for x, y in df.iteritems())
result = DataFrame.from_records(tuples, columns=df.columns)
result2 = DataFrame.from_records(lists, columns=df.columns)
@@ -2870,7 +2875,7 @@ def test_from_records_sequencelike(self):
self.assert_(np.array_equal(result.columns, range(4)))
# test exclude parameter
- result = DataFrame.from_records(tuples, exclude=[0,1,3])
+ result = DataFrame.from_records(tuples, exclude=[0, 1, 3])
result.columns = ['C']
assert_frame_equal(result, df[['C']])
@@ -2884,14 +2889,14 @@ def test_from_records_sequencelike(self):
self.assertEqual(len(result.columns), 0)
def test_from_records_with_index_data(self):
- df = DataFrame(np.random.randn(10,3), columns=['A', 'B', 'C'])
+ df = DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'])
data = np.random.randn(10)
df1 = DataFrame.from_records(df, index=data)
assert(df1.index.equals(Index(data)))
def test_from_records_bad_index_column(self):
- df = DataFrame(np.random.randn(10,3), columns=['A', 'B', 'C'])
+ df = DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'])
# should pass
df1 = DataFrame.from_records(df, index=['C'])
@@ -2939,9 +2944,9 @@ def test_nonzero(self):
self.assertFalse(self.mixed_frame.empty)
# corner case
- df = DataFrame({'A' : [1., 2., 3.],
- 'B' : ['a', 'b', 'c']},
- index=np.arange(3))
+ df = DataFrame({'A': [1., 2., 3.],
+ 'B': ['a', 'b', 'c']},
+ index=np.arange(3))
del df['A']
self.assertFalse(df.empty)
@@ -2965,9 +2970,9 @@ def test_repr_mixed(self):
@slow
def test_repr_mixed_big(self):
# big mixed
- biggie = DataFrame({'A' : randn(200),
- 'B' : tm.makeStringIndex(200)},
- index=range(200))
+ biggie = DataFrame({'A': randn(200),
+ 'B': tm.makeStringIndex(200)},
+ index=range(200))
biggie['A'][:20] = nan
biggie['B'][:20] = nan
@@ -2993,7 +2998,7 @@ def test_repr(self):
# no columns or index
self.empty.info(buf=buf)
- df = DataFrame(["a\n\r\tb"],columns=["a\n\r\td"],index=["a\n\r\tf"])
+ df = DataFrame(["a\n\r\tb"], columns=["a\n\r\td"], index=["a\n\r\tf"])
self.assertFalse("\t" in repr(df))
self.assertFalse("\r" in repr(df))
self.assertFalse("a\n" in repr(df))
@@ -3015,10 +3020,11 @@ def test_repr_unsortable(self):
category=FutureWarning,
module=".*format")
- unsortable = DataFrame({'foo' : [1] * 50,
- datetime.today() : [1] * 50,
- 'bar' : ['bar'] * 50,
- datetime.today() + timedelta(1) : ['bar'] * 50},
+ unsortable = DataFrame({'foo': [1] * 50,
+ datetime.today(): [1] * 50,
+ 'bar': ['bar'] * 50,
+ datetime.today(
+ ) + timedelta(1): ['bar'] * 50},
index=np.arange(50))
foo = repr(unsortable)
@@ -3120,16 +3126,16 @@ def test_pop(self):
self.assert_('foo' not in self.frame)
def test_pop_non_unique_cols(self):
- df=DataFrame({0:[0,1],1:[0,1],2:[4,5]})
- df.columns=["a","b","a"]
+ df = DataFrame({0: [0, 1], 1: [0, 1], 2: [4, 5]})
+ df.columns = ["a", "b", "a"]
- res=df.pop("a")
- self.assertEqual(type(res),DataFrame)
- self.assertEqual(len(res),2)
- self.assertEqual(len(df.columns),1)
+ res = df.pop("a")
+ self.assertEqual(type(res), DataFrame)
+ self.assertEqual(len(res), 2)
+ self.assertEqual(len(df.columns), 1)
self.assertTrue("b" in df.columns)
self.assertFalse("a" in df.columns)
- self.assertEqual(len(df.index),2)
+ self.assertEqual(len(df.index), 2)
def test_iter(self):
self.assert_(tm.equalContents(list(self.frame), self.frame.columns))
@@ -3147,7 +3153,7 @@ def test_itertuples(self):
for i, tup in enumerate(self.frame.itertuples()):
s = Series(tup[1:])
s.name = tup[0]
- expected = self.frame.ix[i,:].reset_index(drop=True)
+ expected = self.frame.ix[i, :].reset_index(drop=True)
assert_series_equal(s, expected)
df = DataFrame({'floats': np.random.randn(5),
@@ -3186,12 +3192,12 @@ def test_operators(self):
expected = self.frame2 * 2
assert_frame_equal(added, expected)
- df = DataFrame({'a' : ['a', None, 'b']})
- assert_frame_equal(df + df, DataFrame({'a' : ['aa', np.nan, 'bb']}))
+ df = DataFrame({'a': ['a', None, 'b']})
+ assert_frame_equal(df + df, DataFrame({'a': ['aa', np.nan, 'bb']}))
def test_operators_none_as_na(self):
- df = DataFrame({"col1": [2,5.0,123,None],
- "col2": [1,2,3,4]}, dtype=object)
+ df = DataFrame({"col1": [2, 5.0, 123, None],
+ "col2": [1, 2, 3, 4]}, dtype=object)
ops = [operator.add, operator.sub, operator.mul, operator.truediv]
@@ -3284,7 +3290,7 @@ def test_neg(self):
assert_frame_equal(-self.frame, -1 * self.frame)
def test_invert(self):
- assert_frame_equal(-(self.frame < 0), ~(self.frame <0))
+ assert_frame_equal(-(self.frame < 0), ~(self.frame < 0))
def test_first_last_valid(self):
N = len(self.frame.index)
@@ -3292,7 +3298,7 @@ def test_first_last_valid(self):
mat[:5] = nan
mat[-5:] = nan
- frame = DataFrame({'foo' : mat}, index=self.frame.index)
+ frame = DataFrame({'foo': mat}, index=self.frame.index)
index = frame.first_valid_index()
self.assert_(index == frame.index[5])
@@ -3341,9 +3347,8 @@ def test_arith_mixed(self):
'B': [2, 4, 6]})
assert_frame_equal(result, expected)
-
def test_arith_getitem_commute(self):
- df = DataFrame({'A' : [1.1,3.3],'B' : [2.5,-3.9]})
+ df = DataFrame({'A': [1.1, 3.3], 'B': [2.5, -3.9]})
self._test_op(df, operator.add)
self._test_op(df, operator.sub)
@@ -3490,23 +3495,23 @@ def _test_seq(df, idx_ser, col_ser):
# complex
arr = np.array([np.nan, 1, 6, np.nan])
arr2 = np.array([2j, np.nan, 7, None])
- df = DataFrame({'a' : arr})
- df2 = DataFrame({'a' : arr2})
+ df = DataFrame({'a': arr})
+ df2 = DataFrame({'a': arr2})
rs = df.gt(df2)
self.assert_(not rs.values.any())
rs = df.ne(df2)
self.assert_(rs.values.all())
arr3 = np.array([2j, np.nan, None])
- df3 = DataFrame({'a' : arr3})
+ df3 = DataFrame({'a': arr3})
rs = df3.gt(2j)
self.assert_(not rs.values.any())
# corner, dtype=object
- df1 = DataFrame({'col' : ['foo', np.nan, 'bar']})
- df2 = DataFrame({'col' : ['foo', datetime.now(), 'bar']})
+ df1 = DataFrame({'col': ['foo', np.nan, 'bar']})
+ df2 = DataFrame({'col': ['foo', datetime.now(), 'bar']})
result = df1.ne(df2)
- exp = DataFrame({'col' : [False, True, False]})
+ exp = DataFrame({'col': [False, True, False]})
assert_frame_equal(result, exp)
def test_arith_flex_series(self):
@@ -3537,7 +3542,6 @@ def test_arith_non_pandas_object(self):
index=df.index, columns=df.columns)
assert_frame_equal(df.add(val1, axis=0), added)
-
val2 = list(df['two'])
added = DataFrame(df.values + val2, index=df.index, columns=df.columns)
@@ -3559,8 +3563,8 @@ def test_combineFrame(self):
added = self.frame + frame_copy
tm.assert_dict_equal(added['A'].valid(),
- self.frame['A'] * 2,
- compare_keys=False)
+ self.frame['A'] * 2,
+ compare_keys=False)
self.assert_(np.isnan(added['C'].reindex(frame_copy.index)[:5]).all())
@@ -3685,14 +3689,14 @@ def test_comp(func):
test_comp(operator.le)
def test_string_comparison(self):
- df = DataFrame([{ "a" : 1, "b" : "foo" }, {"a" : 2, "b" : "bar"}])
+ df = DataFrame([{"a": 1, "b": "foo"}, {"a": 2, "b": "bar"}])
mask_a = df.a > 1
- assert_frame_equal(df[mask_a], df.ix[1:1,:])
- assert_frame_equal(df[-mask_a], df.ix[0:0,:])
+ assert_frame_equal(df[mask_a], df.ix[1:1, :])
+ assert_frame_equal(df[-mask_a], df.ix[0:0, :])
mask_b = df.b == "foo"
- assert_frame_equal(df[mask_b], df.ix[0:0,:])
- assert_frame_equal(df[-mask_b], df.ix[1:1,:])
+ assert_frame_equal(df[mask_b], df.ix[0:0, :])
+ assert_frame_equal(df[-mask_b], df.ix[1:1, :])
def test_float_none_comparison(self):
df = DataFrame(np.random.randn(8, 3), index=range(8),
@@ -3727,15 +3731,13 @@ def test_to_csv_from_csv(self):
assert_almost_equal(self.tsframe.values, recons.values)
# corner case
- dm = DataFrame({'s1' : Series(range(3),range(3)),
- 's2' : Series(range(2),range(2))})
+ dm = DataFrame({'s1': Series(range(3), range(3)),
+ 's2': Series(range(2), range(2))})
dm.to_csv(path)
recons = DataFrame.from_csv(path)
assert_frame_equal(dm, recons)
-
-
- #duplicate index
+ # duplicate index
df = DataFrame(np.random.randn(3, 3), index=['a', 'a', 'b'],
columns=['x', 'y', 'z'])
df.to_csv(path)
@@ -3805,7 +3807,7 @@ def test_to_csv_multiindex(self):
frame = self.frame
old_index = frame.index
- arrays = np.arange(len(old_index)*2).reshape(2,-1)
+ arrays = np.arange(len(old_index) * 2).reshape(2, -1)
new_index = MultiIndex.from_arrays(arrays, names=['first', 'second'])
frame.index = new_index
frame.to_csv(path, header=False)
@@ -3813,11 +3815,11 @@ def test_to_csv_multiindex(self):
# round trip
frame.to_csv(path)
- df = DataFrame.from_csv(path, index_col=[0,1], parse_dates=False)
+ df = DataFrame.from_csv(path, index_col=[0, 1], parse_dates=False)
assert_frame_equal(frame, df)
self.assertEqual(frame.index.names, df.index.names)
- self.frame.index = old_index # needed if setUP becomes a classmethod
+ self.frame.index = old_index # needed if setUP becomes a classmethod
# try multiindex with dates
tsframe = self.tsframe
@@ -3825,8 +3827,8 @@ def test_to_csv_multiindex(self):
new_index = [old_index, np.arange(len(old_index))]
tsframe.index = MultiIndex.from_arrays(new_index)
- tsframe.to_csv(path, index_label = ['time','foo'])
- recons = DataFrame.from_csv(path, index_col=[0,1])
+ tsframe.to_csv(path, index_label=['time', 'foo'])
+ recons = DataFrame.from_csv(path, index_col=[0, 1])
assert_frame_equal(tsframe, recons)
# do not load index
@@ -3838,7 +3840,7 @@ def test_to_csv_multiindex(self):
tsframe.to_csv(path, index=False)
recons = DataFrame.from_csv(path, index_col=None)
assert_almost_equal(recons.values, self.tsframe.values)
- self.tsframe.index = old_index # needed if setUP becomes classmethod
+ self.tsframe.index = old_index # needed if setUP becomes classmethod
os.remove(path)
@@ -3869,7 +3871,7 @@ def test_to_csv_withcommas(self):
path = '__tmp_to_csv_withcommas__'
# Commas inside fields should be correctly escaped when saving as CSV.
- df = DataFrame({'A':[1,2,3], 'B':['5,6','7,8','9,0']})
+ df = DataFrame({'A': [1, 2, 3], 'B': ['5,6', '7,8', '9,0']})
df.to_csv(path)
df2 = DataFrame.from_csv(path)
assert_frame_equal(df2, df)
@@ -3879,7 +3881,7 @@ def test_to_csv_withcommas(self):
def test_to_csv_bug(self):
path = '__tmp_to_csv_bug__.csv'
f1 = StringIO('a,1.0\nb,2.0')
- df = DataFrame.from_csv(f1,header=None)
+ df = DataFrame.from_csv(f1, header=None)
newdf = DataFrame({'t': df[df.columns[0]]})
newdf.to_csv(path)
@@ -3890,7 +3892,7 @@ def test_to_csv_bug(self):
def test_to_csv_unicode(self):
path = '__tmp_to_csv_unicode__.csv'
- df = DataFrame({u'c/\u03c3':[1,2,3]})
+ df = DataFrame({u'c/\u03c3': [1, 2, 3]})
df.to_csv(path, encoding='UTF-8')
df2 = pan.read_csv(path, index_col=0, encoding='UTF-8')
assert_frame_equal(df, df2)
@@ -3902,10 +3904,12 @@ def test_to_csv_unicode(self):
os.remove(path)
def test_to_csv_unicode_index_col(self):
- buf=StringIO('')
- df=DataFrame([[u"\u05d0","d2","d3","d4"],["a1","a2","a3","a4"]],
- columns=[u"\u05d0",u"\u05d1",u"\u05d2",u"\u05d3"],
- index=[u"\u05d0",u"\u05d1"])
+ buf = StringIO('')
+ df = DataFrame(
+ [[u"\u05d0", "d2", "d3", "d4"], ["a1", "a2", "a3", "a4"]],
+ columns=[u"\u05d0",
+ u"\u05d1", u"\u05d2", u"\u05d3"],
+ index=[u"\u05d0", u"\u05d1"])
df.to_csv(buf, encoding='UTF-8')
buf.seek(0)
@@ -3957,7 +3961,7 @@ def test_to_csv_unicodewriter_quoting(self):
buf = StringIO()
df.to_csv(buf, index=False, quoting=csv.QUOTE_NONNUMERIC,
- encoding = 'utf-8')
+ encoding='utf-8')
result = buf.getvalue()
expected = ('"A","B"\n'
@@ -3992,15 +3996,13 @@ def test_to_csv_line_terminators(self):
self.assertEqual(buf.getvalue(), expected)
buf = StringIO()
- df.to_csv(buf) # The default line terminator remains \n
+ df.to_csv(buf) # The default line terminator remains \n
expected = (',A,B\n'
'one,1,4\n'
'two,2,5\n'
'three,3,6\n')
self.assertEqual(buf.getvalue(), expected)
-
-
def test_info(self):
io = StringIO()
self.frame.info(buf=io)
@@ -4034,7 +4036,6 @@ def test_info_wide(self):
self.assert_(rs == xp)
reset_option('display.max_info_columns')
-
def test_info_duplicate_columns(self):
io = StringIO()
@@ -4058,7 +4059,8 @@ def test_convert_objects(self):
self.assert_(converted['A'].dtype == np.float64)
def test_convert_objects_no_conversion(self):
- mixed1 = DataFrame({'a': [1,2,3], 'b': [4.0, 5, 6], 'c': ['x','y','z']})
+ mixed1 = DataFrame(
+ {'a': [1, 2, 3], 'b': [4.0, 5, 6], 'c': ['x', 'y', 'z']})
mixed2 = mixed1.convert_objects()
assert_frame_equal(mixed1, mixed2)
@@ -4072,7 +4074,7 @@ def test_append_series_dict(self):
self.assertRaises(Exception, df.append, series, verify_integrity=True)
result = df.append(series[::-1], ignore_index=True)
- expected = df.append(DataFrame({0 : series[::-1]}, index=df.columns).T,
+ expected = df.append(DataFrame({0: series[::-1]}, index=df.columns).T,
ignore_index=True)
assert_frame_equal(result, expected)
@@ -4081,7 +4083,7 @@ def test_append_series_dict(self):
assert_frame_equal(result, expected)
result = df.append(series[::-1][:3], ignore_index=True)
- expected = df.append(DataFrame({0 : series[::-1][:3]}).T,
+ expected = df.append(DataFrame({0: series[::-1][:3]}).T,
ignore_index=True)
assert_frame_equal(result, expected.ix[:, result.columns])
@@ -4128,9 +4130,9 @@ def test_asfreq(self):
def test_asfreq_datetimeindex(self):
from pandas import DatetimeIndex
- df = DataFrame({'A': [1,2,3]},
- index=[datetime(2011,11,01), datetime(2011,11,2),
- datetime(2011,11,3)])
+ df = DataFrame({'A': [1, 2, 3]},
+ index=[datetime(2011, 11, 01), datetime(2011, 11, 2),
+ datetime(2011, 11, 3)])
df = df.asfreq('B')
self.assert_(isinstance(df.index, DatetimeIndex))
@@ -4154,7 +4156,7 @@ def test_as_matrix(self):
mat = self.mixed_frame.as_matrix(['foo', 'A'])
self.assertEqual(mat[0, 0], 'bar')
- df = DataFrame({'real' : [1,2,3], 'complex' : [1j, 2j, 3j]})
+ df = DataFrame({'real': [1, 2, 3], 'complex': [1j, 2j, 3j]})
mat = df.as_matrix()
self.assertEqual(mat[0, 0], 1j)
@@ -4271,7 +4273,7 @@ def test_corr_constant(self):
def test_corr_int(self):
# dtypes other than float64 #1761
- df3 = DataFrame({"a":[1,2,3,4], "b":[1,2,3,4]})
+ df3 = DataFrame({"a": [1, 2, 3, 4], "b": [1, 2, 3, 4]})
# it works!
df3.cov()
@@ -4309,7 +4311,6 @@ def test_cov(self):
expected = self.mixed_frame.ix[:, ['A', 'B', 'C', 'D']].cov()
assert_frame_equal(result, expected)
-
def test_corrwith(self):
a = self.tsframe
noise = Series(randn(len(a)), index=a.index)
@@ -4369,7 +4370,7 @@ def test_dropEmptyRows(self):
mat = randn(N)
mat[:5] = nan
- frame = DataFrame({'foo' : mat}, index=self.frame.index)
+ frame = DataFrame({'foo': mat}, index=self.frame.index)
smaller_frame = frame.dropna(how='all')
self.assert_(np.array_equal(smaller_frame['foo'], mat[5:]))
@@ -4382,7 +4383,7 @@ def test_dropIncompleteRows(self):
mat = randn(N)
mat[:5] = nan
- frame = DataFrame({'foo' : mat}, index=self.frame.index)
+ frame = DataFrame({'foo': mat}, index=self.frame.index)
frame['bar'] = 5
smaller_frame = frame.dropna()
@@ -4450,12 +4451,12 @@ def test_dropna_multiple_axes(self):
assert_frame_equal(result2, expected)
def test_drop_duplicates(self):
- df = DataFrame({'AAA' : ['foo', 'bar', 'foo', 'bar',
- 'foo', 'bar', 'bar', 'foo'],
- 'B' : ['one', 'one', 'two', 'two',
- 'two', 'two', 'one', 'two'],
- 'C' : [1, 1, 2, 2, 2, 2, 1, 2],
- 'D' : range(8)})
+ df = DataFrame({'AAA': ['foo', 'bar', 'foo', 'bar',
+ 'foo', 'bar', 'bar', 'foo'],
+ 'B': ['one', 'one', 'two', 'two',
+ 'two', 'two', 'one', 'two'],
+ 'C': [1, 1, 2, 2, 2, 2, 1, 2],
+ 'D': range(8)})
# single column
result = df.drop_duplicates('AAA')
@@ -4490,12 +4491,12 @@ def test_drop_duplicates(self):
assert_frame_equal(result, expected)
def test_drop_duplicates_tuple(self):
- df = DataFrame({('AA', 'AB') : ['foo', 'bar', 'foo', 'bar',
- 'foo', 'bar', 'bar', 'foo'],
- 'B' : ['one', 'one', 'two', 'two',
- 'two', 'two', 'one', 'two'],
- 'C' : [1, 1, 2, 2, 2, 2, 1, 2],
- 'D' : range(8)})
+ df = DataFrame({('AA', 'AB'): ['foo', 'bar', 'foo', 'bar',
+ 'foo', 'bar', 'bar', 'foo'],
+ 'B': ['one', 'one', 'two', 'two',
+ 'two', 'two', 'one', 'two'],
+ 'C': [1, 1, 2, 2, 2, 2, 1, 2],
+ 'D': range(8)})
# single column
result = df.drop_duplicates(('AA', 'AB'))
@@ -4513,12 +4514,12 @@ def test_drop_duplicates_tuple(self):
def test_drop_duplicates_NA(self):
# none
- df = DataFrame({'A' : [None, None, 'foo', 'bar',
- 'foo', 'bar', 'bar', 'foo'],
- 'B' : ['one', 'one', 'two', 'two',
- 'two', 'two', 'one', 'two'],
- 'C' : [1.0, np.nan, np.nan, np.nan, 1., 1., 1, 1.],
- 'D' : range(8)})
+ df = DataFrame({'A': [None, None, 'foo', 'bar',
+ 'foo', 'bar', 'bar', 'foo'],
+ 'B': ['one', 'one', 'two', 'two',
+ 'two', 'two', 'one', 'two'],
+ 'C': [1.0, np.nan, np.nan, np.nan, 1., 1., 1, 1.],
+ 'D': range(8)})
# single column
result = df.drop_duplicates('A')
@@ -4539,12 +4540,12 @@ def test_drop_duplicates_NA(self):
assert_frame_equal(result, expected)
# nan
- df = DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
- 'foo', 'bar', 'bar', 'foo'],
- 'B' : ['one', 'one', 'two', 'two',
- 'two', 'two', 'one', 'two'],
- 'C' : [1.0, np.nan, np.nan, np.nan, 1., 1., 1, 1.],
- 'D' : range(8)})
+ df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
+ 'foo', 'bar', 'bar', 'foo'],
+ 'B': ['one', 'one', 'two', 'two',
+ 'two', 'two', 'one', 'two'],
+ 'C': [1.0, np.nan, np.nan, np.nan, 1., 1., 1, 1.],
+ 'D': range(8)})
# single column
result = df.drop_duplicates('C')
@@ -4565,12 +4566,12 @@ def test_drop_duplicates_NA(self):
assert_frame_equal(result, expected)
def test_drop_duplicates_inplace(self):
- orig = DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
- 'foo', 'bar', 'bar', 'foo'],
- 'B' : ['one', 'one', 'two', 'two',
- 'two', 'two', 'one', 'two'],
- 'C' : [1, 1, 2, 2, 2, 2, 1, 2],
- 'D' : range(8)})
+ orig = DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
+ 'foo', 'bar', 'bar', 'foo'],
+ 'B': ['one', 'one', 'two', 'two',
+ 'two', 'two', 'one', 'two'],
+ 'C': [1, 1, 2, 2, 2, 2, 1, 2],
+ 'D': range(8)})
# single column
df = orig.copy()
@@ -4618,16 +4619,16 @@ def test_drop_duplicates_inplace(self):
assert_frame_equal(result, expected)
def test_drop_col_still_multiindex(self):
- arrays = [[ 'a', 'b', 'c', 'top'],
- [ '', '', '', 'OD' ],
- [ '', '', '', 'wx' ]]
+ arrays = [['a', 'b', 'c', 'top'],
+ ['', '', '', 'OD'],
+ ['', '', '', 'wx']]
tuples = zip(*arrays)
tuples.sort()
index = MultiIndex.from_tuples(tuples)
- df = DataFrame(randn(3,4), columns=index)
- del df[('a','','')]
+ df = DataFrame(randn(3, 4), columns=index)
+ del df[('a', '', '')]
assert(isinstance(df.columns, MultiIndex))
def test_fillna(self):
@@ -4706,7 +4707,7 @@ def test_fillna_dict_series(self):
assert_frame_equal(result, expected)
# it works
- result = df.fillna({'a': 0, 'b': 5, 'd' : 7})
+ result = df.fillna({'a': 0, 'b': 5, 'd': 7})
# Series treated same as dict
result = df.fillna(df.max())
@@ -4791,12 +4792,13 @@ def test_replace_interpolate(self):
padded = self.tsframe.replace(nan, method='pad')
assert_frame_equal(padded, self.tsframe.fillna(method='pad'))
- result = self.tsframe.replace(to_replace={'A' : nan}, method='pad',
+ result = self.tsframe.replace(to_replace={'A': nan}, method='pad',
axis=1)
- expected = self.tsframe.T.replace(to_replace={'A' : nan}, method='pad').T
+ expected = self.tsframe.T.replace(
+ to_replace={'A': nan}, method='pad').T
assert_frame_equal(result, expected)
- result = self.tsframe.replace(to_replace={'A' : nan, 'B' : -1e8},
+ result = self.tsframe.replace(to_replace={'A': nan, 'B': -1e8},
method='bfill')
tsframe = self.tsframe.copy()
b = tsframe['B']
@@ -4831,9 +4833,9 @@ def test_replace_interpolate(self):
def test_replace_dtypes(self):
# int
- df = DataFrame({'ints' : [1,2,3]})
+ df = DataFrame({'ints': [1, 2, 3]})
result = df.replace(1, 0)
- expected = DataFrame({'ints' : [0,2,3]})
+ expected = DataFrame({'ints': [0, 2, 3]})
assert_frame_equal(result, expected)
# bools
@@ -4841,7 +4843,7 @@ def test_replace_dtypes(self):
result = df.replace(False, True)
self.assert_(result.values.all())
- #complex blocks
+ # complex blocks
df = DataFrame({'complex': [1j, 2j, 3j]})
result = df.replace(1j, 0j)
expected = DataFrame({'complex': [0j, 2j, 3j]})
@@ -4850,17 +4852,17 @@ def test_replace_dtypes(self):
# datetime blocks
prev = datetime.today()
now = datetime.today()
- df = DataFrame({'datetime64' : Index([prev, now, prev])})
+ df = DataFrame({'datetime64': Index([prev, now, prev])})
result = df.replace(prev, now)
- expected = DataFrame({'datetime64' : Index([now] * 3)})
+ expected = DataFrame({'datetime64': Index([now] * 3)})
assert_frame_equal(result, expected)
def test_replace_input_formats(self):
# both dicts
- to_rep = {'A' : np.nan, 'B' : 0, 'C' : ''}
- values = {'A' : 0, 'B' : -1, 'C' : 'missing'}
- df = DataFrame({'A' : [np.nan, 0, np.inf], 'B' : [0, 2, 5],
- 'C' : ['', 'asdf', 'fd']})
+ to_rep = {'A': np.nan, 'B': 0, 'C': ''}
+ values = {'A': 0, 'B': -1, 'C': 'missing'}
+ df = DataFrame({'A': [np.nan, 0, np.inf], 'B': [0, 2, 5],
+ 'C': ['', 'asdf', 'fd']})
filled = df.replace(to_rep, values)
expected = {}
for k, v in df.iteritems():
@@ -4868,8 +4870,8 @@ def test_replace_input_formats(self):
assert_frame_equal(filled, DataFrame(expected))
result = df.replace([0, 2, 5], [5, 2, 0])
- expected = DataFrame({'A' : [np.nan, 5, np.inf], 'B' : [5, 2, 0],
- 'C' : ['', 'asdf', 'fd']})
+ expected = DataFrame({'A': [np.nan, 5, np.inf], 'B': [5, 2, 0],
+ 'C': ['', 'asdf', 'fd']})
assert_frame_equal(result, expected)
# dict to scalar
@@ -4882,9 +4884,9 @@ def test_replace_input_formats(self):
self.assertRaises(ValueError, df.replace, to_rep, [np.nan, 0, ''])
# scalar to dict
- values = {'A' : 0, 'B' : -1, 'C' : 'missing'}
- df = DataFrame({'A' : [np.nan, 0, np.nan], 'B' : [0, 2, 5],
- 'C' : ['', 'asdf', 'fd']})
+ values = {'A': 0, 'B': -1, 'C': 'missing'}
+ df = DataFrame({'A': [np.nan, 0, np.nan], 'B': [0, 2, 5],
+ 'C': ['', 'asdf', 'fd']})
filled = df.replace(np.nan, values)
expected = {}
for k, v in df.iteritems():
@@ -5003,8 +5005,8 @@ def test_xs(self):
# mixed-type xs
test_data = {
- 'A' : {'1' : 1, '2' : 2},
- 'B' : {'1' : '1', '2' : '2', '3' : '3'},
+ 'A': {'1': 1, '2': 2},
+ 'B': {'1': '1', '2': '2', '3': '3'},
}
frame = DataFrame(test_data)
xs = frame.xs('1')
@@ -5056,17 +5058,18 @@ def test_xs_duplicates(self):
def test_pivot(self):
data = {
- 'index' : ['A', 'B', 'C', 'C', 'B', 'A'],
- 'columns' : ['One', 'One', 'One', 'Two', 'Two', 'Two'],
- 'values' : [1., 2., 3., 3., 2., 1.]
+ 'index': ['A', 'B', 'C', 'C', 'B', 'A'],
+ 'columns': ['One', 'One', 'One', 'Two', 'Two', 'Two'],
+ 'values': [1., 2., 3., 3., 2., 1.]
}
frame = DataFrame(data)
- pivoted = frame.pivot(index='index', columns='columns', values='values')
+ pivoted = frame.pivot(
+ index='index', columns='columns', values='values')
expected = DataFrame({
- 'One' : {'A' : 1., 'B' : 2., 'C' : 3.},
- 'Two' : {'A' : 1., 'B' : 2., 'C' : 3.}
+ 'One': {'A': 1., 'B': 2., 'C': 3.},
+ 'Two': {'A': 1., 'B': 2., 'C': 3.}
})
assert_frame_equal(pivoted, expected)
@@ -5087,9 +5090,9 @@ def test_pivot(self):
assert_frame_equal(df.pivot('major', 'minor'), lp.unstack())
def test_pivot_duplicates(self):
- data = DataFrame({'a' : ['bar', 'bar', 'foo', 'foo', 'foo'],
- 'b' : ['one', 'two', 'one', 'one', 'two'],
- 'c' : [1., 2., 3., 3., 4.]})
+ data = DataFrame({'a': ['bar', 'bar', 'foo', 'foo', 'foo'],
+ 'b': ['one', 'two', 'one', 'one', 'two'],
+ 'c': [1., 2., 3., 3., 4.]})
self.assertRaises(Exception, data.pivot, 'a', 'b', 'c')
def test_pivot_empty(self):
@@ -5138,7 +5141,7 @@ def test_reindex(self):
for col, series in nonContigFrame.iteritems():
self.assert_(tm.equalContents(series.index,
- nonContigFrame.index))
+ nonContigFrame.index))
# corner cases
@@ -5258,7 +5261,7 @@ def test_align(self):
other = self.frame.ix[:-5, :3]
af, bf = self.frame.align(other, axis=0, fill_value=-1)
self.assert_(bf.columns.equals(other.columns))
- #test fill value
+ # test fill value
join_idx = self.frame.index.join(other.index)
diff_a = self.frame.index.diff(join_idx)
diff_b = other.index.diff(join_idx)
@@ -5277,7 +5280,7 @@ def test_align(self):
self.assert_(bf.columns.equals(self.frame.columns))
self.assert_(bf.index.equals(other.index))
- #test fill value
+ # test fill value
join_idx = self.frame.index.join(other.index)
diff_a = self.frame.index.diff(join_idx)
diff_b = other.index.diff(join_idx)
@@ -5299,16 +5302,16 @@ def test_align(self):
join='inner', axis=1, method='pad')
self.assert_(bf.columns.equals(self.mixed_frame.columns))
- af, bf = self.frame.align(other.ix[:,0], join='inner', axis=1,
+ af, bf = self.frame.align(other.ix[:, 0], join='inner', axis=1,
method=None, fill_value=None)
self.assert_(bf.index.equals(Index([])))
- af, bf = self.frame.align(other.ix[:,0], join='inner', axis=1,
+ af, bf = self.frame.align(other.ix[:, 0], join='inner', axis=1,
method=None, fill_value=0)
self.assert_(bf.index.equals(Index([])))
# try to align dataframe to series along bad axis
- self.assertRaises(ValueError, self.frame.align, af.ix[0,:3],
+ self.assertRaises(ValueError, self.frame.align, af.ix[0, :3],
join='inner', axis=2)
def _check_align(self, a, b, axis, fill_axis, how, method, limit=None):
@@ -5324,7 +5327,7 @@ def _check_align(self, a, b, axis, fill_axis, how, method, limit=None):
eb = eb.reindex(index=join_index)
if axis is None or axis == 1:
- join_columns = a.columns.join(b.columns, how=how)
+ join_columns = a.columns.join(b.columns, how=how)
ea = ea.reindex(columns=join_columns)
eb = eb.reindex(columns=join_columns)
@@ -5386,11 +5389,10 @@ def _check_align_fill(self, kind, meth, ax, fax):
self._check_align(empty, empty, axis=ax, fill_axis=fax,
how=kind, method=meth, limit=1)
-
def test_align_int_fill_bug(self):
# GH #910
- X = np.random.rand(10,10)
- Y = np.ones((10,1),dtype=int)
+ X = np.random.rand(10, 10)
+ Y = np.ones((10, 1), dtype=int)
df1 = DataFrame(X)
df1['0.X'] = Y.squeeze()
@@ -5450,10 +5452,8 @@ def test_mask(self):
assert_frame_equal(rs, df.mask(df <= 0))
assert_frame_equal(rs, df.mask(~cond))
-
#----------------------------------------------------------------------
# Transposing
-
def test_transpose(self):
frame = self.frame
dft = frame.T
@@ -5483,10 +5483,10 @@ def test_transpose_get_view(self):
def test_rename(self):
mapping = {
- 'A' : 'a',
- 'B' : 'b',
- 'C' : 'c',
- 'D' : 'd'
+ 'A': 'a',
+ 'B': 'b',
+ 'C': 'c',
+ 'D': 'd'
}
renamed = self.frame.rename(columns=mapping)
@@ -5499,12 +5499,12 @@ def test_rename(self):
# index
data = {
- 'A' : {'foo' : 0, 'bar' : 1}
+ 'A': {'foo': 0, 'bar': 1}
}
# gets sorted alphabetical
df = DataFrame(data)
- renamed = df.rename(index={'foo' : 'bar', 'bar' : 'foo'})
+ renamed = df.rename(index={'foo': 'bar', 'bar': 'foo'})
self.assert_(np.array_equal(renamed.index, ['foo', 'bar']))
renamed = df.rename(index=str.upper)
@@ -5514,26 +5514,26 @@ def test_rename(self):
self.assertRaises(Exception, self.frame.rename)
# partial columns
- renamed = self.frame.rename(columns={'C' : 'foo', 'D' : 'bar'})
+ renamed = self.frame.rename(columns={'C': 'foo', 'D': 'bar'})
self.assert_(np.array_equal(renamed.columns, ['A', 'B', 'foo', 'bar']))
# other axis
- renamed = self.frame.T.rename(index={'C' : 'foo', 'D' : 'bar'})
+ renamed = self.frame.T.rename(index={'C': 'foo', 'D': 'bar'})
self.assert_(np.array_equal(renamed.index, ['A', 'B', 'foo', 'bar']))
def test_rename_nocopy(self):
- renamed = self.frame.rename(columns={'C' : 'foo'}, copy=False)
+ renamed = self.frame.rename(columns={'C': 'foo'}, copy=False)
renamed['foo'] = 1.
self.assert_((self.frame['C'] == 1.).all())
def test_rename_inplace(self):
- self.frame.rename(columns={'C' : 'foo'})
+ self.frame.rename(columns={'C': 'foo'})
self.assert_('C' in self.frame)
self.assert_('foo' not in self.frame)
c_id = id(self.frame['C'])
frame = self.frame.copy()
- res = frame.rename(columns={'C' : 'foo'}, inplace=True)
+ res = frame.rename(columns={'C': 'foo'}, inplace=True)
self.assertTrue(res is None)
@@ -5541,10 +5541,8 @@ def test_rename_inplace(self):
self.assert_('foo' in frame)
self.assert_(id(frame['foo']) != c_id)
-
#----------------------------------------------------------------------
# Time series related
-
def test_diff(self):
the_diff = self.tsframe.diff(1)
@@ -5598,8 +5596,8 @@ def test_pct_change_shift_over_nas(self):
df = DataFrame({'a': s, 'b': s})
chg = df.pct_change()
- expected = Series([np.nan, 0.5, np.nan, 2.5/1.5 -1, .2])
- edf = DataFrame({'a': expected, 'b':expected})
+ expected = Series([np.nan, 0.5, np.nan, 2.5 / 1.5 - 1, .2])
+ edf = DataFrame({'a': expected, 'b': expected})
assert_frame_equal(chg, edf)
def test_shift(self):
@@ -5652,8 +5650,8 @@ def test_shift(self):
self.assertRaises(ValueError, ps.shift, freq='D')
def test_shift_bool(self):
- df = DataFrame({'high':[True, False],
- 'low':[False, False]})
+ df = DataFrame({'high': [True, False],
+ 'low': [False, False]})
rs = df.shift(1)
xp = DataFrame(np.array([[np.nan, np.nan],
[True, False]], dtype=object),
@@ -5708,10 +5706,11 @@ def test_apply(self):
d = self.frame.index[0]
applied = self.frame.apply(np.mean, axis=1)
self.assertEqual(applied[d], np.mean(self.frame.xs(d)))
- self.assert_(applied.index is self.frame.index) # want this
+ self.assert_(applied.index is self.frame.index) # want this
- #invalid axis
- df = DataFrame([[1,2,3], [4,5,6], [7,8,9]], index=['a','a','c'])
+ # invalid axis
+ df = DataFrame(
+ [[1, 2, 3], [4, 5, 6], [7, 8, 9]], index=['a', 'a', 'c'])
self.assertRaises(ValueError, df.apply, lambda x: x, 2)
def test_apply_empty(self):
@@ -5732,13 +5731,14 @@ def test_apply_empty(self):
expected = Series(np.nan, index=self.frame.index)
assert_series_equal(result, expected)
- #2476
+ # 2476
xp = DataFrame(index=['a'])
rs = xp.apply(lambda x: x['a'], axis=1)
assert_frame_equal(xp, rs)
def test_apply_standard_nonunique(self):
- df = DataFrame([[1,2,3], [4,5,6], [7,8,9]], index=['a','a','c'])
+ df = DataFrame(
+ [[1, 2, 3], [4, 5, 6], [7, 8, 9]], index=['a', 'a', 'c'])
rs = df.apply(lambda s: s[0], axis=1)
xp = Series([1, 4, 7], ['a', 'a', 'c'])
assert_series_equal(rs, xp)
@@ -5787,8 +5787,8 @@ def test_apply_ignore_failures(self):
# test with hierarchical index
def test_apply_mixed_dtype_corner(self):
- df = DataFrame({'A' : ['foo'],
- 'B' : [1.]})
+ df = DataFrame({'A': ['foo'],
+ 'B': [1.]})
result = df[:0].apply(np.mean, axis=1)
# the result here is actually kind of ambiguous, should it be a Series
# or a DataFrame?
@@ -5873,18 +5873,18 @@ def test_apply_differently_indexed(self):
assert_frame_equal(result1, expected1)
def test_apply_modify_traceback(self):
- data = DataFrame({'A' : ['foo', 'foo', 'foo', 'foo',
- 'bar', 'bar', 'bar', 'bar',
- 'foo', 'foo', 'foo'],
- 'B' : ['one', 'one', 'one', 'two',
- 'one', 'one', 'one', 'two',
- 'two', 'two', 'one'],
- 'C' : ['dull', 'dull', 'shiny', 'dull',
- 'dull', 'shiny', 'shiny', 'dull',
- 'shiny', 'shiny', 'shiny'],
- 'D' : np.random.randn(11),
- 'E' : np.random.randn(11),
- 'F' : np.random.randn(11)})
+ data = DataFrame({'A': ['foo', 'foo', 'foo', 'foo',
+ 'bar', 'bar', 'bar', 'bar',
+ 'foo', 'foo', 'foo'],
+ 'B': ['one', 'one', 'one', 'two',
+ 'one', 'one', 'one', 'two',
+ 'two', 'two', 'one'],
+ 'C': ['dull', 'dull', 'shiny', 'dull',
+ 'dull', 'shiny', 'shiny', 'dull',
+ 'shiny', 'shiny', 'shiny'],
+ 'D': np.random.randn(11),
+ 'E': np.random.randn(11),
+ 'F': np.random.randn(11)})
data['C'][4] = np.nan
@@ -5894,8 +5894,8 @@ def transform(row):
return row
def transform2(row):
- if (notnull(row['C']) and row['C'].startswith('shin')
- and row['A'] == 'foo'):
+ if (notnull(row['C']) and row['C'].startswith('shin')
+ and row['A'] == 'foo'):
row['D'] = 7
return row
@@ -5913,18 +5913,18 @@ def test_swapaxes(self):
self.assertRaises(ValueError, df.swapaxes, 2, 5)
def test_apply_convert_objects(self):
- data = DataFrame({'A' : ['foo', 'foo', 'foo', 'foo',
- 'bar', 'bar', 'bar', 'bar',
- 'foo', 'foo', 'foo'],
- 'B' : ['one', 'one', 'one', 'two',
- 'one', 'one', 'one', 'two',
- 'two', 'two', 'one'],
- 'C' : ['dull', 'dull', 'shiny', 'dull',
- 'dull', 'shiny', 'shiny', 'dull',
- 'shiny', 'shiny', 'shiny'],
- 'D' : np.random.randn(11),
- 'E' : np.random.randn(11),
- 'F' : np.random.randn(11)})
+ data = DataFrame({'A': ['foo', 'foo', 'foo', 'foo',
+ 'bar', 'bar', 'bar', 'bar',
+ 'foo', 'foo', 'foo'],
+ 'B': ['one', 'one', 'one', 'two',
+ 'one', 'one', 'one', 'two',
+ 'two', 'two', 'one'],
+ 'C': ['dull', 'dull', 'shiny', 'dull',
+ 'dull', 'shiny', 'shiny', 'dull',
+ 'shiny', 'shiny', 'shiny'],
+ 'D': np.random.randn(11),
+ 'E': np.random.randn(11),
+ 'F': np.random.randn(11)})
result = data.apply(lambda x: x, axis=1)
assert_frame_equal(result, data)
@@ -5979,7 +5979,7 @@ def test_filter(self):
self.assert_('AA' in filtered)
# like with ints in column names
- df = DataFrame(0., index=[0,1,2], columns=[0,1,'_A','_B'])
+ df = DataFrame(0., index=[0, 1, 2], columns=[0, 1, '_A', '_B'])
filtered = df.filter(like='_')
self.assertEqual(len(filtered.columns), 2)
@@ -6090,8 +6090,8 @@ def test_sort_index_multicolumn(self):
B = np.tile(np.arange(5), 20)
random.shuffle(A)
random.shuffle(B)
- frame = DataFrame({'A' : A, 'B' : B,
- 'C' : np.random.randn(100)})
+ frame = DataFrame({'A': A, 'B': B,
+ 'C': np.random.randn(100)})
result = frame.sort_index(by=['A', 'B'])
indexer = np.lexsort((frame['B'], frame['A']))
@@ -6150,8 +6150,8 @@ def test_sort_index_different_sortorder(self):
A = A.take(indexer)
B = B.take(indexer)
- df = DataFrame({'A' : A, 'B' : B,
- 'C' : np.random.randn(100)})
+ df = DataFrame({'A': A, 'B': B,
+ 'C': np.random.randn(100)})
result = df.sort_index(by=['A', 'B'], ascending=[1, 0])
@@ -6209,7 +6209,7 @@ def test_frame_column_inplace_sort_exception(self):
self.assertRaises(Exception, s.sort)
cp = s.copy()
- cp.sort() # it works!
+ cp.sort() # it works!
def test_combine_first(self):
# disjoint
@@ -6263,34 +6263,33 @@ def test_combine_first(self):
comb = self.empty.combine_first(self.frame)
assert_frame_equal(comb, self.frame)
- comb = self.frame.combine_first(DataFrame(index=["faz","boo"]))
+ comb = self.frame.combine_first(DataFrame(index=["faz", "boo"]))
self.assertTrue("faz" in comb.index)
# #2525
- df = DataFrame({'a': [1]}, index=[datetime(2012,1,1)])
+ df = DataFrame({'a': [1]}, index=[datetime(2012, 1, 1)])
df2 = DataFrame({}, columns=['b'])
result = df.combine_first(df2)
self.assertTrue('b' in result)
def test_combine_first_mixed_bug(self):
- idx = Index(['a','b','c','e'])
- ser1 = Series([5.0,-9.0,4.0,100.],index=idx)
+ idx = Index(['a', 'b', 'c', 'e'])
+ ser1 = Series([5.0, -9.0, 4.0, 100.], index=idx)
ser2 = Series(['a', 'b', 'c', 'e'], index=idx)
- ser3 = Series([12,4,5,97], index=idx)
-
- frame1 = DataFrame({"col0" : ser1,
- "col2" : ser2,
- "col3" : ser3})
+ ser3 = Series([12, 4, 5, 97], index=idx)
- idx = Index(['a','b','c','f'])
- ser1 = Series([5.0,-9.0,4.0,100.], index=idx)
- ser2 = Series(['a','b','c','f'], index=idx)
- ser3 = Series([12,4,5,97],index=idx)
+ frame1 = DataFrame({"col0": ser1,
+ "col2": ser2,
+ "col3": ser3})
- frame2 = DataFrame({"col1" : ser1,
- "col2" : ser2,
- "col5" : ser3})
+ idx = Index(['a', 'b', 'c', 'f'])
+ ser1 = Series([5.0, -9.0, 4.0, 100.], index=idx)
+ ser2 = Series(['a', 'b', 'c', 'f'], index=idx)
+ ser3 = Series([12, 4, 5, 97], index=idx)
+ frame2 = DataFrame({"col1": ser1,
+ "col2": ser2,
+ "col5": ser3})
combined = frame1.combine_first(frame2)
self.assertEqual(len(combined.columns), 5)
@@ -6353,10 +6352,10 @@ def test_update_raise(self):
[1.5, nan, 3]])
other = DataFrame([[2., nan],
- [nan, 7]], index=[1, 3], columns=[1,2])
+ [nan, 7]], index=[1, 3], columns=[1, 2])
np.testing.assert_raises(Exception, df.update, *(other,),
- **{'raise_conflict' : True})
+ **{'raise_conflict': True})
def test_update_from_non_df(self):
d = {'a': Series([1, 2, 3, 4]), 'b': Series([5, 6, 7, 8])}
@@ -6408,17 +6407,17 @@ def test_combineAdd(self):
assert_frame_equal(comb, self.frame)
# integer corner case
- df1 = DataFrame({'x':[5]})
- df2 = DataFrame({'x':[1]})
- df3 = DataFrame({'x':[6]})
+ df1 = DataFrame({'x': [5]})
+ df2 = DataFrame({'x': [1]})
+ df3 = DataFrame({'x': [6]})
comb = df1.combineAdd(df2)
assert_frame_equal(comb, df3)
# mixed type GH2191
- df1 = DataFrame({'A' : [1, 2], 'B' : [3, 4]})
- df2 = DataFrame({'A' : [1, 2], 'C' : [5, 6]})
+ df1 = DataFrame({'A': [1, 2], 'B': [3, 4]})
+ df2 = DataFrame({'A': [1, 2], 'C': [5, 6]})
rs = df1.combineAdd(df2)
- xp = DataFrame({'A' : [2, 4], 'B' : [3, 4.], 'C' : [5, 6.]})
+ xp = DataFrame({'A': [2, 4], 'B': [3, 4.], 'C': [5, 6.]})
assert_frame_equal(xp, rs)
# TODO: test integer fill corner?
@@ -6468,17 +6467,17 @@ def test_get_X_columns(self):
# numeric and object columns
# Booleans get casted to float in DataFrame, so skip for now
- df = DataFrame({'a' : [1, 2, 3],
- # 'b' : [True, False, True],
- 'c' : ['foo', 'bar', 'baz'],
- 'd' : [None, None, None],
- 'e' : [3.14, 0.577, 2.773]})
+ df = DataFrame({'a': [1, 2, 3],
+ # 'b' : [True, False, True],
+ 'c': ['foo', 'bar', 'baz'],
+ 'd': [None, None, None],
+ 'e': [3.14, 0.577, 2.773]})
self.assert_(np.array_equal(df._get_numeric_data().columns,
['a', 'e']))
def test_get_numeric_data(self):
- df = DataFrame({'a' : 1., 'b' : 2, 'c' : 'foo'},
+ df = DataFrame({'a': 1., 'b': 2, 'c': 'foo'},
index=np.arange(10))
result = df._get_numeric_data()
@@ -6526,13 +6525,13 @@ def test_sum(self):
def test_stat_operators_attempt_obj_array(self):
data = {
'a': [-0.00049987540199591344, -0.0016467257772919831,
- 0.00067695870775883013],
+ 0.00067695870775883013],
'b': [-0, -0, 0.0],
'c': [0.00031111847529610595, 0.0014902627951905339,
-0.00094099200035979691]
}
df1 = DataFrame(data, index=['foo', 'bar', 'baz'],
- dtype='O')
+ dtype='O')
methods = ['sum', 'mean', 'prod', 'var', 'std', 'skew', 'min', 'max']
# GH #676
@@ -6580,7 +6579,7 @@ def test_cummin(self):
assert_frame_equal(cummin, expected)
# works
- df = DataFrame({'A' : np.arange(20)}, index=np.arange(20))
+ df = DataFrame({'A': np.arange(20)}, index=np.arange(20))
result = df.cummin()
# fix issue
@@ -6603,14 +6602,13 @@ def test_cummax(self):
assert_frame_equal(cummax, expected)
# works
- df = DataFrame({'A' : np.arange(20)}, index=np.arange(20))
+ df = DataFrame({'A': np.arange(20)}, index=np.arange(20))
result = df.cummax()
# fix issue
cummax_xs = self.tsframe.cummax(axis=1)
self.assertEqual(np.shape(cummax_xs), np.shape(self.tsframe))
-
def test_max(self):
self._check_stat_op('max', np.max)
self._check_stat_op('max', np.max, frame=self.intframe)
@@ -6697,7 +6695,7 @@ def wrapper(x):
result1 = f(axis=1, skipna=False)
assert_series_equal(result0, frame.apply(wrapper))
assert_series_equal(result1, frame.apply(wrapper, axis=1),
- check_dtype=False) # HACK: win32
+ check_dtype=False) # HACK: win32
else:
skipna_wrapper = alternative
wrapper = alternative
@@ -6743,7 +6741,7 @@ def test_sum_corner(self):
def test_sum_object(self):
values = self.frame.values.astype(int)
frame = DataFrame(values, index=self.frame.index,
- columns=self.frame.columns)
+ columns=self.frame.columns)
deltas = frame * timedelta(1)
deltas.sum()
@@ -6795,7 +6793,7 @@ def test_quantile(self):
self.assertEqual(q['A'], scoreatpercentile(self.intframe['A'], 10))
# test degenerate case
- q = DataFrame({'x':[],'y':[]}).quantile(0.1, axis=0)
+ q = DataFrame({'x': [], 'y': []}).quantile(0.1, axis=0)
assert(np.isnan(q['x']) and np.isnan(q['y']))
def test_cumsum(self):
@@ -6814,7 +6812,7 @@ def test_cumsum(self):
assert_frame_equal(cumsum, expected)
# works
- df = DataFrame({'A' : np.arange(20)}, index=np.arange(20))
+ df = DataFrame({'A': np.arange(20)}, index=np.arange(20))
result = df.cumsum()
# fix issue
@@ -6855,7 +6853,7 @@ def test_rank(self):
ranks0 = self.frame.rank()
ranks1 = self.frame.rank(1)
- mask = np.isnan(self.frame.values)
+ mask = np.isnan(self.frame.values)
fvals = self.frame.fillna(np.inf).values
@@ -6882,7 +6880,7 @@ def test_rank(self):
def test_rank2(self):
from datetime import datetime
- df = DataFrame([['b','c','a'],['a','c','b']])
+ df = DataFrame([['b', 'c', 'a'], ['a', 'c', 'b']])
expected = DataFrame([[2.0, 3.0, 1.0], [1, 3, 2]])
result = df.rank(1, numeric_only=False)
assert_frame_equal(result, expected)
@@ -6891,7 +6889,7 @@ def test_rank2(self):
result = df.rank(0, numeric_only=False)
assert_frame_equal(result, expected)
- df = DataFrame([['b',np.nan,'a'],['a','c','b']])
+ df = DataFrame([['b', np.nan, 'a'], ['a', 'c', 'b']])
expected = DataFrame([[2.0, nan, 1.0], [1.0, 3.0, 2.0]])
result = df.rank(1, numeric_only=False)
assert_frame_equal(result, expected)
@@ -6924,7 +6922,7 @@ def test_rank_na_option(self):
self.frame['C'][::4] = np.nan
self.frame['D'][::5] = np.nan
- #bottom
+ # bottom
ranks0 = self.frame.rank(na_option='bottom')
ranks1 = self.frame.rank(1, na_option='bottom')
@@ -6936,7 +6934,7 @@ def test_rank_na_option(self):
assert_almost_equal(ranks0.values, exp0)
assert_almost_equal(ranks1.values, exp1)
- #top
+ # top
ranks0 = self.frame.rank(na_option='top')
ranks1 = self.frame.rank(1, na_option='top')
@@ -6951,9 +6949,9 @@ def test_rank_na_option(self):
assert_almost_equal(ranks0.values, exp0)
assert_almost_equal(ranks1.values, exp1)
- #descending
+ # descending
- #bottom
+ # bottom
ranks0 = self.frame.rank(na_option='top', ascending=False)
ranks1 = self.frame.rank(1, na_option='top', ascending=False)
@@ -6965,9 +6963,9 @@ def test_rank_na_option(self):
assert_almost_equal(ranks0.values, exp0)
assert_almost_equal(ranks1.values, exp1)
- #descending
+ # descending
- #top
+ # top
ranks0 = self.frame.rank(na_option='bottom', ascending=False)
ranks1 = self.frame.rank(1, na_option='bottom', ascending=False)
@@ -6982,7 +6980,6 @@ def test_rank_na_option(self):
assert_almost_equal(ranks0.values, exp0)
assert_almost_equal(ranks1.values, exp1)
-
def test_describe(self):
desc = self.tsframe.describe()
desc = self.mixed_frame.describe()
@@ -6998,21 +6995,21 @@ def test_describe_percentiles(self):
assert '2.5%' in desc.index
def test_describe_no_numeric(self):
- df = DataFrame({'A' : ['foo', 'foo', 'bar'] * 8,
- 'B' : ['a', 'b', 'c', 'd'] * 6})
+ df = DataFrame({'A': ['foo', 'foo', 'bar'] * 8,
+ 'B': ['a', 'b', 'c', 'd'] * 6})
desc = df.describe()
expected = DataFrame(dict((k, v.describe())
for k, v in df.iteritems()),
columns=df.columns)
assert_frame_equal(desc, expected)
- df = DataFrame({'time' : self.tsframe.index})
+ df = DataFrame({'time': self.tsframe.index})
desc = df.describe()
assert(desc.time['first'] == min(self.tsframe.index))
def test_describe_empty_int_columns(self):
df = DataFrame([[0, 1], [1, 2]])
- desc = df[df[0] < 0].describe() #works
+ desc = df[df[0] < 0].describe() # works
assert_series_equal(desc.xs('count'),
Series([0, 0], dtype=float, name='count'))
self.assert_(isnull(desc.ix[1:]).all().all())
@@ -7030,13 +7027,13 @@ def test_get_axis_etc(self):
self.assertRaises(Exception, f._get_axis_number, 2)
def test_combine_first_mixed(self):
- a = Series(['a','b'], index=range(2))
+ a = Series(['a', 'b'], index=range(2))
b = Series(range(2), index=range(2))
- f = DataFrame({'A' : a, 'B' : b})
+ f = DataFrame({'A': a, 'B': b})
- a = Series(['a','b'], index=range(5, 7))
+ a = Series(['a', 'b'], index=range(5, 7))
b = Series(range(2), index=range(5, 7))
- g = DataFrame({'A' : a, 'B' : b})
+ g = DataFrame({'A': a, 'B': b})
combined = f.combine_first(g)
@@ -7046,8 +7043,8 @@ def test_more_asMatrix(self):
def test_reindex_boolean(self):
frame = DataFrame(np.ones((10, 2), dtype=bool),
- index=np.arange(0, 20, 2),
- columns=[0, 2])
+ index=np.arange(0, 20, 2),
+ columns=[0, 2])
reindexed = frame.reindex(np.arange(10))
self.assert_(reindexed.values.dtype == np.object_)
@@ -7094,7 +7091,7 @@ def test_reindex_axis(self):
assert_frame_equal(newFrame, self.frame)
def test_reindex_with_nans(self):
- df = DataFrame([[1,2], [3,4], [np.nan,np.nan], [7,8], [9,10]],
+ df = DataFrame([[1, 2], [3, 4], [np.nan, np.nan], [7, 8], [9, 10]],
columns=['a', 'b'],
index=[100.0, 101.0, np.nan, 102.0, 103.0])
@@ -7132,9 +7129,9 @@ def test_reindex_multi(self):
assert_frame_equal(result, expected)
- df = DataFrame(np.random.randn(5, 3) + 1j, columns=['a','b','c'])
+ df = DataFrame(np.random.randn(5, 3) + 1j, columns=['a', 'b', 'c'])
- result = df.reindex(index=[0,1], columns=['a', 'b'])
+ result = df.reindex(index=[0, 1], columns=['a', 'b'])
expected = df.reindex([0, 1]).reindex(columns=['a', 'b'])
assert_frame_equal(result, expected)
@@ -7164,7 +7161,7 @@ def test_count_objects(self):
def test_cumsum_corner(self):
dm = DataFrame(np.arange(20).reshape(4, 5),
- index=range(4), columns=range(5))
+ index=range(4), columns=range(5))
result = dm.cumsum()
#----------------------------------------------------------------------
@@ -7172,7 +7169,7 @@ def test_cumsum_corner(self):
def test_stack_unstack(self):
stacked = self.frame.stack()
- stacked_df = DataFrame({'foo' : stacked, 'bar' : stacked})
+ stacked_df = DataFrame({'foo': stacked, 'bar': stacked})
unstacked = stacked.unstack()
unstacked_df = stacked_df.unstack()
@@ -7207,12 +7204,12 @@ def test_unstack_to_series(self):
# check NA handling
data = DataFrame({'x': [1, 2, np.NaN], 'y': [3.0, 4, np.NaN]})
- data.index = Index(['a','b','c'])
+ data.index = Index(['a', 'b', 'c'])
result = data.unstack()
- midx = MultiIndex(levels=[['x','y'],['a','b','c']],
- labels=[[0,0,0,1,1,1],[0,1,2,0,1,2]])
- expected = Series([1,2,np.NaN,3,4,np.NaN], index=midx)
+ midx = MultiIndex(levels=[['x', 'y'], ['a', 'b', 'c']],
+ labels=[[0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 1, 2]])
+ expected = Series([1, 2, np.NaN, 3, 4, np.NaN], index=midx)
assert_series_equal(result, expected)
@@ -7224,7 +7221,7 @@ def test_unstack_to_series(self):
def test_reset_index(self):
stacked = self.frame.stack()[::2]
- stacked = DataFrame({'foo' : stacked, 'bar' : stacked})
+ stacked = DataFrame({'foo': stacked, 'bar': stacked})
names = ['first', 'second']
stacked.index.names = names
@@ -7280,7 +7277,7 @@ def test_reset_index(self):
xp = self.frame.reset_index().set_index(['index', 'B'])
assert_frame_equal(rs, xp)
- #test resetting in place
+ # test resetting in place
df = self.frame.copy()
resetted = self.frame.reset_index()
res = df.reset_index(inplace=True)
@@ -7295,8 +7292,8 @@ def test_reset_index(self):
assert_frame_equal(rs, xp)
def test_reset_index_right_dtype(self):
- time = np.arange(0.0, 10, np.sqrt(2)/2)
- s1 = Series((9.81 * time ** 2) /2,
+ time = np.arange(0.0, 10, np.sqrt(2) / 2)
+ s1 = Series((9.81 * time ** 2) / 2,
index=Index(time, name='time'),
name='speed')
df = DataFrame(s1)
@@ -7350,10 +7347,8 @@ def test_reset_index_multiindex_col(self):
['a', 'mean', 'median', 'mean']])
assert_frame_equal(rs, xp)
-
#----------------------------------------------------------------------
# Tests to cope with refactored internals
-
def test_as_matrix_numeric_cols(self):
self.frame['foo'] = 'bar'
@@ -7379,7 +7374,7 @@ def test_constructor_ndarray_copy(self):
def test_constructor_series_copy(self):
series = self.frame._series
- df = DataFrame({'A' : series['A']})
+ df = DataFrame({'A': series['A']})
df['A'][:] = 5
self.assert_(not (series['A'] == 5).all())
@@ -7479,9 +7474,9 @@ def test_boolean_indexing(self):
data=np.ones((len(idx), len(cols))))
expected = DataFrame(index=idx, columns=cols,
- data=np.array([[0.0, 0.5, 1.0],
- [1.5, 2.0, -1],
- [-1, -1, -1]], dtype=float))
+ data=np.array([[0.0, 0.5, 1.0],
+ [1.5, 2.0, -1],
+ [-1, -1, -1]], dtype=float))
df1[df1 > 2.0 * df2] = -1
assert_frame_equal(df1, expected)
@@ -7535,12 +7530,12 @@ def test_dot(self):
expected = DataFrame(np.dot(a.values, b.values),
index=['a', 'b', 'c'],
columns=['one', 'two'])
- #Check alignment
+ # Check alignment
b1 = b.reindex(index=reversed(b.index))
result = a.dot(b)
assert_frame_equal(result, expected)
- #Check series argument
+ # Check series argument
result = a.dot(b['one'])
assert_series_equal(result, expected['one'])
result = a.dot(b1['one'])
@@ -7555,8 +7550,8 @@ def test_dot(self):
self.assertRaises(Exception, a.dot, row[:-1])
- a = np.random.rand(1,5)
- b = np.random.rand(5,1)
+ a = np.random.rand(1, 5)
+ b = np.random.rand(5, 1)
A = DataFrame(a)
B = DataFrame(b)
@@ -7577,7 +7572,8 @@ def test_idxmin(self):
for axis in [0, 1]:
for df in [frame, self.intframe]:
result = df.idxmin(axis=axis, skipna=skipna)
- expected = df.apply(Series.idxmin, axis=axis, skipna=skipna)
+ expected = df.apply(
+ Series.idxmin, axis=axis, skipna=skipna)
assert_series_equal(result, expected)
self.assertRaises(Exception, frame.idxmin, axis=2)
@@ -7590,14 +7586,15 @@ def test_idxmax(self):
for axis in [0, 1]:
for df in [frame, self.intframe]:
result = df.idxmax(axis=axis, skipna=skipna)
- expected = df.apply(Series.idxmax, axis=axis, skipna=skipna)
+ expected = df.apply(
+ Series.idxmax, axis=axis, skipna=skipna)
assert_series_equal(result, expected)
self.assertRaises(Exception, frame.idxmax, axis=2)
def test_stale_cached_series_bug_473(self):
- Y = DataFrame(np.random.random((4, 4)), index=('a', 'b','c','d'),
- columns=('e','f','g','h'))
+ Y = DataFrame(np.random.random((4, 4)), index=('a', 'b', 'c', 'd'),
+ columns=('e', 'f', 'g', 'h'))
repr(Y)
Y['e'] = Y['e'].astype('object')
Y['g']['c'] = np.NaN
@@ -7626,7 +7623,6 @@ def test_any_all(self):
self._check_bool_op('any', np.any, has_skipna=True, has_bool_only=True)
self._check_bool_op('all', np.all, has_skipna=True, has_bool_only=True)
-
df = DataFrame(randn(10, 4)) > 0
df.any(1)
df.all(1)
@@ -7664,7 +7660,7 @@ def test_consolidate_datetime64(self):
2012-06-25 08:00,2012-06-26 12:00,0
2012-06-26 12:00,2012-06-27 08:00,77
"""
- df = read_csv(StringIO(data), parse_dates=[0,1])
+ df = read_csv(StringIO(data), parse_dates=[0, 1])
ser_starting = df.starting
ser_starting.index = ser_starting.values
@@ -7706,7 +7702,7 @@ def wrapper(x):
result1 = f(axis=1, skipna=False)
assert_series_equal(result0, frame.apply(wrapper))
assert_series_equal(result1, frame.apply(wrapper, axis=1),
- check_dtype=False) # HACK: win32
+ check_dtype=False) # HACK: win32
else:
skipna_wrapper = alternative
wrapper = alternative
@@ -7755,11 +7751,11 @@ def __nonzero__(self):
self.assert_(r1.all())
def test_strange_column_corruption_issue(self):
- df = DataFrame(index=[0,1])
+ df = DataFrame(index=[0, 1])
df[0] = nan
wasCol = {}
# uncommenting these makes the results match
- #for col in xrange(100, 200):
+ # for col in xrange(100, 200):
# wasCol[col] = 1
# df[col] = nan
@@ -7782,5 +7778,5 @@ def test_strange_column_corruption_issue(self):
import nose
# nose.runmodule(argv=[__file__,'-vvs','-x', '--ipdb-failure'],
# exit=False)
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py
index 79f264840f37c..c3c4ddc1614b3 100644
--- a/pandas/tests/test_graphics.py
+++ b/pandas/tests/test_graphics.py
@@ -61,7 +61,7 @@ def test_plot(self):
_check_plot_works(self.series[:5].plot, kind='barh')
_check_plot_works(self.series[:10].plot, kind='barh')
- Series(np.random.randn(10)).plot(kind='bar',color='black')
+ Series(np.random.randn(10)).plot(kind='bar', color='black')
@slow
def test_bar_colors(self):
@@ -128,7 +128,7 @@ def test_rotation(self):
@slow
def test_irregular_datetime(self):
rng = date_range('1/1/2000', '3/1/2000')
- rng = rng[[0,1,2,3,5,9,10,11,12]]
+ rng = rng[[0, 1, 2, 3, 5, 9, 10, 11, 12]]
ser = Series(np.random.randn(len(rng)), rng)
ax = ser.plot()
xp = datetime(1999, 1, 1).toordinal()
@@ -166,12 +166,13 @@ def test_bootstrap_plot(self):
from pandas.tools.plotting import bootstrap_plot
_check_plot_works(bootstrap_plot, self.ts, size=10)
+
class TestDataFramePlots(unittest.TestCase):
@classmethod
def setUpClass(cls):
- #import sys
- #if 'IPython' in sys.modules:
+ # import sys
+ # if 'IPython' in sys.modules:
# raise nose.SkipTest
try:
@@ -187,7 +188,7 @@ def test_plot(self):
_check_plot_works(df.plot, subplots=True)
_check_plot_works(df.plot, subplots=True, use_index=False)
- df = DataFrame({'x':[1,2], 'y':[3,4]})
+ df = DataFrame({'x': [1, 2], 'y': [3, 4]})
self._check_plot_fails(df.plot, kind='line', blarg=True)
df = DataFrame(np.random.rand(10, 3),
@@ -215,7 +216,7 @@ def test_plot(self):
(u'\u03b4', 6),
(u'\u03b4', 7)], names=['i0', 'i1'])
columns = MultiIndex.from_tuples([('bar', u'\u0394'),
- ('bar', u'\u0395')], names=['c0', 'c1'])
+ ('bar', u'\u0395')], names=['c0', 'c1'])
df = DataFrame(np.random.randint(0, 10, (8, 2)),
columns=columns,
index=index)
@@ -265,7 +266,6 @@ def test_plot_xy(self):
# columns.inferred_type == 'mixed'
# TODO add MultiIndex test
-
@slow
def test_xcompat(self):
import pandas as pd
@@ -289,7 +289,7 @@ def test_xcompat(self):
self.assert_(isinstance(lines[0].get_xdata(), PeriodIndex))
plt.close('all')
- #useful if you're plotting a bunch together
+ # useful if you're plotting a bunch together
with pd.plot_params.use('x_compat', True):
ax = df.plot()
lines = ax.get_lines()
@@ -361,15 +361,15 @@ def test_plot_bar(self):
@slow
def test_bar_stacked_center(self):
- #GH2157
- df = DataFrame({'A' : [3] * 5, 'B' : range(5)}, index = range(5))
+ # GH2157
+ df = DataFrame({'A': [3] * 5, 'B': range(5)}, index=range(5))
ax = df.plot(kind='bar', stacked='True', grid=True)
self.assertEqual(ax.xaxis.get_ticklocs()[0],
ax.patches[0].get_x() + ax.patches[0].get_width() / 2)
@slow
def test_bar_center(self):
- df = DataFrame({'A' : [3] * 5, 'B' : range(5)}, index = range(5))
+ df = DataFrame({'A': [3] * 5, 'B': range(5)}, index=range(5))
ax = df.plot(kind='bar', grid=True)
self.assertEqual(ax.xaxis.get_ticklocs()[0],
ax.patches[0].get_x() + ax.patches[0].get_width())
@@ -395,8 +395,8 @@ def test_boxplot(self):
_check_plot_works(df.boxplot, notch=1)
_check_plot_works(df.boxplot, by='indic', notch=1)
- df = DataFrame(np.random.rand(10,2), columns=['Col1', 'Col2'] )
- df['X'] = Series(['A','A','A','A','A','B','B','B','B','B'])
+ df = DataFrame(np.random.rand(10, 2), columns=['Col1', 'Col2'])
+ df['X'] = Series(['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B'])
_check_plot_works(df.boxplot, by='X')
@slow
@@ -415,7 +415,7 @@ def test_hist(self):
_check_plot_works(df.hist)
_check_plot_works(df.hist, grid=False)
- #make sure layout is handled
+ # make sure layout is handled
df = DataFrame(np.random.randn(100, 3))
_check_plot_works(df.hist)
axes = df.hist(grid=False)
@@ -424,14 +424,14 @@ def test_hist(self):
df = DataFrame(np.random.randn(100, 1))
_check_plot_works(df.hist)
- #make sure layout is handled
+ # make sure layout is handled
df = DataFrame(np.random.randn(100, 6))
_check_plot_works(df.hist)
- #make sure sharex, sharey is handled
+ # make sure sharex, sharey is handled
_check_plot_works(df.hist, sharex=True, sharey=True)
- #make sure kwargs are handled
+ # make sure kwargs are handled
ser = df[0]
xf, yf = 20, 20
xrot, yrot = 30, 30
@@ -461,6 +461,7 @@ def test_scatter(self):
df = DataFrame(np.random.randn(100, 4))
import pandas.tools.plotting as plt
+
def scat(**kwds):
return plt.scatter_matrix(df, **kwds)
_check_plot_works(scat)
@@ -501,7 +502,8 @@ def test_parallel_coordinates(self):
_check_plot_works(parallel_coordinates, df, 'Name',
colors=['dodgerblue', 'aquamarine', 'seagreen'])
- df = read_csv(path, header=None, skiprows=1, names=[1,2,4,8, 'Name'])
+        df = read_csv(path, header=None, skiprows=1,
+                      names=[1, 2, 4, 8, 'Name'])
_check_plot_works(parallel_coordinates, df, 'Name', use_columns=True)
_check_plot_works(parallel_coordinates, df, 'Name',
xticks=[1, 5, 25, 125])
@@ -604,8 +606,8 @@ class TestDataFrameGroupByPlots(unittest.TestCase):
@classmethod
def setUpClass(cls):
- #import sys
- #if 'IPython' in sys.modules:
+ # import sys
+ # if 'IPython' in sys.modules:
# raise nose.SkipTest
try:
@@ -616,8 +618,8 @@ def setUpClass(cls):
@slow
def test_boxplot(self):
- df = DataFrame(np.random.rand(10,2), columns=['Col1', 'Col2'] )
- df['X'] = Series(['A','A','A','A','A','B','B','B','B','B'])
+ df = DataFrame(np.random.rand(10, 2), columns=['Col1', 'Col2'])
+ df['X'] = Series(['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B'])
grouped = df.groupby(by='X')
_check_plot_works(grouped.boxplot)
_check_plot_works(grouped.boxplot, subplots=False)
@@ -632,8 +634,6 @@ def test_boxplot(self):
_check_plot_works(grouped.boxplot)
_check_plot_works(grouped.boxplot, subplots=False)
-
-
@slow
def test_series_plot_color_kwargs(self):
# #1890
@@ -650,7 +650,8 @@ def test_time_series_plot_color_kwargs(self):
import matplotlib.pyplot as plt
plt.close('all')
- ax = Series(np.arange(12) + 1, index=date_range('1/1/2000', periods=12)).plot(color='green')
+    ax = Series(np.arange(12) + 1,
+                index=date_range('1/1/2000', periods=12)).plot(color='green')
line = ax.get_lines()[0]
self.assert_(line.get_color() == 'green')
@@ -672,6 +673,7 @@ def test_grouped_hist(self):
PNG_PATH = 'tmp.png'
+
def _check_plot_works(f, *args, **kwargs):
import matplotlib.pyplot as plt
@@ -691,10 +693,11 @@ def _check_plot_works(f, *args, **kwargs):
plt.savefig(PNG_PATH)
os.remove(PNG_PATH)
+
def curpath():
pth, _ = os.path.split(os.path.abspath(__file__))
return pth
if __name__ == '__main__':
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index ec7f5421a0aa9..b9ffcc553bca6 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -24,12 +24,13 @@
import pandas.util.testing as tm
+
def commonSetUp(self):
self.dateRange = bdate_range('1/1/2005', periods=250)
self.stringIndex = Index([rands(8).upper() for x in xrange(250)])
self.groupId = Series([x[0] for x in self.stringIndex],
- index=self.stringIndex)
+ index=self.stringIndex)
self.groupDict = dict((k, v) for k, v in self.groupId.iteritems())
self.columnIndex = Index(['A', 'B', 'C', 'D', 'E'])
@@ -41,6 +42,7 @@ def commonSetUp(self):
self.timeMatrix = DataFrame(randMat, columns=self.columnIndex,
index=self.dateRange)
+
class TestGroupBy(unittest.TestCase):
_multiprocess_can_split_ = True
@@ -53,12 +55,12 @@ def setUp(self):
self.frame = DataFrame(self.seriesd)
self.tsframe = DataFrame(self.tsd)
- self.df = DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
- 'foo', 'bar', 'foo', 'foo'],
- 'B' : ['one', 'one', 'two', 'three',
- 'two', 'two', 'one', 'three'],
- 'C' : np.random.randn(8),
- 'D' : np.random.randn(8)})
+ self.df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
+ 'foo', 'bar', 'foo', 'foo'],
+ 'B': ['one', 'one', 'two', 'three',
+ 'two', 'two', 'one', 'three'],
+ 'C': np.random.randn(8),
+ 'D': np.random.randn(8)})
index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
['one', 'two', 'three']],
@@ -68,18 +70,18 @@ def setUp(self):
self.mframe = DataFrame(np.random.randn(10, 3), index=index,
columns=['A', 'B', 'C'])
- self.three_group = DataFrame({'A' : ['foo', 'foo', 'foo', 'foo',
- 'bar', 'bar', 'bar', 'bar',
- 'foo', 'foo', 'foo'],
- 'B' : ['one', 'one', 'one', 'two',
- 'one', 'one', 'one', 'two',
- 'two', 'two', 'one'],
- 'C' : ['dull', 'dull', 'shiny', 'dull',
- 'dull', 'shiny', 'shiny', 'dull',
- 'shiny', 'shiny', 'shiny'],
- 'D' : np.random.randn(11),
- 'E' : np.random.randn(11),
- 'F' : np.random.randn(11)})
+ self.three_group = DataFrame({'A': ['foo', 'foo', 'foo', 'foo',
+ 'bar', 'bar', 'bar', 'bar',
+ 'foo', 'foo', 'foo'],
+ 'B': ['one', 'one', 'one', 'two',
+ 'one', 'one', 'one', 'two',
+ 'two', 'two', 'one'],
+ 'C': ['dull', 'dull', 'shiny', 'dull',
+ 'dull', 'shiny', 'shiny', 'dull',
+ 'shiny', 'shiny', 'shiny'],
+ 'D': np.random.randn(11),
+ 'E': np.random.randn(11),
+ 'F': np.random.randn(11)})
def test_basic(self):
data = Series(np.arange(9) // 3, index=np.arange(9))
@@ -96,7 +98,7 @@ def test_basic(self):
agged = grouped.aggregate(np.mean)
self.assertEqual(agged[1], 1)
- assert_series_equal(agged, grouped.agg(np.mean)) # shorthand
+ assert_series_equal(agged, grouped.agg(np.mean)) # shorthand
assert_series_equal(agged, grouped.mean())
# Cython only returning floating point for now...
@@ -111,13 +113,13 @@ def test_basic(self):
# complex agg
agged = grouped.aggregate([np.mean, np.std])
- agged = grouped.aggregate({'one' : np.mean,
- 'two' : np.std})
+ agged = grouped.aggregate({'one': np.mean,
+ 'two': np.std})
group_constants = {
- 0 : 10,
- 1 : 20,
- 2 : 30
+ 0: 10,
+ 1: 20,
+ 2: 30
}
agged = grouped.agg(lambda x: group_constants[x.name] + x.mean())
self.assertEqual(agged[1], 21)
@@ -176,7 +178,7 @@ def test_groupby_dict_mapping(self):
assert_series_equal(result, expected)
s = Series([1., 2., 3., 4.], index=list('abcd'))
- mapping = {'a' : 0, 'b' : 0, 'c' : 1, 'd' : 1}
+ mapping = {'a': 0, 'b': 0, 'c': 1, 'd': 1}
result = s.groupby(mapping).mean()
result2 = s.groupby(mapping).agg(np.mean)
@@ -216,10 +218,10 @@ def test_agg_datetimes_mixed(self):
'date': [x[1] for x in data],
'value': [x[2] for x in data]})
- df1['weights'] = df1['value']/df1['value'].sum()
+ df1['weights'] = df1['value'] / df1['value'].sum()
gb1 = df1.groupby('date').aggregate(np.sum)
- df2['weights'] = df1['value']/df1['value'].sum()
+ df2['weights'] = df1['value'] / df1['value'].sum()
gb2 = df2.groupby('date').aggregate(np.sum)
assert(len(gb1) == len(gb2))
@@ -294,7 +296,7 @@ def test_agg_python_multiindex(self):
def test_apply_describe_bug(self):
grouped = self.mframe.groupby(level='first')
- result = grouped.describe() # it works!
+ result = grouped.describe() # it works!
def test_len(self):
df = tm.makeTimeDataFrame()
@@ -311,20 +313,21 @@ def test_len(self):
def test_groups(self):
grouped = self.df.groupby(['A'])
groups = grouped.groups
- self.assert_(groups is grouped.groups) # caching works
+ self.assert_(groups is grouped.groups) # caching works
for k, v in grouped.groups.iteritems():
self.assert_((self.df.ix[v]['A'] == k).all())
grouped = self.df.groupby(['A', 'B'])
groups = grouped.groups
- self.assert_(groups is grouped.groups) # caching works
+ self.assert_(groups is grouped.groups) # caching works
for k, v in grouped.groups.iteritems():
self.assert_((self.df.ix[v]['A'] == k[0]).all())
self.assert_((self.df.ix[v]['B'] == k[1]).all())
def test_aggregate_str_func(self):
from pandas.util.compat import OrderedDict
+
def _check_results(grouped):
# single series
result = grouped['A'].agg('std')
@@ -337,10 +340,11 @@ def _check_results(grouped):
assert_frame_equal(result, expected)
# group frame by function dict
- result = grouped.agg(OrderedDict([['A' , 'var'], ['B' , 'std'], ['C' , 'mean']]))
+ result = grouped.agg(
+ OrderedDict([['A', 'var'], ['B', 'std'], ['C', 'mean']]))
expected = DataFrame(OrderedDict([['A', grouped['A'].var()],
- ['B', grouped['B'].std()],
- ['C', grouped['C'].mean()]]))
+ ['B', grouped['B'].std()],
+ ['C', grouped['C'].mean()]]))
assert_frame_equal(result, expected)
by_weekday = self.tsframe.groupby(lambda x: x.weekday())
@@ -355,6 +359,7 @@ def test_aggregate_item_by_item(self):
df = self.df.copy()
df['E'] = ['a'] * len(self.df)
grouped = self.df.groupby('A')
+
def aggfun(ser):
return len(ser + 'a')
result = grouped.agg(aggfun)
@@ -376,7 +381,7 @@ def aggfun(ser):
def test_basic_regression(self):
# regression
- T = [1.0*x for x in range(1,10) *10][:1095]
+ T = [1.0 * x for x in range(1, 10) * 10][:1095]
result = Series(T, range(0, len(T)))
groupings = np.random.random((1100,))
@@ -415,7 +420,7 @@ def test_transform_broadcast(self):
assert_fp_equal(res[col], agged[col])
# group columns
- grouped = self.tsframe.groupby({'A' : 0, 'B' : 0, 'C' : 1, 'D' : 1},
+ grouped = self.tsframe.groupby({'A': 0, 'B': 0, 'C': 1, 'D': 1},
axis=1)
result = grouped.transform(np.mean)
self.assert_(result.index.equals(self.tsframe.index))
@@ -530,18 +535,18 @@ def test_series_agg_multikey(self):
assert_series_equal(result, expected)
def test_series_agg_multi_pure_python(self):
- data = DataFrame({'A' : ['foo', 'foo', 'foo', 'foo',
- 'bar', 'bar', 'bar', 'bar',
- 'foo', 'foo', 'foo'],
- 'B' : ['one', 'one', 'one', 'two',
- 'one', 'one', 'one', 'two',
- 'two', 'two', 'one'],
- 'C' : ['dull', 'dull', 'shiny', 'dull',
- 'dull', 'shiny', 'shiny', 'dull',
- 'shiny', 'shiny', 'shiny'],
- 'D' : np.random.randn(11),
- 'E' : np.random.randn(11),
- 'F' : np.random.randn(11)})
+ data = DataFrame({'A': ['foo', 'foo', 'foo', 'foo',
+ 'bar', 'bar', 'bar', 'bar',
+ 'foo', 'foo', 'foo'],
+ 'B': ['one', 'one', 'one', 'two',
+ 'one', 'one', 'one', 'two',
+ 'two', 'two', 'one'],
+ 'C': ['dull', 'dull', 'shiny', 'dull',
+ 'dull', 'shiny', 'shiny', 'dull',
+ 'shiny', 'shiny', 'shiny'],
+ 'D': np.random.randn(11),
+ 'E': np.random.randn(11),
+ 'F': np.random.randn(11)})
def bad(x):
assert(len(x.base) > 0)
@@ -565,8 +570,8 @@ def test_frame_describe_multikey(self):
expected = grouped[col].describe()
assert_series_equal(result[col], expected)
- groupedT = self.tsframe.groupby({'A' : 0, 'B' : 0,
- 'C' : 1, 'D' : 1}, axis=1)
+ groupedT = self.tsframe.groupby({'A': 0, 'B': 0,
+ 'C': 1, 'D': 1}, axis=1)
result = groupedT.describe()
for name, group in groupedT:
@@ -622,7 +627,7 @@ def test_grouping_is_iterable(self):
def test_frame_groupby_columns(self):
mapping = {
- 'A' : 0, 'B' : 0, 'C' : 1, 'D' : 1
+ 'A': 0, 'B': 0, 'C': 1, 'D': 1
}
grouped = self.tsframe.groupby(mapping, axis=1)
@@ -652,7 +657,7 @@ def test_frame_set_name_single(self):
result = grouped.agg(np.mean)
self.assert_(result.index.name == 'A')
- result = grouped.agg({'C' : np.mean, 'D' : np.std})
+ result = grouped.agg({'C': np.mean, 'D': np.std})
self.assert_(result.index.name == 'A')
result = grouped['C'].mean()
@@ -662,7 +667,7 @@ def test_frame_set_name_single(self):
result = grouped['C'].agg([np.mean, np.std])
self.assert_(result.index.name == 'A')
- result = grouped['C'].agg({'foo' : np.mean, 'bar' : np.std})
+ result = grouped['C'].agg({'foo': np.mean, 'bar': np.std})
self.assert_(result.index.name == 'A')
def test_multi_iter(self):
@@ -686,9 +691,9 @@ def test_multi_iter(self):
def test_multi_iter_frame(self):
k1 = np.array(['b', 'b', 'b', 'a', 'a', 'a'])
k2 = np.array(['1', '2', '1', '2', '1', '2'])
- df = DataFrame({'v1' : np.random.randn(6),
- 'v2' : np.random.randn(6),
- 'k1' : k1, 'k2' : k2},
+ df = DataFrame({'v1': np.random.randn(6),
+ 'v2': np.random.randn(6),
+ 'k1': k1, 'k2': k2},
index=['one', 'two', 'three', 'four', 'five', 'six'])
grouped = df.groupby(['k1', 'k2'])
@@ -743,10 +748,10 @@ def test_multi_func(self):
expected.ix[:, ['C', 'D']])
# some "groups" with no data
- df = DataFrame({'v1' : np.random.randn(6),
- 'v2' : np.random.randn(6),
- 'k1' : np.array(['b', 'b', 'b', 'a', 'a', 'a']),
- 'k2' : np.array(['1', '1', '1', '2', '2', '2'])},
+ df = DataFrame({'v1': np.random.randn(6),
+ 'v2': np.random.randn(6),
+ 'k1': np.array(['b', 'b', 'b', 'a', 'a', 'a']),
+ 'k2': np.array(['1', '1', '1', '2', '2', '2'])},
index=['one', 'two', 'three', 'four', 'five', 'six'])
# only verify that it works for now
grouped = df.groupby(['k1', 'k2'])
@@ -756,23 +761,23 @@ def test_multi_key_multiple_functions(self):
grouped = self.df.groupby(['A', 'B'])['C']
agged = grouped.agg([np.mean, np.std])
- expected = DataFrame({'mean' : grouped.agg(np.mean),
- 'std' : grouped.agg(np.std)})
+ expected = DataFrame({'mean': grouped.agg(np.mean),
+ 'std': grouped.agg(np.std)})
assert_frame_equal(agged, expected)
def test_frame_multi_key_function_list(self):
- data = DataFrame({'A' : ['foo', 'foo', 'foo', 'foo',
- 'bar', 'bar', 'bar', 'bar',
- 'foo', 'foo', 'foo'],
- 'B' : ['one', 'one', 'one', 'two',
- 'one', 'one', 'one', 'two',
- 'two', 'two', 'one'],
- 'C' : ['dull', 'dull', 'shiny', 'dull',
- 'dull', 'shiny', 'shiny', 'dull',
- 'shiny', 'shiny', 'shiny'],
- 'D' : np.random.randn(11),
- 'E' : np.random.randn(11),
- 'F' : np.random.randn(11)})
+ data = DataFrame({'A': ['foo', 'foo', 'foo', 'foo',
+ 'bar', 'bar', 'bar', 'bar',
+ 'foo', 'foo', 'foo'],
+ 'B': ['one', 'one', 'one', 'two',
+ 'one', 'one', 'one', 'two',
+ 'two', 'two', 'one'],
+ 'C': ['dull', 'dull', 'shiny', 'dull',
+ 'dull', 'shiny', 'shiny', 'dull',
+ 'shiny', 'shiny', 'shiny'],
+ 'D': np.random.randn(11),
+ 'E': np.random.randn(11),
+ 'F': np.random.randn(11)})
grouped = data.groupby(['A', 'B'])
funcs = [np.mean, np.std]
@@ -827,15 +832,15 @@ def test_groupby_as_index_agg(self):
expected = grouped.mean()
assert_frame_equal(result, expected)
- result2 = grouped.agg(OrderedDict([['C' , np.mean], ['D' , np.sum]]))
+ result2 = grouped.agg(OrderedDict([['C', np.mean], ['D', np.sum]]))
expected2 = grouped.mean()
expected2['D'] = grouped.sum()['D']
assert_frame_equal(result2, expected2)
grouped = self.df.groupby('A', as_index=True)
expected3 = grouped['C'].sum()
- expected3 = DataFrame(expected3).rename(columns={'C' : 'Q'})
- result3 = grouped['C'].agg({'Q' : np.sum})
+ expected3 = DataFrame(expected3).rename(columns={'C': 'Q'})
+ result3 = grouped['C'].agg({'Q': np.sum})
assert_frame_equal(result3, expected3)
# multi-key
@@ -846,14 +851,14 @@ def test_groupby_as_index_agg(self):
expected = grouped.mean()
assert_frame_equal(result, expected)
- result2 = grouped.agg(OrderedDict([['C' , np.mean], ['D' , np.sum]]))
+ result2 = grouped.agg(OrderedDict([['C', np.mean], ['D', np.sum]]))
expected2 = grouped.mean()
expected2['D'] = grouped.sum()['D']
assert_frame_equal(result2, expected2)
expected3 = grouped['C'].sum()
- expected3 = DataFrame(expected3).rename(columns={'C' : 'Q'})
- result3 = grouped['C'].agg({'Q' : np.sum})
+ expected3 = DataFrame(expected3).rename(columns={'C': 'Q'})
+ result3 = grouped['C'].agg({'Q': np.sum})
assert_frame_equal(result3, expected3)
def test_multifunc_select_col_integer_cols(self):
@@ -861,7 +866,7 @@ def test_multifunc_select_col_integer_cols(self):
df.columns = np.arange(len(df.columns))
# it works!
- result = df.groupby(1, as_index=False)[2].agg({'Q' : np.mean})
+ result = df.groupby(1, as_index=False)[2].agg({'Q': np.mean})
def test_as_index_series_return_frame(self):
grouped = self.df.groupby('A', as_index=False)
@@ -978,7 +983,7 @@ def test_omit_nuisance(self):
assert_frame_equal(result, expected)
# won't work with axis = 1
- grouped = df.groupby({'A' : 0, 'C' : 0, 'D' : 1, 'E' : 1}, axis=1)
+ grouped = df.groupby({'A': 0, 'C': 0, 'D': 1, 'E': 1}, axis=1)
result = self.assertRaises(TypeError, grouped.agg,
lambda x: x.sum(1, numeric_only=False))
@@ -991,11 +996,11 @@ def test_omit_nuisance_python_multiple(self):
def test_empty_groups_corner(self):
# handle empty groups
- df = DataFrame({'k1' : np.array(['b', 'b', 'b', 'a', 'a', 'a']),
- 'k2' : np.array(['1', '1', '1', '2', '2', '2']),
- 'k3' : ['foo', 'bar'] * 3,
- 'v1' : np.random.randn(6),
- 'v2' : np.random.randn(6)})
+ df = DataFrame({'k1': np.array(['b', 'b', 'b', 'a', 'a', 'a']),
+ 'k2': np.array(['1', '1', '1', '2', '2', '2']),
+ 'k3': ['foo', 'bar'] * 3,
+ 'v1': np.random.randn(6),
+ 'v2': np.random.randn(6)})
grouped = df.groupby(['k1', 'k2'])
result = grouped.agg(np.mean)
@@ -1047,9 +1052,9 @@ def test_nonsense_func(self):
self.assertRaises(Exception, df.groupby, lambda x: x + 'foo')
def test_cythonized_aggers(self):
- data = {'A' : [0, 0, 0, 0, 1, 1, 1, 1, 1, 1., nan, nan],
- 'B' : ['A', 'B'] * 6,
- 'C' : np.random.randn(12)}
+ data = {'A': [0, 0, 0, 0, 1, 1, 1, 1, 1, 1., nan, nan],
+ 'B': ['A', 'B'] * 6,
+ 'C': np.random.randn(12)}
df = DataFrame(data)
df['C'][2:10:2] = nan
@@ -1059,7 +1064,7 @@ def _testit(op):
exp = {}
for cat, group in grouped:
exp[cat] = op(group['C'])
- exp = DataFrame({'C' : exp})
+ exp = DataFrame({'C': exp})
result = op(grouped)
assert_frame_equal(result, exp)
@@ -1097,7 +1102,7 @@ def test_cython_agg_nothing_to_agg(self):
def test_cython_agg_frame_columns(self):
# #2113
- df = DataFrame({'x': [1,2,3], 'y': [3,4,5]})
+ df = DataFrame({'x': [1, 2, 3], 'y': [3, 4, 5]})
result = df.groupby(level=0, axis='columns').mean()
result = df.groupby(level=0, axis='columns').mean()
@@ -1193,9 +1198,9 @@ def test_groupby_level_mapper(self):
frame = self.mframe
deleveled = frame.reset_index()
- mapper0 = {'foo' : 0, 'bar' : 0,
- 'baz' : 1, 'qux' : 1}
- mapper1 = {'one' : 0, 'two' : 0, 'three' : 1}
+ mapper0 = {'foo': 0, 'bar': 0,
+ 'baz': 1, 'qux': 1}
+ mapper1 = {'one': 0, 'two': 0, 'three': 1}
result0 = frame.groupby(mapper0, level=0).sum()
result1 = frame.groupby(mapper1, level=1).sum()
@@ -1210,7 +1215,8 @@ def test_groupby_level_mapper(self):
def test_groupby_level_0_nonmulti(self):
# #1313
- a = Series([1,2,3,10,4,5,20,6], Index([1,2,3,1,4,5,2,6], name='foo'))
+        a = Series([1, 2, 3, 10, 4, 5, 20, 6],
+                   Index([1, 2, 3, 1, 4, 5, 2, 6], name='foo'))
result = a.groupby(level=0).sum()
self.assertEquals(result.index.name, a.index.name)
@@ -1236,9 +1242,9 @@ def test_cython_fail_agg(self):
def test_apply_series_to_frame(self):
def f(piece):
- return DataFrame({'value' : piece,
- 'demeaned' : piece - piece.mean(),
- 'logged' : np.log(piece)})
+ return DataFrame({'value': piece,
+ 'demeaned': piece - piece.mean(),
+ 'logged': np.log(piece)})
dr = bdate_range('1/1/2000', periods=100)
ts = Series(np.random.randn(100), index=dr)
@@ -1315,10 +1321,10 @@ def test_apply_no_name_column_conflict(self):
grouped.apply(lambda x: x.sort('value'))
def test_groupby_series_indexed_differently(self):
- s1 = Series([5.0,-9.0,4.0,100.,-5.,55.,6.7],
- index=Index(['a','b','c','d','e','f','g']))
- s2 = Series([1.0,1.0,4.0,5.0,5.0,7.0],
- index=Index(['a','b','d','f','g','h']))
+ s1 = Series([5.0, -9.0, 4.0, 100., -5., 55., 6.7],
+ index=Index(['a', 'b', 'c', 'd', 'e', 'f', 'g']))
+ s2 = Series([1.0, 1.0, 4.0, 5.0, 5.0, 7.0],
+ index=Index(['a', 'b', 'd', 'f', 'g', 'h']))
grouped = s1.groupby(s2)
agged = grouped.mean()
@@ -1402,7 +1408,7 @@ def f(x, q=None):
# values = np.random.randn(10)
# shape = (5, 5)
# label_list = [np.array([0, 0, 0, 0, 1, 1, 1, 1, 2, 2], dtype=np.int32),
- # np.array([1, 2, 3, 4, 0, 1, 2, 3, 3, 4], dtype=np.int32)]
+ # np.array([1, 2, 3, 4, 0, 1, 2, 3, 3, 4], dtype=np.int32)]
# lib.group_aggregate(values, label_list, shape)
@@ -1430,9 +1436,9 @@ def test_grouping_ndarray(self):
assert_frame_equal(result, expected)
def test_apply_typecast_fail(self):
- df = DataFrame({'d' : [1.,1.,1.,2.,2.,2.],
- 'c' : np.tile(['a','b','c'], 2),
- 'v' : np.arange(1., 7.)})
+ df = DataFrame({'d': [1., 1., 1., 2., 2., 2.],
+ 'c': np.tile(['a', 'b', 'c'], 2),
+ 'v': np.arange(1., 7.)})
def f(group):
v = group['v']
@@ -1449,9 +1455,9 @@ def f(group):
def test_apply_multiindex_fail(self):
index = MultiIndex.from_arrays([[0, 0, 0, 1, 1, 1],
[1, 2, 3, 1, 2, 3]])
- df = DataFrame({'d' : [1.,1.,1.,2.,2.,2.],
- 'c' : np.tile(['a','b','c'], 2),
- 'v' : np.arange(1., 7.)}, index=index)
+ df = DataFrame({'d': [1., 1., 1., 2., 2., 2.],
+ 'c': np.tile(['a', 'b', 'c'], 2),
+ 'v': np.arange(1., 7.)}, index=index)
def f(group):
v = group['v']
@@ -1502,9 +1508,9 @@ def f(g):
def test_transform_mixed_type(self):
index = MultiIndex.from_arrays([[0, 0, 0, 1, 1, 1],
[1, 2, 3, 1, 2, 3]])
- df = DataFrame({'d' : [1.,1.,1.,2.,2.,2.],
- 'c' : np.tile(['a','b','c'], 2),
- 'v' : np.arange(1., 7.)}, index=index)
+ df = DataFrame({'d': [1., 1., 1., 2., 2., 2.],
+ 'c': np.tile(['a', 'b', 'c'], 2),
+ 'v': np.arange(1., 7.)}, index=index)
def f(group):
group['g'] = group['d'] * 2
@@ -1544,7 +1550,7 @@ def test_groupby_series_with_name(self):
result = self.df.groupby([self.df['A'], self.df['B']]).mean()
result2 = self.df.groupby([self.df['A'], self.df['B']],
- as_index=False).mean()
+ as_index=False).mean()
self.assertEquals(result.index.names, ['A', 'B'])
self.assert_('A' in result2)
self.assert_('B' in result2)
@@ -1611,8 +1617,8 @@ def test_groupby_list_infer_array_like(self):
self.assertRaises(Exception, self.df.groupby, list(self.df['A'][:-1]))
# pathological case of ambiguity
- df = DataFrame({'foo' : [0, 1], 'bar' : [3, 4],
- 'val' : np.random.randn(2)})
+ df = DataFrame({'foo': [0, 1], 'bar': [3, 4],
+ 'val': np.random.randn(2)})
result = df.groupby(['foo', 'bar']).mean()
expected = df.groupby([df['foo'], df['bar']]).mean()[['val']]
@@ -1646,7 +1652,7 @@ def _check_work(gp):
def test_panel_groupby(self):
self.panel = tm.makePanel()
tm.add_nans(self.panel)
- grouped = self.panel.groupby({'ItemA' : 0, 'ItemB' : 0, 'ItemC' : 1},
+ grouped = self.panel.groupby({'ItemA': 0, 'ItemB': 0, 'ItemC': 1},
axis='items')
agged = grouped.mean()
agged2 = grouped.agg(lambda x: x.mean('items'))
@@ -1660,7 +1666,7 @@ def test_panel_groupby(self):
self.assert_(np.array_equal(agged.major_axis, [1, 2]))
- grouped = self.panel.groupby({'A' : 0, 'B' : 0, 'C' : 1, 'D' : 1},
+ grouped = self.panel.groupby({'A': 0, 'B': 0, 'C': 1, 'D': 1},
axis='minor')
agged = grouped.mean()
self.assert_(np.array_equal(agged.minor_axis, [0, 1]))
@@ -1696,9 +1702,9 @@ def test_int32_overflow(self):
B = np.concatenate((np.arange(10000), np.arange(10000),
np.arange(5000)))
A = np.arange(25000)
- df = DataFrame({'A' : A, 'B' : B,
- 'C' : A, 'D' : B,
- 'E' : np.random.randn(25000)})
+ df = DataFrame({'A': A, 'B': B,
+ 'C': A, 'D': B,
+ 'E': np.random.randn(25000)})
left = df.groupby(['A', 'B', 'C', 'D']).sum()
right = df.groupby(['D', 'C', 'B', 'A']).sum()
@@ -1708,11 +1714,11 @@ def test_int64_overflow(self):
B = np.concatenate((np.arange(1000), np.arange(1000),
np.arange(500)))
A = np.arange(2500)
- df = DataFrame({'A' : A, 'B' : B,
- 'C' : A, 'D' : B,
- 'E' : A, 'F' : B,
- 'G' : A, 'H' : B,
- 'values' : np.random.randn(2500)})
+ df = DataFrame({'A': A, 'B': B,
+ 'C': A, 'D': B,
+ 'E': A, 'F': B,
+ 'G': A, 'H': B,
+ 'values': np.random.randn(2500)})
lg = df.groupby(['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'])
rg = df.groupby(['H', 'G', 'F', 'E', 'D', 'C', 'B', 'A'])
@@ -1736,10 +1742,10 @@ def test_int64_overflow(self):
self.assert_(len(left) == len(right))
def test_groupby_sort_multi(self):
- df = DataFrame({'a' : ['foo', 'bar', 'baz'],
- 'b' : [3, 2, 1],
- 'c' : [0, 1, 2],
- 'd' : np.random.randn(3)})
+ df = DataFrame({'a': ['foo', 'bar', 'baz'],
+ 'b': [3, 2, 1],
+ 'c': [0, 1, 2],
+ 'd': np.random.randn(3)})
tups = map(tuple, df[['a', 'b', 'c']].values)
tups = com._asarray_tuplesafe(tups)
@@ -1758,9 +1764,9 @@ def test_groupby_sort_multi(self):
self.assert_(np.array_equal(result.index.values,
tups[[2, 1, 0]]))
- df = DataFrame({'a' : [0, 1, 2, 0, 1, 2],
- 'b' : [0, 0, 0, 1, 1, 1],
- 'd' : np.random.randn(6)})
+ df = DataFrame({'a': [0, 1, 2, 0, 1, 2],
+ 'b': [0, 0, 0, 1, 1, 1],
+ 'd': np.random.randn(6)})
grouped = df.groupby(['a', 'b'])['d']
result = grouped.sum()
_check_groupby(df, result, ['a', 'b'], 'd')
@@ -1792,9 +1798,9 @@ def test_rank_apply(self):
lab1 = np.random.randint(0, 100, size=500)
lab2 = np.random.randint(0, 130, size=500)
- df = DataFrame({'value' : np.random.randn(500),
- 'key1' : lev1.take(lab1),
- 'key2' : lev2.take(lab2)})
+ df = DataFrame({'value': np.random.randn(500),
+ 'key1': lev1.take(lab1),
+ 'key2': lev2.take(lab2)})
result = df.groupby(['key1', 'key2']).value.rank()
@@ -1808,7 +1814,7 @@ def test_rank_apply(self):
def test_dont_clobber_name_column(self):
df = DataFrame({'key': ['a', 'a', 'a', 'b', 'b', 'b'],
- 'name' : ['foo', 'bar', 'baz'] * 2})
+ 'name': ['foo', 'bar', 'baz'] * 2})
result = df.groupby('key').apply(lambda x: x)
assert_frame_equal(result, df)
@@ -1848,6 +1854,7 @@ def test_no_nonsense_name(self):
def test_wrap_agg_out(self):
grouped = self.three_group.groupby(['A', 'B'])
+
def func(ser):
if ser.dtype == np.object:
raise TypeError
@@ -1860,12 +1867,12 @@ def func(ser):
def test_multifunc_sum_bug(self):
# GH #1065
- x = DataFrame(np.arange(9).reshape(3,3))
- x['test']=0
- x['fl']= [1.3,1.5,1.6]
+ x = DataFrame(np.arange(9).reshape(3, 3))
+ x['test'] = 0
+ x['fl'] = [1.3, 1.5, 1.6]
grouped = x.groupby('test')
- result = grouped.agg({'fl':'sum',2:'size'})
+ result = grouped.agg({'fl': 'sum', 2: 'size'})
self.assert_(result['fl'].dtype == np.float64)
def test_handle_dict_return_value(self):
@@ -1934,33 +1941,35 @@ def test_more_flexible_frame_multi_function(self):
grouped = self.df.groupby('A')
- exmean = grouped.agg(OrderedDict([['C' , np.mean], ['D' , np.mean]]))
- exstd = grouped.agg(OrderedDict([['C' , np.std], ['D' , np.std]]))
+ exmean = grouped.agg(OrderedDict([['C', np.mean], ['D', np.mean]]))
+ exstd = grouped.agg(OrderedDict([['C', np.std], ['D', np.std]]))
expected = concat([exmean, exstd], keys=['mean', 'std'], axis=1)
expected = expected.swaplevel(0, 1, axis=1).sortlevel(0, axis=1)
- d=OrderedDict([['C',[np.mean, np.std]],['D',[np.mean, np.std]]])
+ d = OrderedDict([['C', [np.mean, np.std]], ['D', [np.mean, np.std]]])
result = grouped.aggregate(d)
assert_frame_equal(result, expected)
# be careful
- result = grouped.aggregate(OrderedDict([['C' , np.mean],
- [ 'D' , [np.mean, np.std]]]))
- expected = grouped.aggregate(OrderedDict([['C' , np.mean],
- [ 'D' , [np.mean, np.std]]]))
+ result = grouped.aggregate(OrderedDict([['C', np.mean],
+ ['D', [np.mean, np.std]]]))
+ expected = grouped.aggregate(OrderedDict([['C', np.mean],
+ ['D', [np.mean, np.std]]]))
assert_frame_equal(result, expected)
+ def foo(x):
+ return np.mean(x)
- def foo(x): return np.mean(x)
- def bar(x): return np.std(x, ddof=1)
- d=OrderedDict([['C' , np.mean],
- ['D', OrderedDict([['foo', np.mean],
- ['bar', np.std]])]])
+ def bar(x):
+ return np.std(x, ddof=1)
+ d = OrderedDict([['C', np.mean],
+ ['D', OrderedDict([['foo', np.mean],
+ ['bar', np.std]])]])
result = grouped.aggregate(d)
- d = OrderedDict([['C' , [np.mean]],['D' , [foo, bar]]])
+ d = OrderedDict([['C', [np.mean]], ['D', [foo, bar]]])
expected = grouped.aggregate(d)
assert_frame_equal(result, expected)
@@ -1970,18 +1979,21 @@ def test_multi_function_flexible_mix(self):
from pandas.util.compat import OrderedDict
grouped = self.df.groupby('A')
- d = OrderedDict([['C' , OrderedDict([['foo' , 'mean'],
- ['bar' , 'std']])],
- ['D' , 'sum']])
+        d = OrderedDict([['C',
+                          OrderedDict([['foo', 'mean'],
+                                       ['bar', 'std']])],
+                         ['D', 'sum']])
result = grouped.aggregate(d)
- d2 = OrderedDict([['C' , OrderedDict([['foo' , 'mean'],
- ['bar' , 'std']])],
- ['D' ,[ 'sum']]])
+        d2 = OrderedDict([['C',
+                           OrderedDict([['foo', 'mean'],
+                                        ['bar', 'std']])],
+                          ['D', ['sum']]])
result2 = grouped.aggregate(d2)
- d3 = OrderedDict([['C' , OrderedDict([['foo' , 'mean'],
- ['bar' , 'std']])],
- ['D' , {'sum':'sum'}]])
+        d3 = OrderedDict([['C',
+                           OrderedDict([['foo', 'mean'],
+                                        ['bar', 'std']])],
+                          ['D', {'sum': 'sum'}]])
expected = grouped.aggregate(d3)
assert_frame_equal(result, expected)
@@ -2067,7 +2079,8 @@ def test_groupby_reindex_inside_function(self):
periods = 1000
ind = DatetimeIndex(start='2012/1/1', freq='5min', periods=periods)
- df = DataFrame({'high': np.arange(periods), 'low': np.arange(periods)}, index=ind)
+        df = DataFrame({'high': np.arange(periods),
+                        'low': np.arange(periods)}, index=ind)
def agg_before(hour, func, fix=False):
"""
@@ -2175,6 +2188,7 @@ def _check_groupby(df, result, keys, field, f=lambda x: x.sum()):
for k, v in expected.iteritems():
assert(result[k] == v)
+
def test_decons():
from pandas.core.groupby import decons_group_index, get_group_index
@@ -2199,5 +2213,6 @@ def testit(label_list, shape):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure', '-s'],
- exit=False)
+ nose.runmodule(
+ argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure', '-s'],
+ exit=False)
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index cac895ed31b37..0bf41b485d278 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -24,8 +24,10 @@
import pandas as pd
from pandas.lib import Timestamp
+
class TestIndex(unittest.TestCase):
_multiprocess_can_split_ = True
+
def setUp(self):
self.unicodeIndex = tm.makeUnicodeIndex(100)
self.strIndex = tm.makeStringIndex(100)
@@ -81,7 +83,6 @@ def test_constructor(self):
# arr = np.array(5.)
# self.assertRaises(Exception, arr.view, Index)
-
def test_constructor_corner(self):
# corner case
self.assertRaises(Exception, Index, 0)
@@ -182,7 +183,7 @@ def test_booleanindex(self):
self.assertEqual(subIndex.get_loc(val), i)
def test_fancy(self):
- sl = self.strIndex[[1,2,3]]
+ sl = self.strIndex[[1, 2, 3]]
for i in sl:
self.assertEqual(i, sl[sl.get_loc(i)])
@@ -447,19 +448,20 @@ def test_drop(self):
expected = self.strIndex[1:]
self.assert_(dropped.equals(expected))
- ser = Index([1,2,3])
+ ser = Index([1, 2, 3])
dropped = ser.drop(1)
- expected = Index([2,3])
+ expected = Index([2, 3])
self.assert_(dropped.equals(expected))
def test_tuple_union_bug(self):
import pandas
import numpy as np
- aidx1 = np.array([(1, 'A'),(2, 'A'),(1, 'B'),(2, 'B')], dtype=[('num',
- int),('let', 'a1')])
- aidx2 = np.array([(1, 'A'),(2, 'A'),(1, 'B'),(2, 'B'),(1,'C'),(2,
- 'C')], dtype=[('num', int),('let', 'a1')])
+        aidx1 = np.array([(1, 'A'), (2, 'A'), (1, 'B'), (2, 'B')],
+                         dtype=[('num', int), ('let', 'a1')])
+        aidx2 = np.array([(1, 'A'), (2, 'A'), (1, 'B'), (2, 'B'),
+                          (1, 'C'), (2, 'C')],
+                         dtype=[('num', int), ('let', 'a1')])
idx1 = pandas.Index(aidx1)
idx2 = pandas.Index(aidx2)
@@ -467,7 +469,7 @@ def test_tuple_union_bug(self):
# intersection broken?
int_idx = idx1.intersection(idx2)
# needs to be 1d like idx1 and idx2
- expected = idx1[:4] # pandas.Index(sorted(set(idx1) & set(idx2)))
+ expected = idx1[:4] # pandas.Index(sorted(set(idx1) & set(idx2)))
self.assert_(int_idx.ndim == 1)
self.assert_(int_idx.equals(expected))
@@ -506,7 +508,7 @@ def test_isin(self):
self.assert_(result.dtype == np.bool_)
def test_boolean_cmp(self):
- values = [1,2,3,4]
+ values = [1, 2, 3, 4]
idx = Index(values)
res = (idx == values)
@@ -515,8 +517,10 @@ def test_boolean_cmp(self):
self.assert_(res.dtype == 'bool')
self.assert_(not isinstance(res, Index))
+
class TestInt64Index(unittest.TestCase):
_multiprocess_can_split_ = True
+
def setUp(self):
self.index = Int64Index(np.arange(0, 20, 2))
@@ -804,7 +808,7 @@ def test_intersection(self):
self.assert_(np.array_equal(result, expected))
def test_intersect_str_dates(self):
- dt_dates = [datetime(2012,2,9) , datetime(2012,2,22)]
+ dt_dates = [datetime(2012, 2, 9), datetime(2012, 2, 22)]
i1 = Index(dt_dates, dtype=object)
i2 = Index(['aa'], dtype=object)
@@ -842,8 +846,8 @@ def test_prevent_casting(self):
self.assert_(result.dtype == np.object_)
def test_take_preserve_name(self):
- index = Int64Index([1,2,3,4], name='foo')
- taken = index.take([3,0,1])
+ index = Int64Index([1, 2, 3, 4], name='foo')
+ taken = index.take([3, 0, 1])
self.assertEqual(index.name, taken.name)
def test_int_name_format(self):
@@ -855,13 +859,14 @@ def test_int_name_format(self):
repr(df)
def test_print_unicode_columns(self):
- df=pd.DataFrame({u"\u05d0":[1,2,3],"\u05d1":[4,5,6],"c":[7,8,9]})
- repr(df.columns) # should not raise UnicodeDecodeError
+ df = pd.DataFrame(
+ {u"\u05d0": [1, 2, 3], "\u05d1": [4, 5, 6], "c": [7, 8, 9]})
+ repr(df.columns) # should not raise UnicodeDecodeError
def test_repr_summary(self):
r = repr(pd.Index(np.arange(10000)))
self.assertTrue(len(r) < 100)
- self.assertTrue( "..." in r)
+ self.assertTrue("..." in r)
def test_unicode_string_with_unicode(self):
idx = Index(range(1000))
@@ -878,8 +883,10 @@ def test_bytestring_with_unicode(self):
else:
str(idx)
+
class TestMultiIndex(unittest.TestCase):
_multiprocess_can_split_ = True
+
def setUp(self):
major_axis = Index(['foo', 'bar', 'baz', 'qux'])
minor_axis = Index(['one', 'two'])
@@ -1078,7 +1085,7 @@ def test_getitem(self):
# slice
result = self.index[2:5]
- expected = self.index[[2,3,4]]
+ expected = self.index[[2, 3, 4]]
self.assert_(result.equals(expected))
# boolean
@@ -1277,7 +1284,7 @@ def test_get_indexer(self):
index = MultiIndex(levels=[major_axis, minor_axis],
labels=[major_labels, minor_labels])
idx1 = index[:5]
- idx2 = index[[1,3,5]]
+ idx2 = index[[1, 3, 5]]
r1 = idx1.get_indexer(idx2)
assert_almost_equal(r1, [1, 3, -1])
@@ -1299,8 +1306,8 @@ def test_get_indexer(self):
rexp1 = idx1.get_indexer(idx2)
assert_almost_equal(r1, rexp1)
- r1 = idx1.get_indexer([1,2,3])
- self.assert_( (r1 == [-1, -1, -1]).all() )
+ r1 = idx1.get_indexer([1, 2, 3])
+ self.assert_((r1 == [-1, -1, -1]).all())
# self.assertRaises(Exception, idx1.get_indexer,
# list(list(zip(*idx2._tuple_index))[0]))
@@ -1491,7 +1498,7 @@ def test_diff(self):
def test_from_tuples(self):
self.assertRaises(Exception, MultiIndex.from_tuples, [])
- idx = MultiIndex.from_tuples( ((1,2),(3,4)), names=['a', 'b'] )
+ idx = MultiIndex.from_tuples(((1, 2), (3, 4)), names=['a', 'b'])
self.assertEquals(len(idx), 2)
def test_argsort(self):
@@ -1543,7 +1550,6 @@ def test_sortlevel_deterministic(self):
sorted_idx, _ = index.sortlevel(1, ascending=False)
self.assert_(sorted_idx.equals(expected[::-1]))
-
def test_dims(self):
pass
@@ -1620,7 +1626,7 @@ def test_insert(self):
self.assertRaises(Exception, self.index.insert, 0, ('foo2',))
def test_take_preserve_name(self):
- taken = self.index.take([3,0,1])
+ taken = self.index.take([3, 0, 1])
self.assertEqual(taken.names, self.index.names)
def test_join_level(self):
@@ -1634,7 +1640,8 @@ def _check_how(other, how):
self.assert_(join_index.levels[1].equals(exp_level))
# pare down levels
- mask = np.array([x[1] in exp_level for x in self.index], dtype=bool)
+ mask = np.array(
+ [x[1] in exp_level for x in self.index], dtype=bool)
exp_values = self.index.values[mask]
self.assert_(np.array_equal(join_index.values, exp_values))
@@ -1711,13 +1718,13 @@ def test_tolist(self):
self.assertEqual(result, exp)
def test_repr_with_unicode_data(self):
- d={"a":[u"\u05d0",2,3],"b":[4,5,6],"c":[7,8,9]}
- index=pd.DataFrame(d).set_index(["a","b"]).index
- self.assertFalse("\\u" in repr(index)) # we don't want unicode-escaped
+ d = {"a": [u"\u05d0", 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]}
+ index = pd.DataFrame(d).set_index(["a", "b"]).index
+ self.assertFalse("\\u" in repr(index)) # we don't want unicode-escaped
def test_unicode_string_with_unicode(self):
- d={"a":[u"\u05d0",2,3],"b":[4,5,6],"c":[7,8,9]}
- idx=pd.DataFrame(d).set_index(["a","b"]).index
+ d = {"a": [u"\u05d0", 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]}
+ idx = pd.DataFrame(d).set_index(["a", "b"]).index
if py3compat.PY3:
str(idx)
@@ -1725,14 +1732,15 @@ def test_unicode_string_with_unicode(self):
unicode(idx)
def test_bytestring_with_unicode(self):
- d={"a":[u"\u05d0",2,3],"b":[4,5,6],"c":[7,8,9]}
- idx=pd.DataFrame(d).set_index(["a","b"]).index
+ d = {"a": [u"\u05d0", 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]}
+ idx = pd.DataFrame(d).set_index(["a", "b"]).index
if py3compat.PY3:
bytes(idx)
else:
str(idx)
+
def test_get_combined_index():
from pandas.core.index import _get_combined_index
result = _get_combined_index([])
@@ -1740,6 +1748,6 @@ def test_get_combined_index():
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
- # '--with-coverage', '--cover-package=pandas.core'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
+ # '--with-coverage', '--cover-package=pandas.core'],
exit=False)
diff --git a/pandas/tests/test_internals.py b/pandas/tests/test_internals.py
index 3d9448ecdd7c5..9deddb802d1bf 100644
--- a/pandas/tests/test_internals.py
+++ b/pandas/tests/test_internals.py
@@ -9,7 +9,9 @@
import pandas.core.internals as internals
import pandas.util.testing as tm
-from pandas.util.testing import (assert_almost_equal, assert_frame_equal, randn)
+from pandas.util.testing import (
+ assert_almost_equal, assert_frame_equal, randn)
+
def assert_block_equal(left, right):
assert_almost_equal(left.values, right.values)
@@ -17,42 +19,51 @@ def assert_block_equal(left, right):
assert(left.items.equals(right.items))
assert(left.ref_items.equals(right.ref_items))
+
def get_float_mat(n, k):
return np.repeat(np.atleast_2d(np.arange(k, dtype=float)), n, axis=0)
TEST_COLS = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
N = 10
+
def get_float_ex(cols=['a', 'c', 'e']):
floats = get_float_mat(N, len(cols)).T
return make_block(floats, cols, TEST_COLS)
+
def get_complex_ex(cols=['h']):
complexes = (get_float_mat(N, 1).T * 1j).astype(np.complex128)
return make_block(complexes, cols, TEST_COLS)
+
def get_obj_ex(cols=['b', 'd']):
mat = np.empty((N, 2), dtype=object)
mat[:, 0] = 'foo'
mat[:, 1] = 'bar'
return make_block(mat.T, cols, TEST_COLS)
+
def get_bool_ex(cols=['f']):
mat = np.ones((N, 1), dtype=bool)
return make_block(mat.T, cols, TEST_COLS)
+
def get_int_ex(cols=['g']):
mat = randn(N, 1).astype(int)
return make_block(mat.T, cols, TEST_COLS)
+
def get_int32_ex(cols):
mat = randn(N, 1).astype(np.int32)
return make_block(mat.T, cols, TEST_COLS)
+
def get_dt_ex(cols=['h']):
mat = randn(N, 1).astype(int).astype('M8[ns]')
return make_block(mat.T, cols, TEST_COLS)
+
class TestBlock(unittest.TestCase):
_multiprocess_can_split_ = True
@@ -158,21 +169,21 @@ def test_delete(self):
def test_split_block_at(self):
bs = list(self.fblock.split_block_at('a'))
- self.assertEqual(len(bs),1)
+ self.assertEqual(len(bs), 1)
self.assertTrue(np.array_equal(bs[0].items, ['c', 'e']))
bs = list(self.fblock.split_block_at('c'))
- self.assertEqual(len(bs),2)
+ self.assertEqual(len(bs), 2)
self.assertTrue(np.array_equal(bs[0].items, ['a']))
self.assertTrue(np.array_equal(bs[1].items, ['e']))
bs = list(self.fblock.split_block_at('e'))
- self.assertEqual(len(bs),1)
+ self.assertEqual(len(bs), 1)
self.assertTrue(np.array_equal(bs[0].items, ['a', 'c']))
bblock = get_bool_ex(['f'])
bs = list(bblock.split_block_at('f'))
- self.assertEqual(len(bs),0)
+ self.assertEqual(len(bs), 0)
def test_unicode_repr(self):
mat = np.empty((N, 2), dtype=object)
@@ -229,7 +240,7 @@ def test_is_mixed_dtype(self):
for b in blocks:
b.ref_items = items
- mgr = BlockManager(blocks, [items, np.arange(N)])
+ mgr = BlockManager(blocks, [items, np.arange(N)])
self.assert_(not mgr.is_mixed_dtype())
def test_is_indexed_like(self):
@@ -273,8 +284,8 @@ def test_pickle(self):
self.assert_(mgr2.blocks[0].ref_items is mgr2.blocks[1].ref_items)
# GH2431
- self.assertTrue(hasattr(mgr2,"_is_consolidated"))
- self.assertTrue(hasattr(mgr2,"_known_consolidated"))
+ self.assertTrue(hasattr(mgr2, "_is_consolidated"))
+ self.assertTrue(hasattr(mgr2, "_known_consolidated"))
# reset to False on load
self.assertFalse(mgr2._is_consolidated)
@@ -403,17 +414,17 @@ def test_get_numeric_data(self):
bool_ser = Series(np.array([True, False, True]))
obj_ser = Series(np.array([1, 'a', 5]))
dt_ser = Series(tm.makeDateIndex(3))
- #check types
- df = DataFrame({'int' : int_ser, 'float' : float_ser,
- 'complex' : complex_ser, 'str' : str_ser,
- 'bool' : bool_ser, 'obj' : obj_ser,
- 'dt' : dt_ser})
- xp = DataFrame({'int' : int_ser, 'float' : float_ser,
- 'complex' : complex_ser})
+ # check types
+ df = DataFrame({'int': int_ser, 'float': float_ser,
+ 'complex': complex_ser, 'str': str_ser,
+ 'bool': bool_ser, 'obj': obj_ser,
+ 'dt': dt_ser})
+ xp = DataFrame({'int': int_ser, 'float': float_ser,
+ 'complex': complex_ser})
rs = DataFrame(df._data.get_numeric_data())
assert_frame_equal(xp, rs)
- xp = DataFrame({'bool' : bool_ser})
+ xp = DataFrame({'bool': bool_ser})
rs = DataFrame(df._data.get_numeric_data(type_list=bool))
assert_frame_equal(xp, rs)
@@ -428,16 +439,16 @@ def test_get_numeric_data(self):
self.assertEqual(rs.ix[0, 'bool'], not df.ix[0, 'bool'])
def test_missing_unicode_key(self):
- df=DataFrame({"a":[1]})
+ df = DataFrame({"a": [1]})
try:
- df.ix[:,u"\u05d0"] # should not raise UnicodeEncodeError
+ df.ix[:, u"\u05d0"] # should not raise UnicodeEncodeError
except KeyError:
- pass # this is the expected exception
+ pass # this is the expected exception
if __name__ == '__main__':
# unittest.main()
import nose
# nose.runmodule(argv=[__file__,'-vvs','-x', '--pdb-failure'],
# exit=False)
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index d1b139e034cdf..5b2f8fc259c09 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -19,6 +19,7 @@
import pandas.index as _index
+
class TestMultiLevel(unittest.TestCase):
_multiprocess_can_split_ = True
@@ -121,7 +122,8 @@ def _check_op(opname):
# Series
op = getattr(Series, opname)
result = op(self.ymd['A'], month_sums['A'], level='month')
- broadcasted = self.ymd['A'].groupby(level='month').transform(np.sum)
+ broadcasted = self.ymd['A'].groupby(
+ level='month').transform(np.sum)
expected = op(self.ymd['A'], broadcasted)
assert_series_equal(result, expected)
@@ -132,6 +134,7 @@ def _check_op(opname):
def test_pickle(self):
import cPickle
+
def _test_roundtrip(frame):
pickled = cPickle.dumps(frame)
unpickled = cPickle.loads(pickled)
@@ -256,18 +259,18 @@ def test_frame_getitem_setitem_slice(self):
self.assert_((cp.values[4:] != 0).all())
def test_frame_getitem_setitem_multislice(self):
- levels = [['t1', 't2'], ['a','b','c']]
- labels = [[0,0,0,1,1], [0,1,2,0,1]]
+ levels = [['t1', 't2'], ['a', 'b', 'c']]
+ labels = [[0, 0, 0, 1, 1], [0, 1, 2, 0, 1]]
midx = MultiIndex(labels=labels, levels=levels, names=[None, 'id'])
- df = DataFrame({'value':[1,2,3,7,8]}, index=midx)
+ df = DataFrame({'value': [1, 2, 3, 7, 8]}, index=midx)
- result = df.ix[:,'value']
+ result = df.ix[:, 'value']
assert_series_equal(df['value'], result)
- result = df.ix[1:3,'value']
+ result = df.ix[1:3, 'value']
assert_series_equal(df['value'][1:3], result)
- result = df.ix[:,:]
+ result = df.ix[:, :]
assert_frame_equal(df, result)
result = df
@@ -275,18 +278,18 @@ def test_frame_getitem_setitem_multislice(self):
result['value'] = 10
assert_frame_equal(df, result)
- df.ix[:,:] = 10
+ df.ix[:, :] = 10
assert_frame_equal(df, result)
def test_frame_getitem_multicolumn_empty_level(self):
- f = DataFrame({'a': ['1','2','3'],
- 'b': ['2','3','4']})
+ f = DataFrame({'a': ['1', '2', '3'],
+ 'b': ['2', '3', '4']})
f.columns = [['level1 item1', 'level1 item2'],
['', 'level2 item2'],
['level3 item1', 'level3 item2']]
result = f['level1 item1']
- expected = DataFrame([['1'],['2'],['3']], index=f.index,
+ expected = DataFrame([['1'], ['2'], ['3']], index=f.index,
columns=['level3 item1'])
assert_frame_equal(result, expected)
@@ -309,7 +312,7 @@ def test_frame_setitem_multi_column(self):
df = DataFrame(index=[1, 3, 5], columns=columns)
# Works, but adds a column instead of updating the two existing ones
- df['A'] = 0.0 # Doesn't work
+ df['A'] = 0.0 # Doesn't work
self.assertTrue((df['A'].values == 0).all())
# it broadcasts
@@ -320,10 +323,10 @@ def test_frame_setitem_multi_column(self):
def test_getitem_tuple_plus_slice(self):
# GH #671
- df = DataFrame({'a' : range(10),
- 'b' : range(10),
- 'c' : np.random.randn(10),
- 'd' : np.random.randn(10)})
+ df = DataFrame({'a': range(10),
+ 'b': range(10),
+ 'c': np.random.randn(10),
+ 'd': np.random.randn(10)})
idf = df.set_index(['a', 'b'])
@@ -345,7 +348,7 @@ def test_getitem_setitem_tuple_plus_columns(self):
def test_getitem_multilevel_index_tuple_unsorted(self):
index_columns = list("abc")
- df = DataFrame([[0, 1, 0, "x"], [0, 0, 1, "y"]] ,
+ df = DataFrame([[0, 1, 0, "x"], [0, 0, 1, "y"]],
columns=index_columns + ["data"])
df = df.set_index(index_columns)
query_index = df.index[:1]
@@ -413,7 +416,7 @@ def test_xs_level_multiple(self):
expected = df.xs('a').xs(4, level='four')
assert_frame_equal(result, expected)
- #GH2107
+ # GH2107
dates = range(20111201, 20111205)
ids = 'abcde'
idx = MultiIndex.from_tuples([x for x in cart_product(dates, ids)])
@@ -489,8 +492,8 @@ def test_getitem_setitem_slice_integers(self):
labels=[[0, 0, 1, 1, 2, 2],
[0, 1, 0, 1, 0, 1]])
- frame = DataFrame(np.random.randn(len(index), 4), index=index,
- columns=['a', 'b', 'c', 'd'])
+ frame = DataFrame(np.random.randn(len(index), 4), index=index,
+ columns=['a', 'b', 'c', 'd'])
res = frame.ix[1:2]
exp = frame.reindex(frame.index[2:])
assert_frame_equal(res, exp)
@@ -498,7 +501,7 @@ def test_getitem_setitem_slice_integers(self):
frame.ix[1:2] = 7
self.assert_((frame.ix[1:2] == 7).values.all())
- series = Series(np.random.randn(len(index)), index=index)
+ series = Series(np.random.randn(len(index)), index=index)
res = series.ix[1:2]
exp = series.reindex(series.index[2:])
@@ -568,15 +571,15 @@ def test_fancy_slice_partial(self):
expected = self.frame[3:7]
assert_frame_equal(result, expected)
- result = self.ymd.ix[(2000,2):(2000,4)]
+ result = self.ymd.ix[(2000, 2):(2000, 4)]
lev = self.ymd.index.labels[1]
expected = self.ymd[(lev >= 1) & (lev <= 3)]
assert_frame_equal(result, expected)
def test_getitem_partial_column_select(self):
- idx = MultiIndex(labels=[[0,0,0],[0,1,1],[1,0,1]],
- levels=[['a','b'],['x','y'],['p','q']])
- df = DataFrame(np.random.rand(3,2),index=idx)
+ idx = MultiIndex(labels=[[0, 0, 0], [0, 1, 1], [1, 0, 1]],
+ levels=[['a', 'b'], ['x', 'y'], ['p', 'q']])
+ df = DataFrame(np.random.rand(3, 2), index=idx)
result = df.ix[('a', 'y'), :]
expected = df.ix[('a', 'y')]
@@ -604,32 +607,32 @@ def test_sortlevel(self):
# preserve names
self.assertEquals(a_sorted.index.names, self.frame.index.names)
- #inplace
+ # inplace
rs = self.frame.copy()
rs.sortlevel(0, inplace=True)
assert_frame_equal(rs, self.frame.sortlevel(0))
-
def test_delevel_infer_dtype(self):
tuples = [tuple for tuple in cart_product(['foo', 'bar'],
[10, 20], [1.0, 1.1])]
index = MultiIndex.from_tuples(tuples,
names=['prm0', 'prm1', 'prm2'])
- df = DataFrame(np.random.randn(8,3), columns=['A', 'B', 'C'],
+ df = DataFrame(np.random.randn(8, 3), columns=['A', 'B', 'C'],
index=index)
deleveled = df.reset_index()
self.assert_(com.is_integer_dtype(deleveled['prm1']))
self.assert_(com.is_float_dtype(deleveled['prm2']))
def test_reset_index_with_drop(self):
- deleveled = self.ymd.reset_index(drop = True)
+ deleveled = self.ymd.reset_index(drop=True)
self.assertEquals(len(deleveled.columns), len(self.ymd.columns))
deleveled = self.series.reset_index()
self.assert_(isinstance(deleveled, DataFrame))
- self.assert_(len(deleveled.columns) == len(self.series.index.levels)+1)
+ self.assert_(
+ len(deleveled.columns) == len(self.series.index.levels) + 1)
- deleveled = self.series.reset_index(drop = True)
+ deleveled = self.series.reset_index(drop=True)
self.assert_(isinstance(deleveled, Series))
def test_sortlevel_by_name(self):
@@ -814,14 +817,14 @@ def test_stack_mixed_dtype(self):
self.assert_(stacked['bar'].dtype == np.float_)
def test_unstack_bug(self):
- df = DataFrame({'state': ['naive','naive','naive',
- 'activ','activ','activ'],
- 'exp':['a','b','b','b','a','a'],
- 'barcode':[1,2,3,4,1,3],
- 'v':['hi','hi','bye','bye','bye','peace'],
+ df = DataFrame({'state': ['naive', 'naive', 'naive',
+ 'activ', 'activ', 'activ'],
+ 'exp': ['a', 'b', 'b', 'b', 'a', 'a'],
+ 'barcode': [1, 2, 3, 4, 1, 3],
+ 'v': ['hi', 'hi', 'bye', 'bye', 'bye', 'peace'],
'extra': np.arange(6.)})
- result = df.groupby(['state','exp','barcode','v']).apply(len)
+ result = df.groupby(['state', 'exp', 'barcode', 'v']).apply(len)
unstacked = result.unstack()
restacked = unstacked.stack()
@@ -894,12 +897,12 @@ def test_unstack_sparse_keyspace(self):
# Generate Long File & Test Pivot
NUM_ROWS = 1000
- df = DataFrame({'A' : np.random.randint(100, size=NUM_ROWS),
- 'B' : np.random.randint(300, size=NUM_ROWS),
- 'C' : np.random.randint(-7, 7, size=NUM_ROWS),
- 'D' : np.random.randint(-19,19, size=NUM_ROWS),
- 'E' : np.random.randint(3000, size=NUM_ROWS),
- 'F' : np.random.randn(NUM_ROWS)})
+ df = DataFrame({'A': np.random.randint(100, size=NUM_ROWS),
+ 'B': np.random.randint(300, size=NUM_ROWS),
+ 'C': np.random.randint(-7, 7, size=NUM_ROWS),
+ 'D': np.random.randint(-19, 19, size=NUM_ROWS),
+ 'E': np.random.randint(3000, size=NUM_ROWS),
+ 'F': np.random.randn(NUM_ROWS)})
idf = df.set_index(['A', 'B', 'C', 'D', 'E'])
@@ -922,19 +925,20 @@ def test_unstack_unobserved_keys(self):
assert_frame_equal(recons, df)
def test_groupby_corner(self):
- midx = MultiIndex(levels=[['foo'],['bar'],['baz']],
- labels=[[0],[0],[0]], names=['one','two','three'])
- df = DataFrame([np.random.rand(4)], columns=['a','b','c','d'],
+ midx = MultiIndex(levels=[['foo'], ['bar'], ['baz']],
+ labels=[[0], [0], [0]], names=['one', 'two', 'three'])
+ df = DataFrame([np.random.rand(4)], columns=['a', 'b', 'c', 'd'],
index=midx)
# should work
df.groupby(level='three')
def test_groupby_level_no_obs(self):
# #1697
- midx = MultiIndex.from_tuples([('f1', 's1'),('f1','s2'),
- ('f2', 's1'),('f2', 's2'),
- ('f3', 's1'),('f3','s2')])
- df = DataFrame([[1,2,3,4,5,6],[7,8,9,10,11,12]], columns= midx)
+ midx = MultiIndex.from_tuples([('f1', 's1'), ('f1', 's2'),
+ ('f2', 's1'), ('f2', 's2'),
+ ('f3', 's1'), ('f3', 's2')])
+ df = DataFrame(
+ [[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12]], columns=midx)
df1 = df.select(lambda u: u[0] in ['f2', 'f3'], axis=1)
grouped = df1.groupby(axis=1, level=0)
@@ -970,8 +974,8 @@ def test_swaplevel(self):
assert_frame_equal(swapped, exp)
def test_swaplevel_panel(self):
- panel = Panel({'ItemA' : self.frame,
- 'ItemB' : self.frame * 2})
+ panel = Panel({'ItemA': self.frame,
+ 'ItemB': self.frame * 2})
result = panel.swaplevel(0, 1, axis='major')
expected = panel.copy()
@@ -1001,11 +1005,11 @@ def test_insert_index(self):
self.assert_((df[2000, 1, 10] == df[2000, 1, 7]).all())
def test_alignment(self):
- x = Series(data=[1,2,3],
- index=MultiIndex.from_tuples([("A", 1), ("A", 2), ("B",3)]))
+ x = Series(data=[1, 2, 3],
+ index=MultiIndex.from_tuples([("A", 1), ("A", 2), ("B", 3)]))
- y = Series(data=[4,5,6],
- index=MultiIndex.from_tuples([("Z", 1), ("Z", 2), ("B",3)]))
+ y = Series(data=[4, 5, 6],
+ index=MultiIndex.from_tuples([("Z", 1), ("Z", 2), ("B", 3)]))
res = x - y
exp_index = x.index.union(y.index)
@@ -1071,7 +1075,7 @@ def test_frame_getitem_not_sorted(self):
def test_series_getitem_not_sorted(self):
arrays = [['bar', 'bar', 'baz', 'baz', 'qux', 'qux', 'foo', 'foo'],
- ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
+ ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
tuples = zip(*arrays)
index = MultiIndex.from_tuples(tuples)
s = Series(randn(8), index=index)
@@ -1140,6 +1144,7 @@ def test_frame_group_ops(self):
grouped = frame.groupby(level=level, axis=axis)
pieces = []
+
def aggf(x):
pieces.append(x)
return getattr(x, op)(skipna=skipna, axis=axis)
@@ -1163,9 +1168,11 @@ def test_stat_op_corner(self):
assert_series_equal(result, expected)
def test_frame_any_all_group(self):
- df = DataFrame({'data': [False, False, True, False, True, False, True]},
- index=[['one', 'one', 'two', 'one', 'two', 'two', 'two'],
- [0, 1, 0, 2, 1, 2, 3]])
+ df = DataFrame(
+ {'data': [False, False, True, False, True, False, True]},
+ index=[
+ ['one', 'one', 'two', 'one', 'two', 'two', 'two'],
+ [0, 1, 0, 2, 1, 2, 3]])
result = df.any(level=0)
ex = DataFrame({'data': [False, True]}, index=['one', 'two'])
@@ -1192,7 +1199,6 @@ def test_std_var_pass_ddof(self):
expected = df.groupby(level=0).agg(alt)
assert_frame_equal(result, expected)
-
def test_frame_series_agg_multiple_levels(self):
result = self.ymd.sum(level=['year', 'month'])
expected = self.ymd.groupby(level=['year', 'month']).sum()
@@ -1395,65 +1401,65 @@ def test_int_series_slicing(self):
assert_frame_equal(result, expected)
def test_mixed_depth_get(self):
- arrays = [[ 'a', 'top', 'top', 'routine1', 'routine1', 'routine2'],
- [ '', 'OD', 'OD', 'result1', 'result2', 'result1'],
- [ '', 'wx', 'wy', '', '', '']]
+ arrays = [['a', 'top', 'top', 'routine1', 'routine1', 'routine2'],
+ ['', 'OD', 'OD', 'result1', 'result2', 'result1'],
+ ['', 'wx', 'wy', '', '', '']]
tuples = zip(*arrays)
tuples.sort()
index = MultiIndex.from_tuples(tuples)
- df = DataFrame(randn(4,6),columns = index)
+ df = DataFrame(randn(4, 6), columns=index)
result = df['a']
- expected = df['a','','']
+ expected = df['a', '', '']
assert_series_equal(result, expected)
self.assertEquals(result.name, 'a')
- result = df['routine1','result1']
- expected = df['routine1','result1','']
+ result = df['routine1', 'result1']
+ expected = df['routine1', 'result1', '']
assert_series_equal(result, expected)
self.assertEquals(result.name, ('routine1', 'result1'))
def test_mixed_depth_insert(self):
- arrays = [[ 'a', 'top', 'top', 'routine1', 'routine1', 'routine2'],
- [ '', 'OD', 'OD', 'result1', 'result2', 'result1'],
- [ '', 'wx', 'wy', '', '', '']]
+ arrays = [['a', 'top', 'top', 'routine1', 'routine1', 'routine2'],
+ ['', 'OD', 'OD', 'result1', 'result2', 'result1'],
+ ['', 'wx', 'wy', '', '', '']]
tuples = zip(*arrays)
tuples.sort()
index = MultiIndex.from_tuples(tuples)
- df = DataFrame(randn(4,6),columns = index)
+ df = DataFrame(randn(4, 6), columns=index)
result = df.copy()
expected = df.copy()
- result['b'] = [1,2,3,4]
- expected['b','',''] = [1,2,3,4]
+ result['b'] = [1, 2, 3, 4]
+ expected['b', '', ''] = [1, 2, 3, 4]
assert_frame_equal(result, expected)
def test_mixed_depth_drop(self):
- arrays = [[ 'a', 'top', 'top', 'routine1', 'routine1', 'routine2'],
- [ '', 'OD', 'OD', 'result1', 'result2', 'result1'],
- [ '', 'wx', 'wy', '', '', '']]
+ arrays = [['a', 'top', 'top', 'routine1', 'routine1', 'routine2'],
+ ['', 'OD', 'OD', 'result1', 'result2', 'result1'],
+ ['', 'wx', 'wy', '', '', '']]
tuples = zip(*arrays)
tuples.sort()
index = MultiIndex.from_tuples(tuples)
- df = DataFrame(randn(4,6),columns = index)
+ df = DataFrame(randn(4, 6), columns=index)
- result = df.drop('a',axis=1)
- expected = df.drop([('a','','')],axis=1)
+ result = df.drop('a', axis=1)
+ expected = df.drop([('a', '', '')], axis=1)
assert_frame_equal(expected, result)
- result = df.drop(['top'],axis=1)
- expected = df.drop([('top','OD','wx')], axis=1)
- expected = expected.drop([('top','OD','wy')], axis=1)
+ result = df.drop(['top'], axis=1)
+ expected = df.drop([('top', 'OD', 'wx')], axis=1)
+ expected = expected.drop([('top', 'OD', 'wy')], axis=1)
assert_frame_equal(expected, result)
result = df.drop(('top', 'OD', 'wx'), axis=1)
- expected = df.drop([('top','OD','wx')], axis=1)
+ expected = df.drop([('top', 'OD', 'wx')], axis=1)
assert_frame_equal(expected, result)
- expected = df.drop([('top','OD','wy')], axis=1)
+ expected = df.drop([('top', 'OD', 'wy')], axis=1)
expected = df.drop('top', axis=1)
result = df.drop('result1', level=1, axis=1)
@@ -1462,11 +1468,11 @@ def test_mixed_depth_drop(self):
assert_frame_equal(expected, result)
def test_drop_nonunique(self):
- df = DataFrame([["x-a", "x", "a", 1.5],["x-a", "x", "a", 1.2],
+ df = DataFrame([["x-a", "x", "a", 1.5], ["x-a", "x", "a", 1.2],
["z-c", "z", "c", 3.1], ["x-a", "x", "a", 4.1],
- ["x-b", "x", "b", 5.1],["x-b", "x", "b", 4.1],
+ ["x-b", "x", "b", 5.1], ["x-b", "x", "b", 4.1],
["x-b", "x", "b", 2.2],
- ["y-a", "y", "a", 1.2],["z-b", "z", "b", 2.1]],
+ ["y-a", "y", "a", 1.2], ["z-b", "z", "b", 2.1]],
columns=["var1", "var2", "var3", "var4"])
grp_size = df.groupby("var1").size()
@@ -1483,25 +1489,25 @@ def test_drop_nonunique(self):
assert_frame_equal(result, expected)
def test_mixed_depth_pop(self):
- arrays = [[ 'a', 'top', 'top', 'routine1', 'routine1', 'routine2'],
- [ '', 'OD', 'OD', 'result1', 'result2', 'result1'],
- [ '', 'wx', 'wy', '', '', '']]
+ arrays = [['a', 'top', 'top', 'routine1', 'routine1', 'routine2'],
+ ['', 'OD', 'OD', 'result1', 'result2', 'result1'],
+ ['', 'wx', 'wy', '', '', '']]
tuples = zip(*arrays)
tuples.sort()
index = MultiIndex.from_tuples(tuples)
- df = DataFrame(randn(4,6),columns = index)
+ df = DataFrame(randn(4, 6), columns=index)
df1 = df.copy()
df2 = df.copy()
result = df1.pop('a')
- expected = df2.pop(('a','',''))
+ expected = df2.pop(('a', '', ''))
assert_series_equal(expected, result)
assert_frame_equal(df1, df2)
- self.assertEquals(result.name,'a')
+ self.assertEquals(result.name, 'a')
expected = df1['top']
- df1 = df1.drop(['top'],axis=1)
+ df1 = df1.drop(['top'], axis=1)
result = df2.pop('top')
assert_frame_equal(expected, result)
assert_frame_equal(df1, df2)
@@ -1580,7 +1586,7 @@ def test_drop_preserve_names(self):
self.assert_(result.index.names == ['one', 'two'])
def test_unicode_repr_issues(self):
- levels = [Index([u'a/\u03c3', u'b/\u03c3',u'c/\u03c3']),
+ levels = [Index([u'a/\u03c3', u'b/\u03c3', u'c/\u03c3']),
Index([0, 1])]
labels = [np.arange(3).repeat(2), np.tile(np.arange(2), 3)]
index = MultiIndex(levels=levels, labels=labels)
@@ -1595,15 +1601,16 @@ def test_unicode_repr_level_names(self):
names=[u'\u0394', 'i1'])
s = Series(range(2), index=index)
- df = DataFrame(np.random.randn(2,4), index=index)
+ df = DataFrame(np.random.randn(2, 4), index=index)
repr(s)
repr(df)
def test_dataframe_insert_column_all_na(self):
# GH #1534
- mix = MultiIndex.from_tuples([('1a', '2a'), ('1a', '2b'), ('1a', '2c')])
- df = DataFrame([[1,2],[3,4],[5,6]], index=mix)
- s = Series({(1,1): 1, (1,2): 2})
+ mix = MultiIndex.from_tuples(
+ [('1a', '2a'), ('1a', '2b'), ('1a', '2c')])
+ df = DataFrame([[1, 2], [3, 4], [5, 6]], index=mix)
+ s = Series({(1, 1): 1, (1, 2): 2})
df['new'] = s
self.assert_(df['new'].isnull().all())
@@ -1628,20 +1635,21 @@ def test_set_column_scalar_with_ix(self):
self.assert_((self.frame.ix[subset, 'B'] == 97).all())
def test_frame_dict_constructor_empty_series(self):
- s1 = Series([1,2,3, 4], index=MultiIndex.from_tuples([(1,2),(1,3),
- (2,2),(2,4)]))
- s2 = Series([1,2,3,4],
- index=MultiIndex.from_tuples([(1,2),(1,3),(3,2),(3,4)]))
+        s1 = Series([1, 2, 3, 4], index=MultiIndex.from_tuples(
+            [(1, 2), (1, 3), (2, 2), (2, 4)]))
+        s2 = Series([1, 2, 3, 4], index=MultiIndex.from_tuples(
+            [(1, 2), (1, 3), (3, 2), (3, 4)]))
s3 = Series()
# it works!
- df = DataFrame({'foo':s1, 'bar':s2, 'baz':s3})
- df = DataFrame.from_dict({'foo':s1, 'baz':s3, 'bar':s2})
+ df = DataFrame({'foo': s1, 'bar': s2, 'baz': s3})
+ df = DataFrame.from_dict({'foo': s1, 'baz': s3, 'bar': s2})
def test_indexing_ambiguity_bug_1678(self):
columns = MultiIndex.from_tuples([('Ohio', 'Green'), ('Ohio', 'Red'),
('Colorado', 'Green')])
- index = MultiIndex.from_tuples([('a', 1), ('a', 2), ('b', 1), ('b', 2)])
+ index = MultiIndex.from_tuples(
+ [('a', 1), ('a', 2), ('b', 1), ('b', 2)])
frame = DataFrame(np.arange(12).reshape((4, 3)), index=index,
columns=columns)
@@ -1679,7 +1687,7 @@ def test_indexing_over_hashtable_size_cutoff(self):
_index._SIZE_CUTOFF = old_cutoff
def test_xs_mixed_no_copy(self):
- index = MultiIndex.from_arrays([['a','a', 'b', 'b'], [1,2,1,2]],
+ index = MultiIndex.from_arrays([['a', 'a', 'b', 'b'], [1, 2, 1, 2]],
names=['first', 'second'])
data = DataFrame(np.random.rand(len(index)), index=index,
columns=['A'])
@@ -1704,16 +1712,16 @@ def test_multiindex_na_repr(self):
def test_assign_index_sequences(self):
# #2200
- df = DataFrame({"a":[1,2,3],
- "b":[4,5,6],
- "c":[7,8,9]}).set_index(["a","b"])
+ df = DataFrame({"a": [1, 2, 3],
+ "b": [4, 5, 6],
+ "c": [7, 8, 9]}).set_index(["a", "b"])
l = list(df.index)
- l[0]=("faz","boo")
+ l[0] = ("faz", "boo")
df.index = l
repr(df)
# this travels an improper code path
- l[0] = ["faz","boo"]
+ l[0] = ["faz", "boo"]
df.index = l
repr(df)
@@ -1732,5 +1740,5 @@ def test_tuples_have_na(self):
import nose
# nose.runmodule(argv=[__file__,'-vvs','-x', '--pdb-failure'],
# exit=False)
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tests/test_ndframe.py b/pandas/tests/test_ndframe.py
index c1fd16f5bc0b0..e017bf07039d7 100644
--- a/pandas/tests/test_ndframe.py
+++ b/pandas/tests/test_ndframe.py
@@ -5,6 +5,7 @@
from pandas.core.generic import NDFrame
import pandas.util.testing as t
+
class TestNDFrame(unittest.TestCase):
_multiprocess_can_split_ = True
@@ -27,5 +28,5 @@ def test_astype(self):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index 355da62b63c08..6e45bd9fd3049 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -22,6 +22,7 @@
import pandas.core.panel as panelm
import pandas.util.testing as tm
+
def _skip_if_no_scipy():
try:
import scipy.stats
@@ -42,8 +43,10 @@ def test_cumsum(self):
cumsum = self.panel.cumsum()
assert_frame_equal(cumsum['ItemA'], self.panel['ItemA'].cumsum())
+
class SafeForLongAndSparse(object):
_multiprocess_can_split_ = True
+
def test_repr(self):
foo = repr(self.panel)
@@ -82,6 +85,7 @@ def test_skew(self):
from scipy.stats import skew
except ImportError:
raise nose.SkipTest
+
def this_skew(x):
if len(x) < 3:
return np.nan
@@ -149,8 +153,10 @@ def wrapper(x):
self.assertRaises(Exception, f, axis=obj.ndim)
+
class SafeForSparse(object):
_multiprocess_can_split_ = True
+
@classmethod
def assert_panel_equal(cls, x, y):
assert_panel_equal(x, y)
@@ -240,7 +246,7 @@ def test_arith(self):
self._test_op(self.panel, lambda x, y: x - y) # panel - 1
self._test_op(self.panel, lambda x, y: x * y) # panel * 1
self._test_op(self.panel, lambda x, y: x / y) # panel / 1
- self._test_op(self.panel, lambda x, y: x ** y) # panel ** 1
+ self._test_op(self.panel, lambda x, y: x ** y) # panel ** 1
self.assertRaises(Exception, self.panel.__add__, self.panel['ItemA'])
@@ -351,9 +357,11 @@ def test_abs(self):
expected = np.abs(s)
assert_series_equal(result, expected)
+
class CheckIndexing(object):
_multiprocess_can_split_ = True
+
def test_getitem(self):
self.assertRaises(Exception, self.panel.__getitem__, 'ItemQ')
@@ -427,15 +435,15 @@ def test_setitem(self):
def test_setitem_ndarray(self):
from pandas import date_range, datetools
- timeidx = date_range(start=datetime(2009,1,1),
- end=datetime(2009,12,31),
+ timeidx = date_range(start=datetime(2009, 1, 1),
+ end=datetime(2009, 12, 31),
freq=datetools.MonthEnd())
lons_coarse = np.linspace(-177.5, 177.5, 72)
lats_coarse = np.linspace(-87.5, 87.5, 36)
P = Panel(items=timeidx, major_axis=lons_coarse,
minor_axis=lats_coarse)
- data = np.random.randn(72*36).reshape((72,36))
- key = datetime(2009,2,28)
+ data = np.random.randn(72 * 36).reshape((72, 36))
+ key = datetime(2009, 2, 28)
P[key] = data
assert_almost_equal(P[key].values, data)
@@ -588,9 +596,10 @@ def test_getitem_fancy_xs_check_view(self):
self._check_view((NS, date, 'C'), comp)
def test_ix_setitem_slice_dataframe(self):
- a = Panel(items=[1,2,3],major_axis=[11,22,33],minor_axis=[111,222,333])
- b = DataFrame(np.random.randn(2,3), index=[111,333],
- columns=[1,2,3])
+ a = Panel(items=[1, 2, 3], major_axis=[11, 22, 33],
+ minor_axis=[111, 222, 333])
+ b = DataFrame(np.random.randn(2, 3), index=[111, 333],
+ columns=[1, 2, 3])
a.ix[:, 22, [111, 333]] = b
@@ -643,14 +652,15 @@ def _check_view(self, indexer, comp):
comp(cp.ix[indexer].reindex_like(obj), obj)
def test_logical_with_nas(self):
- d = Panel({ 'ItemA' : {'a': [np.nan, False] }, 'ItemB' : { 'a': [True, True] } })
+ d = Panel({'ItemA': {'a': [np.nan, False]}, 'ItemB': {
+ 'a': [True, True]}})
result = d['ItemA'] | d['ItemB']
- expected = DataFrame({ 'a' : [np.nan, True] })
+ expected = DataFrame({'a': [np.nan, True]})
assert_frame_equal(result, expected)
result = d['ItemA'].fillna(False) | d['ItemB']
- expected = DataFrame({ 'a' : [True, True] }, dtype=object)
+ expected = DataFrame({'a': [True, True]}, dtype=object)
assert_frame_equal(result, expected)
def test_neg(self):
@@ -658,13 +668,13 @@ def test_neg(self):
assert_panel_equal(-self.panel, -1 * self.panel)
def test_invert(self):
- assert_panel_equal(-(self.panel < 0), ~(self.panel <0))
+ assert_panel_equal(-(self.panel < 0), ~(self.panel < 0))
def test_comparisons(self):
p1 = tm.makePanel()
p2 = tm.makePanel()
- tp = p1.reindex(items = p1.items + ['foo'])
+ tp = p1.reindex(items=p1.items + ['foo'])
df = p1[p1.items[0]]
def test_comp(func):
@@ -719,12 +729,14 @@ def test_set_value(self):
_panel = tm.makePanel()
tm.add_nans(_panel)
+
class TestPanel(unittest.TestCase, PanelTests, CheckIndexing,
SafeForLongAndSparse,
SafeForSparse):
_multiprocess_can_split_ = True
+
@classmethod
- def assert_panel_equal(cls,x, y):
+ def assert_panel_equal(cls, x, y):
assert_panel_equal(x, y)
def setUp(self):
@@ -743,8 +755,8 @@ def test_constructor(self):
assert_panel_equal(wp, self.panel)
# strings handled prop
- wp = Panel([[['foo', 'foo', 'foo',],
- ['foo', 'foo', 'foo']]])
+ wp = Panel([[['foo', 'foo', 'foo', ],
+ ['foo', 'foo', 'foo']]])
self.assert_(wp.values.dtype == np.object_)
vals = self.panel.values
@@ -796,14 +808,14 @@ def test_ctor_dict(self):
itema = self.panel['ItemA']
itemb = self.panel['ItemB']
- d = {'A' : itema, 'B' : itemb[5:]}
- d2 = {'A' : itema._series, 'B' : itemb[5:]._series}
- d3 = {'A' : None,
- 'B' : DataFrame(itemb[5:]._series),
- 'C' : DataFrame(itema._series)}
+ d = {'A': itema, 'B': itemb[5:]}
+ d2 = {'A': itema._series, 'B': itemb[5:]._series}
+ d3 = {'A': None,
+ 'B': DataFrame(itemb[5:]._series),
+ 'C': DataFrame(itema._series)}
wp = Panel.from_dict(d)
- wp2 = Panel.from_dict(d2) # nested Dict
+ wp2 = Panel.from_dict(d2) # nested Dict
wp3 = Panel.from_dict(d3)
self.assert_(wp.major_axis.equals(self.panel.major_axis))
assert_panel_equal(wp, wp2)
@@ -818,9 +830,9 @@ def test_ctor_dict(self):
assert_panel_equal(Panel(d3), Panel.from_dict(d3))
# a pathological case
- d4 = { 'A' : None, 'B' : None }
+ d4 = {'A': None, 'B': None}
wp4 = Panel.from_dict(d4)
- assert_panel_equal(Panel(d4), Panel(items = ['A','B']))
+ assert_panel_equal(Panel(d4), Panel(items=['A', 'B']))
# cast
dcasted = dict((k, v.reindex(wp.major_axis).fillna(0))
@@ -879,8 +891,8 @@ def test_from_dict_mixed_orient(self):
df = tm.makeDataFrame()
df['foo'] = 'bar'
- data = {'k1' : df,
- 'k2' : df}
+ data = {'k1': df,
+ 'k2': df}
panel = Panel.from_dict(data, orient='minor')
@@ -1174,12 +1186,12 @@ def test_shift(self):
assert_panel_equal(result, expected)
def test_multiindex_get(self):
- ind = MultiIndex.from_tuples([('a', 1), ('a', 2), ('b', 1), ('b',2)],
+ ind = MultiIndex.from_tuples([('a', 1), ('a', 2), ('b', 1), ('b', 2)],
names=['first', 'second'])
- wp = Panel(np.random.random((4,5,5)),
- items=ind,
- major_axis=np.arange(5),
- minor_axis=np.arange(5))
+ wp = Panel(np.random.random((4, 5, 5)),
+ items=ind,
+ major_axis=np.arange(5),
+ minor_axis=np.arange(5))
f1 = wp['a']
f2 = wp.ix['a']
assert_panel_equal(f1, f2)
@@ -1198,7 +1210,7 @@ def test_multiindex_blocks(self):
f1 = wp['a']
self.assert_((f1.items == [1, 2]).all())
- f1 = wp[('b',1)]
+ f1 = wp[('b', 1)]
self.assert_((f1.columns == ['A', 'B', 'C', 'D']).all())
def test_repr_empty(self):
@@ -1207,9 +1219,9 @@ def test_repr_empty(self):
def test_rename(self):
mapper = {
- 'ItemA' : 'foo',
- 'ItemB' : 'bar',
- 'ItemC' : 'baz'
+ 'ItemA': 'foo',
+ 'ItemB': 'bar',
+ 'ItemC': 'baz'
}
renamed = self.panel.rename_axis(mapper, axis=0)
@@ -1239,14 +1251,14 @@ def test_group_agg(self):
assert(agged[2][0] == 4.5)
# test a function that doesn't aggregate
- f2 = lambda x: np.zeros((2,2))
+ f2 = lambda x: np.zeros((2, 2))
self.assertRaises(Exception, group_agg, values, bounds, f2)
def test_from_frame_level1_unsorted(self):
tuples = [('MSFT', 3), ('MSFT', 2), ('AAPL', 2),
('AAPL', 1), ('MSFT', 1)]
midx = MultiIndex.from_tuples(tuples)
- df = DataFrame(np.random.rand(5,4), index=midx)
+ df = DataFrame(np.random.rand(5, 4), index=midx)
p = df.to_panel()
assert_frame_equal(p.minor_xs(2), df.xs(2, level=1).sort_index())
@@ -1265,7 +1277,7 @@ def test_to_excel(self):
self.panel.to_excel(path)
reader = ExcelFile(path)
for item, df in self.panel.iterkv():
- recdf = reader.parse(str(item),index_col=0)
+ recdf = reader.parse(str(item), index_col=0)
assert_frame_equal(df, recdf)
os.remove(path)
@@ -1305,21 +1317,21 @@ def test_update(self):
[1.5, np.nan, 3.],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]],
- [[1.5, np.nan, 3.],
- [1.5, np.nan, 3.],
- [1.5, np.nan, 3.],
- [1.5, np.nan, 3.]]])
+ [[1.5, np.nan, 3.],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.]]])
other = Panel([[[3.6, 2., np.nan],
- [np.nan, np.nan, 7]]], items=[1])
+ [np.nan, np.nan, 7]]], items=[1])
pan.update(other)
expected = Panel([[[1.5, np.nan, 3.],
- [1.5, np.nan, 3.],
- [1.5, np.nan, 3.],
- [1.5, np.nan, 3.]],
- [[3.6, 2., 3],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.]],
+ [[3.6, 2., 3],
[1.5, np.nan, 7],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]]])
@@ -1328,27 +1340,27 @@ def test_update(self):
def test_update_from_dict(self):
pan = Panel({'one': DataFrame([[1.5, np.nan, 3],
- [1.5, np.nan, 3],
- [1.5, np.nan, 3.],
- [1.5, np.nan, 3.]]),
- 'two': DataFrame([[1.5, np.nan, 3.],
- [1.5, np.nan, 3.],
- [1.5, np.nan, 3.],
- [1.5, np.nan, 3.]])})
+ [1.5, np.nan, 3],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.]]),
+ 'two': DataFrame([[1.5, np.nan, 3.],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.]])})
other = {'two': DataFrame([[3.6, 2., np.nan],
- [np.nan, np.nan, 7]])}
+ [np.nan, np.nan, 7]])}
pan.update(other)
expected = Panel({'two': DataFrame([[3.6, 2., 3],
- [1.5, np.nan, 7],
- [1.5, np.nan, 3.],
- [1.5, np.nan, 3.]]),
+ [1.5, np.nan, 7],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.]]),
'one': DataFrame([[1.5, np.nan, 3.],
- [1.5, np.nan, 3.],
- [1.5, np.nan, 3.],
- [1.5, np.nan, 3.]])})
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.]])})
assert_panel_equal(pan, expected)
@@ -1357,10 +1369,10 @@ def test_update_nooverwrite(self):
[1.5, np.nan, 3.],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]],
- [[1.5, np.nan, 3.],
- [1.5, np.nan, 3.],
- [1.5, np.nan, 3.],
- [1.5, np.nan, 3.]]])
+ [[1.5, np.nan, 3.],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.]]])
other = Panel([[[3.6, 2., np.nan],
[np.nan, np.nan, 7]]], items=[1])
@@ -1368,10 +1380,10 @@ def test_update_nooverwrite(self):
pan.update(other, overwrite=False)
expected = Panel([[[1.5, np.nan, 3],
- [1.5, np.nan, 3],
- [1.5, np.nan, 3.],
- [1.5, np.nan, 3.]],
- [[1.5, 2., 3.],
+ [1.5, np.nan, 3],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.]],
+ [[1.5, 2., 3.],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]]])
@@ -1383,21 +1395,21 @@ def test_update_filtered(self):
[1.5, np.nan, 3.],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]],
- [[1.5, np.nan, 3.],
- [1.5, np.nan, 3.],
- [1.5, np.nan, 3.],
- [1.5, np.nan, 3.]]])
+ [[1.5, np.nan, 3.],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.]]])
other = Panel([[[3.6, 2., np.nan],
- [np.nan, np.nan, 7]]], items=[1])
+ [np.nan, np.nan, 7]]], items=[1])
pan.update(other, filter_func=lambda x: x > 2)
expected = Panel([[[1.5, np.nan, 3.],
- [1.5, np.nan, 3.],
- [1.5, np.nan, 3.],
- [1.5, np.nan, 3.]],
- [[1.5, np.nan, 3],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.]],
+ [[1.5, np.nan, 3],
[1.5, np.nan, 7],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]]])
@@ -1409,19 +1421,21 @@ def test_update_raise(self):
[1.5, np.nan, 3.],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]],
- [[1.5, np.nan, 3.],
- [1.5, np.nan, 3.],
- [1.5, np.nan, 3.],
- [1.5, np.nan, 3.]]])
+ [[1.5, np.nan, 3.],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.]]])
np.testing.assert_raises(Exception, pan.update, *(pan,),
- **{'raise_conflict': True})
+ **{'raise_conflict': True})
+
class TestLongPanel(unittest.TestCase):
"""
LongPanel no longer exists, but...
"""
_multiprocess_can_split_ = True
+
def setUp(self):
panel = tm.makePanel()
tm.add_nans(panel)
@@ -1546,13 +1560,13 @@ def test_axis_dummies(self):
self.assertEqual(len(major_dummies.columns),
len(self.panel.index.levels[0]))
- mapping = {'A' : 'one',
- 'B' : 'one',
- 'C' : 'two',
- 'D' : 'two'}
+ mapping = {'A': 'one',
+ 'B': 'one',
+ 'C': 'two',
+ 'D': 'two'}
transformed = make_axis_dummies(self.panel, 'minor',
- transform=mapping.get)
+ transform=mapping.get)
self.assertEqual(len(transformed.columns), 2)
self.assert_(np.array_equal(transformed.columns, ['one', 'two']))
@@ -1633,6 +1647,7 @@ def test_pivot(self):
# corner case, empty
df = pivot(np.array([]), np.array([]), np.array([]))
+
def test_monotonic():
pos = np.array([1, 2, 3, 5])
@@ -1646,13 +1661,14 @@ def test_monotonic():
assert not panelm._monotonic(neg2)
+
def test_panel_index():
- index = panelm.panel_index([1,2,3,4], [1,2,3])
- expected = MultiIndex.from_arrays([np.tile([1,2,3,4], 3),
- np.repeat([1,2,3], 4)])
+ index = panelm.panel_index([1, 2, 3, 4], [1, 2, 3])
+ expected = MultiIndex.from_arrays([np.tile([1, 2, 3, 4], 3),
+ np.repeat([1, 2, 3], 4)])
assert(index.equals(expected))
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tests/test_panel4d.py b/pandas/tests/test_panel4d.py
index 088e9a9fa137c..e0180f475ca45 100644
--- a/pandas/tests/test_panel4d.py
+++ b/pandas/tests/test_panel4d.py
@@ -24,11 +24,13 @@
assert_almost_equal)
import pandas.util.testing as tm
+
def add_nans(panel4d):
for l, label in enumerate(panel4d.labels):
panel = panel4d[label]
tm.add_nans(panel)
+
class SafeForLongAndSparse(object):
_multiprocess_can_split_ = True
@@ -139,6 +141,7 @@ def wrapper(x):
self.assertRaises(Exception, f, axis=obj.ndim)
+
class SafeForSparse(object):
_multiprocess_can_split_ = True
@@ -159,9 +162,9 @@ def test_get_axis(self):
def test_set_axis(self):
new_labels = Index(np.arange(len(self.panel4d.labels)))
- new_items = Index(np.arange(len(self.panel4d.items)))
- new_major = Index(np.arange(len(self.panel4d.major_axis)))
- new_minor = Index(np.arange(len(self.panel4d.minor_axis)))
+ new_items = Index(np.arange(len(self.panel4d.items)))
+ new_major = Index(np.arange(len(self.panel4d.major_axis)))
+ new_minor = Index(np.arange(len(self.panel4d.minor_axis)))
# ensure propagate to potentially prior-cached items too
label = self.panel4d['l1']
@@ -236,7 +239,7 @@ def test_select(self):
# select labels
result = p.select(lambda x: x in ('l1', 'l3'), axis='labels')
- expected = p.reindex(labels=['l1','l3'])
+ expected = p.reindex(labels=['l1', 'l3'])
self.assert_panel4d_equal(result, expected)
# select items
@@ -282,6 +285,7 @@ def test_abs(self):
expected = np.abs(df)
assert_frame_equal(result, expected)
+
class CheckIndexing(object):
_multiprocess_can_split_ = True
@@ -291,7 +295,7 @@ def test_getitem(self):
def test_delitem_and_pop(self):
expected = self.panel4d['l2']
- result = self.panel4d.pop('l2')
+ result = self.panel4d.pop('l2')
assert_panel_equal(expected, result)
self.assert_('l2' not in self.panel4d.labels)
@@ -335,20 +339,21 @@ def test_delitem_and_pop(self):
def test_setitem(self):
## LongPanel with one item
- #lp = self.panel.filter(['ItemA', 'ItemB']).to_frame()
- #self.assertRaises(Exception, self.panel.__setitem__,
+ # lp = self.panel.filter(['ItemA', 'ItemB']).to_frame()
+ # self.assertRaises(Exception, self.panel.__setitem__,
# 'ItemE', lp)
# Panel
- p = Panel(dict(ItemA = self.panel4d['l1']['ItemA'][2:].filter(items=['A', 'B'])))
+ p = Panel(dict(
+ ItemA=self.panel4d['l1']['ItemA'][2:].filter(items=['A', 'B'])))
self.panel4d['l4'] = p
self.panel4d['l5'] = p
p2 = self.panel4d['l4']
- assert_panel_equal(p, p2.reindex(items = p.items,
- major_axis = p.major_axis,
- minor_axis = p.minor_axis))
+ assert_panel_equal(p, p2.reindex(items=p.items,
+ major_axis=p.major_axis,
+ minor_axis=p.minor_axis))
# scalar
self.panel4d['lG'] = 1
@@ -368,8 +373,8 @@ def test_comparisons(self):
p1 = tm.makePanel4D()
p2 = tm.makePanel4D()
- tp = p1.reindex(labels = p1.labels + ['foo'])
- p = p1[p1.labels[0]]
+ tp = p1.reindex(labels=p1.labels + ['foo'])
+ p = p1[p1.labels[0]]
def test_comp(func):
result = func(p1, p2)
@@ -413,7 +418,7 @@ def test_major_xs(self):
ref = self.panel4d['l1']['ItemA']
idx = self.panel4d.major_axis[5]
- xs = self.panel4d.major_xs(idx)
+ xs = self.panel4d.major_xs(idx)
assert_series_equal(xs['l1'].T['ItemA'], ref.xs(idx))
@@ -468,9 +473,9 @@ def test_getitem_fancy_labels(self):
panel4d = self.panel4d
labels = panel4d.labels[[1, 0]]
- items = panel4d.items[[1, 0]]
- dates = panel4d.major_axis[::2]
- cols = ['D', 'C', 'F']
+ items = panel4d.items[[1, 0]]
+ dates = panel4d.major_axis[::2]
+ cols = ['D', 'C', 'F']
# all 4 specified
assert_panel4d_equal(panel4d.ix[labels, items, dates, cols],
@@ -508,15 +513,16 @@ def test_getitem_fancy_ints(self):
def test_getitem_fancy_xs(self):
raise nose.SkipTest
- #self.assertRaises(NotImplementedError, self.panel4d.major_xs)
- #self.assertRaises(NotImplementedError, self.panel4d.minor_xs)
+ # self.assertRaises(NotImplementedError, self.panel4d.major_xs)
+ # self.assertRaises(NotImplementedError, self.panel4d.minor_xs)
def test_get_value(self):
for label in self.panel4d.labels:
for item in self.panel4d.items:
for mjr in self.panel4d.major_axis[::2]:
for mnr in self.panel4d.minor_axis:
- result = self.panel4d.get_value(label, item, mjr, mnr)
+ result = self.panel4d.get_value(
+ label, item, mjr, mnr)
expected = self.panel4d[label][item][mnr][mjr]
assert_almost_equal(result, expected)
@@ -526,7 +532,8 @@ def test_set_value(self):
for mjr in self.panel4d.major_axis[::2]:
for mnr in self.panel4d.minor_axis:
self.panel4d.set_value(label, item, mjr, mnr, 1.)
- assert_almost_equal(self.panel4d[label][item][mnr][mjr], 1.)
+ assert_almost_equal(
+ self.panel4d[label][item][mnr][mjr], 1.)
# resize
res = self.panel4d.set_value('l4', 'ItemE', 'foo', 'bar', 1.5)
@@ -537,13 +544,14 @@ def test_set_value(self):
res3 = self.panel4d.set_value('l4', 'ItemE', 'foobar', 'baz', 5)
self.assert_(com.is_float_dtype(res3['l4'].values))
+
class TestPanel4d(unittest.TestCase, CheckIndexing, SafeForSparse,
SafeForLongAndSparse):
_multiprocess_can_split_ = True
@classmethod
- def assert_panel4d_equal(cls,x, y):
+ def assert_panel4d_equal(cls, x, y):
assert_panel4d_equal(x, y)
def setUp(self):
@@ -560,9 +568,9 @@ def test_constructor(self):
assert_panel4d_equal(panel4d, self.panel4d)
# strings handled prop
- #panel4d = Panel4D([[['foo', 'foo', 'foo',],
+ # panel4d = Panel4D([[['foo', 'foo', 'foo',],
# ['foo', 'foo', 'foo']]])
- #self.assert_(wp.values.dtype == np.object_)
+ # self.assert_(wp.values.dtype == np.object_)
vals = self.panel4d.values
@@ -613,34 +621,35 @@ def test_ctor_dict(self):
l1 = self.panel4d['l1']
l2 = self.panel4d['l2']
- d = {'A' : l1, 'B' : l2.ix[['ItemB'],:,:] }
- #d2 = {'A' : itema._series, 'B' : itemb[5:]._series}
- #d3 = {'A' : DataFrame(itema._series),
+ d = {'A': l1, 'B': l2.ix[['ItemB'], :, :]}
+ # d2 = {'A' : itema._series, 'B' : itemb[5:]._series}
+ # d3 = {'A' : DataFrame(itema._series),
# 'B' : DataFrame(itemb[5:]._series)}
panel4d = Panel4D(d)
- #wp2 = Panel.from_dict(d2) # nested Dict
- #wp3 = Panel.from_dict(d3)
- #self.assert_(wp.major_axis.equals(self.panel.major_axis))
+ # wp2 = Panel.from_dict(d2) # nested Dict
+ # wp3 = Panel.from_dict(d3)
+ # self.assert_(wp.major_axis.equals(self.panel.major_axis))
assert_panel_equal(panel4d['A'], self.panel4d['l1'])
- assert_frame_equal(panel4d.ix['B','ItemB',:,:], self.panel4d.ix['l2',['ItemB'],:,:]['ItemB'])
+ assert_frame_equal(panel4d.ix['B', 'ItemB', :, :],
+ self.panel4d.ix['l2', ['ItemB'], :, :]['ItemB'])
# intersect
- #wp = Panel.from_dict(d, intersect=True)
- #self.assert_(wp.major_axis.equals(itemb.index[5:]))
+ # wp = Panel.from_dict(d, intersect=True)
+ # self.assert_(wp.major_axis.equals(itemb.index[5:]))
# use constructor
- #assert_panel_equal(Panel(d), Panel.from_dict(d))
- #assert_panel_equal(Panel(d2), Panel.from_dict(d2))
- #assert_panel_equal(Panel(d3), Panel.from_dict(d3))
+ # assert_panel_equal(Panel(d), Panel.from_dict(d))
+ # assert_panel_equal(Panel(d2), Panel.from_dict(d2))
+ # assert_panel_equal(Panel(d3), Panel.from_dict(d3))
# cast
- #dcasted = dict((k, v.reindex(wp.major_axis).fillna(0))
+ # dcasted = dict((k, v.reindex(wp.major_axis).fillna(0))
# for k, v in d.iteritems())
- #result = Panel(dcasted, dtype=int)
- #expected = Panel(dict((k, v.astype(int))
+ # result = Panel(dcasted, dtype=int)
+ # expected = Panel(dict((k, v.astype(int))
# for k, v in dcasted.iteritems()))
- #assert_panel_equal(result, expected)
+ # assert_panel_equal(result, expected)
def test_constructor_dict_mixed(self):
data = dict((k, v.values) for k, v in self.panel4d.iterkv())
@@ -649,10 +658,10 @@ def test_constructor_dict_mixed(self):
self.assert_(result.major_axis.equals(exp_major))
result = Panel4D(data,
- labels = self.panel4d.labels,
- items = self.panel4d.items,
- major_axis = self.panel4d.major_axis,
- minor_axis = self.panel4d.minor_axis)
+ labels=self.panel4d.labels,
+ items=self.panel4d.items,
+ major_axis=self.panel4d.major_axis,
+ minor_axis=self.panel4d.minor_axis)
assert_panel4d_equal(result, self.panel4d)
data['l2'] = self.panel4d['l2']
@@ -667,14 +676,16 @@ def test_constructor_dict_mixed(self):
self.assertRaises(Exception, Panel4D, data)
def test_constructor_resize(self):
- data = self.panel4d._data
- labels= self.panel4d.labels[:-1]
+ data = self.panel4d._data
+ labels = self.panel4d.labels[:-1]
items = self.panel4d.items[:-1]
major = self.panel4d.major_axis[:-1]
minor = self.panel4d.minor_axis[:-1]
- result = Panel4D(data, labels=labels, items=items, major_axis=major, minor_axis=minor)
- expected = self.panel4d.reindex(labels=labels, items=items, major=major, minor=minor)
+ result = Panel4D(data, labels=labels, items=items,
+ major_axis=major, minor_axis=minor)
+ expected = self.panel4d.reindex(
+ labels=labels, items=items, major=major, minor=minor)
assert_panel4d_equal(result, expected)
result = Panel4D(data, items=items, major_axis=major)
@@ -718,7 +729,7 @@ def test_reindex(self):
ref = self.panel4d['l2']
# labels
- result = self.panel4d.reindex(labels=['l1','l2'])
+ result = self.panel4d.reindex(labels=['l1', 'l2'])
assert_panel_equal(result['l2'], ref)
# items
@@ -728,7 +739,8 @@ def test_reindex(self):
# major
new_major = list(self.panel4d.major_axis[:10])
result = self.panel4d.reindex(major=new_major)
- assert_frame_equal(result['l2']['ItemB'], ref['ItemB'].reindex(index=new_major))
+ assert_frame_equal(
+ result['l2']['ItemB'], ref['ItemB'].reindex(index=new_major))
# raise exception put both major and major_axis
self.assertRaises(Exception, self.panel4d.reindex,
@@ -737,12 +749,13 @@ def test_reindex(self):
# minor
new_minor = list(self.panel4d.minor_axis[:2])
result = self.panel4d.reindex(minor=new_minor)
- assert_frame_equal(result['l2']['ItemB'], ref['ItemB'].reindex(columns=new_minor))
+ assert_frame_equal(
+ result['l2']['ItemB'], ref['ItemB'].reindex(columns=new_minor))
result = self.panel4d.reindex(labels=self.panel4d.labels,
- items =self.panel4d.items,
- major =self.panel4d.major_axis,
- minor =self.panel4d.minor_axis)
+ items=self.panel4d.items,
+ major=self.panel4d.major_axis,
+ minor=self.panel4d.minor_axis)
assert(result.labels is self.panel4d.labels)
assert(result.items is self.panel4d.items)
@@ -758,19 +771,20 @@ def test_reindex(self):
larger = smaller.reindex(major=self.panel4d.major_axis,
method='pad')
- assert_panel_equal(larger.ix[:,:,self.panel4d.major_axis[1],:],
- smaller.ix[:,:,smaller_major[0],:])
+ assert_panel_equal(larger.ix[:, :, self.panel4d.major_axis[1], :],
+ smaller.ix[:, :, smaller_major[0], :])
# don't necessarily copy
- result = self.panel4d.reindex(major=self.panel4d.major_axis, copy=False)
+ result = self.panel4d.reindex(
+ major=self.panel4d.major_axis, copy=False)
self.assert_(result is self.panel4d)
def test_reindex_like(self):
# reindex_like
smaller = self.panel4d.reindex(labels=self.panel4d.labels[:-1],
- items =self.panel4d.items[:-1],
- major =self.panel4d.major_axis[:-1],
- minor =self.panel4d.minor_axis[:-1])
+ items=self.panel4d.items[:-1],
+ major=self.panel4d.major_axis[:-1],
+ minor=self.panel4d.minor_axis[:-1])
smaller_like = self.panel4d.reindex_like(smaller)
assert_panel4d_equal(smaller, smaller_like)
@@ -793,7 +807,7 @@ def test_take(self):
def test_sort_index(self):
import random
- rlabels= list(self.panel4d.labels)
+ rlabels = list(self.panel4d.labels)
ritems = list(self.panel4d.items)
rmajor = list(self.panel4d.major_axis)
rminor = list(self.panel4d.minor_axis)
@@ -807,18 +821,18 @@ def test_sort_index(self):
assert_panel4d_equal(sorted_panel4d, self.panel4d)
# descending
- #random_order = self.panel.reindex(items=ritems)
- #sorted_panel = random_order.sort_index(axis=0, ascending=False)
- #assert_panel_equal(sorted_panel,
+ # random_order = self.panel.reindex(items=ritems)
+ # sorted_panel = random_order.sort_index(axis=0, ascending=False)
+ # assert_panel_equal(sorted_panel,
# self.panel.reindex(items=self.panel.items[::-1]))
- #random_order = self.panel.reindex(major=rmajor)
- #sorted_panel = random_order.sort_index(axis=1)
- #assert_panel_equal(sorted_panel, self.panel)
+ # random_order = self.panel.reindex(major=rmajor)
+ # sorted_panel = random_order.sort_index(axis=1)
+ # assert_panel_equal(sorted_panel, self.panel)
- #random_order = self.panel.reindex(minor=rminor)
- #sorted_panel = random_order.sort_index(axis=2)
- #assert_panel_equal(sorted_panel, self.panel)
+ # random_order = self.panel.reindex(minor=rminor)
+ # sorted_panel = random_order.sort_index(axis=2)
+ # assert_panel_equal(sorted_panel, self.panel)
def test_fillna(self):
filled = self.panel4d.fillna(0)
@@ -840,10 +854,10 @@ def test_fillna(self):
assert_panel4d_equal(filled, empty)
def test_swapaxes(self):
- result = self.panel4d.swapaxes('labels','items')
+ result = self.panel4d.swapaxes('labels', 'items')
self.assert_(result.items is self.panel4d.labels)
- result = self.panel4d.swapaxes('labels','minor')
+ result = self.panel4d.swapaxes('labels', 'minor')
self.assert_(result.labels is self.panel4d.minor_axis)
result = self.panel4d.swapaxes('items', 'minor')
@@ -985,9 +999,9 @@ def test_repr_empty(self):
def test_rename(self):
mapper = {
- 'l1' : 'foo',
- 'l2' : 'bar',
- 'l3' : 'baz'
+ 'l1': 'foo',
+ 'l2': 'bar',
+ 'l3': 'baz'
}
renamed = self.panel4d.rename_axis(mapper, axis=0)
@@ -1017,7 +1031,7 @@ def test_group_agg(self):
assert(agged[2][0] == 4.5)
# test a function that doesn't aggregate
- f2 = lambda x: np.zeros((2,2))
+ f2 = lambda x: np.zeros((2, 2))
self.assertRaises(Exception, group_agg, values, bounds, f2)
def test_from_frame_level1_unsorted(self):
@@ -1050,6 +1064,6 @@ def test_to_excel(self):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure',
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure',
'--with-timer'],
exit=False)
diff --git a/pandas/tests/test_panelnd.py b/pandas/tests/test_panelnd.py
index 1debfd54aac3c..5675cfec58678 100644
--- a/pandas/tests/test_panelnd.py
+++ b/pandas/tests/test_panelnd.py
@@ -18,6 +18,7 @@
assert_almost_equal)
import pandas.util.testing as tm
+
class TestPanelnd(unittest.TestCase):
def setUp(self):
@@ -27,85 +28,84 @@ def test_4d_construction(self):
# create a 4D
Panel4D = panelnd.create_nd_panel_factory(
- klass_name = 'Panel4D',
- axis_orders = ['labels','items','major_axis','minor_axis'],
- axis_slices = { 'items' : 'items', 'major_axis' : 'major_axis',
- 'minor_axis' : 'minor_axis' },
- slicer = Panel,
- axis_aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
- stat_axis = 2)
+ klass_name='Panel4D',
+ axis_orders=['labels', 'items', 'major_axis', 'minor_axis'],
+ axis_slices={'items': 'items', 'major_axis': 'major_axis',
+ 'minor_axis': 'minor_axis'},
+ slicer=Panel,
+ axis_aliases={'major': 'major_axis', 'minor': 'minor_axis'},
+ stat_axis=2)
- p4d = Panel4D(dict(L1 = tm.makePanel(), L2 = tm.makePanel()))
+ p4d = Panel4D(dict(L1=tm.makePanel(), L2=tm.makePanel()))
def test_4d_construction_alt(self):
# create a 4D
Panel4D = panelnd.create_nd_panel_factory(
- klass_name = 'Panel4D',
- axis_orders = ['labels','items','major_axis','minor_axis'],
- axis_slices = { 'items' : 'items', 'major_axis' : 'major_axis',
- 'minor_axis' : 'minor_axis' },
- slicer = 'Panel',
- axis_aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
- stat_axis = 2)
+ klass_name='Panel4D',
+ axis_orders=['labels', 'items', 'major_axis', 'minor_axis'],
+ axis_slices={'items': 'items', 'major_axis': 'major_axis',
+ 'minor_axis': 'minor_axis'},
+ slicer='Panel',
+ axis_aliases={'major': 'major_axis', 'minor': 'minor_axis'},
+ stat_axis=2)
- p4d = Panel4D(dict(L1 = tm.makePanel(), L2 = tm.makePanel()))
+ p4d = Panel4D(dict(L1=tm.makePanel(), L2=tm.makePanel()))
def test_4d_construction_error(self):
# create a 4D
self.assertRaises(Exception,
panelnd.create_nd_panel_factory,
- klass_name = 'Panel4D',
- axis_orders = ['labels', 'items', 'major_axis',
- 'minor_axis'],
- axis_slices = { 'items' : 'items',
- 'major_axis' : 'major_axis',
- 'minor_axis' : 'minor_axis' },
- slicer = 'foo',
- axis_aliases = { 'major' : 'major_axis',
- 'minor' : 'minor_axis' },
- stat_axis = 2)
-
+ klass_name='Panel4D',
+ axis_orders=['labels', 'items', 'major_axis',
+ 'minor_axis'],
+ axis_slices={'items': 'items',
+ 'major_axis': 'major_axis',
+ 'minor_axis': 'minor_axis'},
+ slicer='foo',
+ axis_aliases={'major': 'major_axis',
+ 'minor': 'minor_axis'},
+ stat_axis=2)
def test_5d_construction(self):
# create a 4D
Panel4D = panelnd.create_nd_panel_factory(
- klass_name = 'Panel4D',
- axis_orders = ['labels1','items','major_axis','minor_axis'],
- axis_slices = { 'items' : 'items', 'major_axis' : 'major_axis',
- 'minor_axis' : 'minor_axis' },
- slicer = Panel,
- axis_aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
- stat_axis = 2)
+ klass_name='Panel4D',
+ axis_orders=['labels1', 'items', 'major_axis', 'minor_axis'],
+ axis_slices={'items': 'items', 'major_axis': 'major_axis',
+ 'minor_axis': 'minor_axis'},
+ slicer=Panel,
+ axis_aliases={'major': 'major_axis', 'minor': 'minor_axis'},
+ stat_axis=2)
- p4d = Panel4D(dict(L1 = tm.makePanel(), L2 = tm.makePanel()))
+ p4d = Panel4D(dict(L1=tm.makePanel(), L2=tm.makePanel()))
# create a 5D
Panel5D = panelnd.create_nd_panel_factory(
- klass_name = 'Panel5D',
- axis_orders = [ 'cool1', 'labels1', 'items', 'major_axis',
- 'minor_axis'],
- axis_slices = { 'labels1' : 'labels1', 'items' : 'items',
- 'major_axis' : 'major_axis',
- 'minor_axis' : 'minor_axis' },
- slicer = Panel4D,
- axis_aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
- stat_axis = 2)
-
- p5d = Panel5D(dict(C1 = p4d))
+ klass_name='Panel5D',
+ axis_orders=['cool1', 'labels1', 'items', 'major_axis',
+ 'minor_axis'],
+ axis_slices={'labels1': 'labels1', 'items': 'items',
+ 'major_axis': 'major_axis',
+ 'minor_axis': 'minor_axis'},
+ slicer=Panel4D,
+ axis_aliases={'major': 'major_axis', 'minor': 'minor_axis'},
+ stat_axis=2)
+
+ p5d = Panel5D(dict(C1=p4d))
# slice back to 4d
- results = p5d.ix['C1',:,:,0:3,:]
- expected = p4d.ix[:,:,0:3,:]
+ results = p5d.ix['C1', :, :, 0:3, :]
+ expected = p4d.ix[:, :, 0:3, :]
assert_panel_equal(results['L1'], expected['L1'])
# test a transpose
- #results = p5d.transpose(1,2,3,4,0)
- #expected =
+ # results = p5d.transpose(1,2,3,4,0)
+ # expected =
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tests/test_reshape.py b/pandas/tests/test_reshape.py
index a921518c25c95..278e745c7d312 100644
--- a/pandas/tests/test_reshape.py
+++ b/pandas/tests/test_reshape.py
@@ -19,6 +19,7 @@
_multiprocess_can_split_ = True
+
def test_melt():
df = tm.makeTimeDataFrame()[:10]
df['id1'] = (df['A'] > 0).astype(int)
@@ -32,30 +33,32 @@ def test_melt():
molten5 = melt(df, id_vars=['id1', 'id2'],
value_vars=['A', 'B'])
+
def test_convert_dummies():
- df = DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
- 'foo', 'bar', 'foo', 'foo'],
- 'B' : ['one', 'one', 'two', 'three',
- 'two', 'two', 'one', 'three'],
- 'C' : np.random.randn(8),
- 'D' : np.random.randn(8)})
+ df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
+ 'foo', 'bar', 'foo', 'foo'],
+ 'B': ['one', 'one', 'two', 'three',
+ 'two', 'two', 'one', 'three'],
+ 'C': np.random.randn(8),
+ 'D': np.random.randn(8)})
result = convert_dummies(df, ['A', 'B'])
result2 = convert_dummies(df, ['A', 'B'], prefix_sep='.')
- expected = DataFrame({'A_foo' : [1, 0, 1, 0, 1, 0, 1, 1],
- 'A_bar' : [0, 1, 0, 1, 0, 1, 0, 0],
- 'B_one' : [1, 1, 0, 0, 0, 0, 1, 0],
- 'B_two' : [0, 0, 1, 0, 1, 1, 0, 0],
- 'B_three' : [0, 0, 0, 1, 0, 0, 0, 1],
- 'C' : df['C'].values,
- 'D' : df['D'].values},
+ expected = DataFrame({'A_foo': [1, 0, 1, 0, 1, 0, 1, 1],
+ 'A_bar': [0, 1, 0, 1, 0, 1, 0, 0],
+ 'B_one': [1, 1, 0, 0, 0, 0, 1, 0],
+ 'B_two': [0, 0, 1, 0, 1, 1, 0, 0],
+ 'B_three': [0, 0, 0, 1, 0, 0, 0, 1],
+ 'C': df['C'].values,
+ 'D': df['D'].values},
columns=result.columns, dtype=float)
expected2 = expected.rename(columns=lambda x: x.replace('_', '.'))
tm.assert_frame_equal(result, expected)
tm.assert_frame_equal(result2, expected2)
+
class Test_lreshape(unittest.TestCase):
def test_pairs(self):
@@ -130,5 +133,5 @@ def test_pairs(self):
if __name__ == '__main__':
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 0ac32371cdad7..335d747e960fb 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -27,6 +27,7 @@
from pandas.util.testing import assert_series_equal, assert_almost_equal
import pandas.util.testing as tm
+
def _skip_if_no_scipy():
try:
import scipy.stats
@@ -39,6 +40,7 @@ def _skip_if_no_scipy():
JOIN_TYPES = ['inner', 'outer', 'left', 'right']
+
class CheckNameIntegration(object):
_multiprocess_can_split_ = True
@@ -109,7 +111,7 @@ def test_multilevel_name_print(self):
labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
[0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
names=['first', 'second'])
- s = Series(range(0,len(index)), index=index, name='sth')
+ s = Series(range(0, len(index)), index=index, name='sth')
expected = ["first second",
"foo one 0",
" two 1",
@@ -146,7 +148,7 @@ def test_name_printing(self):
s.name = None
self.assert_(not "Name:" in repr(s))
# test big series (diff code path)
- s = Series(range(0,1000))
+ s = Series(range(0, 1000))
s.name = "test"
self.assert_("Name: test" in repr(s))
s.name = None
@@ -174,6 +176,7 @@ def test_to_sparse_pass_name(self):
result = self.ts.to_sparse()
self.assertEquals(result.name, self.ts.name)
+
class TestNanops(unittest.TestCase):
_multiprocess_can_split_ = True
@@ -216,15 +219,17 @@ def test_sum_zero(self):
self.assert_((df.sum(1) == 0).all())
def test_nansum_buglet(self):
- s = Series([1.0, np.nan], index=[0,1])
+ s = Series([1.0, np.nan], index=[0, 1])
result = np.nansum(s)
assert_almost_equal(result, 1)
+
class SafeForSparse(object):
pass
_ts = tm.makeTimeSeries()
+
class TestSeries(unittest.TestCase, CheckNameIntegration):
_multiprocess_can_split_ = True
@@ -354,31 +359,31 @@ def test_constructor_dtype_nocast(self):
def test_constructor_dtype_datetime64(self):
import pandas.tslib as tslib
- s = Series(tslib.iNaT,dtype='M8[ns]',index=range(5))
+ s = Series(tslib.iNaT, dtype='M8[ns]', index=range(5))
self.assert_(isnull(s).all() == True)
- s = Series(tslib.NaT,dtype='M8[ns]',index=range(5))
+ s = Series(tslib.NaT, dtype='M8[ns]', index=range(5))
self.assert_(isnull(s).all() == True)
- s = Series(nan,dtype='M8[ns]',index=range(5))
+ s = Series(nan, dtype='M8[ns]', index=range(5))
self.assert_(isnull(s).all() == True)
- s = Series([ datetime(2001,1,2,0,0), tslib.iNaT ],dtype='M8[ns]')
+ s = Series([datetime(2001, 1, 2, 0, 0), tslib.iNaT], dtype='M8[ns]')
self.assert_(isnull(s[1]) == True)
self.assert_(s.dtype == 'M8[ns]')
- s = Series([ datetime(2001,1,2,0,0), nan ],dtype='M8[ns]')
+ s = Series([datetime(2001, 1, 2, 0, 0), nan], dtype='M8[ns]')
self.assert_(isnull(s[1]) == True)
self.assert_(s.dtype == 'M8[ns]')
def test_constructor_dict(self):
- d = {'a' : 0., 'b' : 1., 'c' : 2.}
+ d = {'a': 0., 'b': 1., 'c': 2.}
result = Series(d, index=['b', 'c', 'd', 'a'])
expected = Series([1, 2, nan, 0], index=['b', 'c', 'd', 'a'])
assert_series_equal(result, expected)
pidx = tm.makePeriodIndex(100)
- d = {pidx[0] : 0, pidx[1] : 1}
+ d = {pidx[0]: 0, pidx[1]: 1}
result = Series(d, index=pidx)
expected = Series(np.nan, pidx)
expected.ix[0] = 0
@@ -407,20 +412,20 @@ def test_constructor_set(self):
self.assertRaises(TypeError, Series, values)
def test_fromDict(self):
- data = {'a' : 0, 'b' : 1, 'c' : 2, 'd' : 3}
+ data = {'a': 0, 'b': 1, 'c': 2, 'd': 3}
series = Series(data)
self.assert_(tm.is_sorted(series.index))
- data = {'a' : 0, 'b' : '1', 'c' : '2', 'd' : datetime.now()}
+ data = {'a': 0, 'b': '1', 'c': '2', 'd': datetime.now()}
series = Series(data)
self.assert_(series.dtype == np.object_)
- data = {'a' : 0, 'b' : '1', 'c' : '2', 'd' : '3'}
+ data = {'a': 0, 'b': '1', 'c': '2', 'd': '3'}
series = Series(data)
self.assert_(series.dtype == np.object_)
- data = {'a' : '0', 'b' : '1'}
+ data = {'a': '0', 'b': '1'}
series = Series(data, dtype=float)
self.assert_(series.dtype == np.float64)
@@ -470,7 +475,7 @@ def _check_all_orients(series, dtype=None):
_check_all_orients(self.ts)
# dtype
- s = Series(range(6), index=['a','b','c','d','e','f'])
+ s = Series(range(6), index=['a', 'b', 'c', 'd', 'e', 'f'])
_check_all_orients(Series(s, dtype=np.float64), dtype=np.float64)
_check_all_orients(Series(s, dtype=np.int), dtype=np.int)
@@ -596,8 +601,8 @@ def test_getitem_int64(self):
self.assertEqual(self.ts[idx], self.ts[5])
def test_getitem_fancy(self):
- slice1 = self.series[[1,2,3]]
- slice2 = self.objSeries[[1,2,3]]
+ slice1 = self.series[[1, 2, 3]]
+ slice2 = self.objSeries[[1, 2, 3]]
self.assertEqual(self.series.index[2], slice1.index[1])
self.assertEqual(self.objSeries.index[2], slice2.index[1])
self.assertEqual(self.series[2], slice1[1])
@@ -673,7 +678,7 @@ def test_getitem_out_of_bounds(self):
def test_getitem_setitem_integers(self):
# caused bug without test
- s = Series([1,2,3], ['a','b','c'])
+ s = Series([1, 2, 3], ['a', 'b', 'c'])
self.assertEqual(s.ix[0], s['a'])
s.ix[0] = 5
@@ -700,7 +705,7 @@ def test_setitem_ambiguous_keyerror(self):
def test_setitem_float_labels(self):
# note labels are floats
- s = Series(['a','b','c'],index=[0,0.5,1])
+ s = Series(['a', 'b', 'c'], index=[0, 0.5, 1])
tmp = s.copy()
s.ix[1] = 'zoo'
@@ -723,7 +728,7 @@ def test_slice(self):
self.assertEqual(numSlice.index[1], self.series.index[11])
self.assert_(tm.equalContents(numSliceEnd,
- np.array(self.series)[-10:]))
+ np.array(self.series)[-10:]))
# test return view
sl = self.series[10:20]
@@ -732,7 +737,7 @@ def test_slice(self):
def test_slice_can_reorder_not_uniquely_indexed(self):
s = Series(1, index=['a', 'a', 'b', 'b', 'c'])
- result = s[::-1] # it works!
+ result = s[::-1] # it works!
def test_slice_float_get_set(self):
result = self.ts[4.0:10.0]
@@ -746,12 +751,12 @@ def test_slice_float_get_set(self):
self.assertRaises(TypeError, self.ts.__setitem__, slice(4.5, 10.0), 0)
def test_slice_floats2(self):
- s = Series(np.random.rand(10), index=np.arange(10,20,dtype=float))
+ s = Series(np.random.rand(10), index=np.arange(10, 20, dtype=float))
self.assert_(len(s.ix[12.0:]) == 8)
self.assert_(len(s.ix[12.5:]) == 7)
- i = np.arange(10,20,dtype=float)
+ i = np.arange(10, 20, dtype=float)
i[2] = 12.2
s.index = i
self.assert_(len(s.ix[12.0:]) == 8)
@@ -783,7 +788,7 @@ def test_slice_float64(self):
def test_setitem(self):
self.ts[self.ts.index[5]] = np.NaN
- self.ts[[1,2,17]] = np.NaN
+ self.ts[[1, 2, 17]] = np.NaN
self.ts[6] = np.NaN
self.assert_(np.isnan(self.ts[6]))
self.assert_(np.isnan(self.ts[2]))
@@ -841,14 +846,14 @@ def test_reshape_non_2d(self):
def test_reshape_2d_return_array(self):
x = Series(np.random.random(201), name='x')
- result = x.reshape((-1,1))
+ result = x.reshape((-1, 1))
self.assert_(not isinstance(result, Series))
- result2 = np.reshape(x, (-1,1))
+ result2 = np.reshape(x, (-1, 1))
self.assert_(not isinstance(result, Series))
result = x[:, None]
- expected = x.reshape((-1,1))
+ expected = x.reshape((-1, 1))
assert_almost_equal(result, expected)
def test_basic_getitem_with_labels(self):
@@ -912,7 +917,7 @@ def test_basic_setitem_with_labels(self):
self.assertRaises(Exception, s.__setitem__, arr_inds_notfound, 0)
def test_ix_getitem(self):
- inds = self.series.index[[3,4,7]]
+ inds = self.series.index[[3, 4, 7]]
assert_series_equal(self.series.ix[inds], self.series.reindex(inds))
assert_series_equal(self.series.ix[5::2], self.series[5::2])
@@ -972,14 +977,14 @@ def test_where(self):
s = Series(np.random.randn(5))
cond = s > 0
- rs = s.where(cond).dropna()
+ rs = s.where(cond).dropna()
rs2 = s[cond]
assert_series_equal(rs, rs2)
- rs = s.where(cond,-s)
+ rs = s.where(cond, -s)
assert_series_equal(rs, s.abs())
- rs = s.where(cond)
+ rs = s.where(cond)
assert(s.shape == rs.shape)
assert(rs is not s)
@@ -1006,7 +1011,6 @@ def test_where_inplace(self):
rs.where(cond, -s, inplace=True)
assert_series_equal(rs, s.where(cond, -s))
-
def test_mask(self):
s = Series(np.random.randn(5))
cond = s > 0
@@ -1015,13 +1019,13 @@ def test_mask(self):
assert_series_equal(rs, s.mask(~cond))
def test_ix_setitem(self):
- inds = self.series.index[[3,4,7]]
+ inds = self.series.index[[3, 4, 7]]
result = self.series.copy()
result.ix[inds] = 5
expected = self.series.copy()
- expected[[3,4,7]] = 5
+ expected[[3, 4, 7]] = 5
assert_series_equal(result, expected)
result.ix[5:10] = 10
@@ -1031,7 +1035,7 @@ def test_ix_setitem(self):
# set slice with indices
d1, d2 = self.series.index[[5, 15]]
result.ix[d1:d2] = 6
- expected[5:16] = 6 # because it's inclusive
+ expected[5:16] = 6 # because it's inclusive
assert_series_equal(result, expected)
# set index value
@@ -1118,18 +1122,18 @@ def test_repr(self):
rep_str = repr(ser)
self.assert_("Name: 0" in rep_str)
- ser = Series(["a\n\r\tb"],name=["a\n\r\td"],index=["a\n\r\tf"])
+ ser = Series(["a\n\r\tb"], name=["a\n\r\td"], index=["a\n\r\tf"])
self.assertFalse("\t" in repr(ser))
self.assertFalse("\r" in repr(ser))
self.assertFalse("a\n" in repr(ser))
def test_tidy_repr(self):
- a=Series([u"\u05d0"]*1000)
- a.name= 'title1'
+ a = Series([u"\u05d0"] * 1000)
+ a.name = 'title1'
repr(a) # should not raise exception
def test_repr_bool_fails(self):
- s = Series([DataFrame(np.random.randn(2,2)) for i in range(5)])
+ s = Series([DataFrame(np.random.randn(2, 2)) for i in range(5)])
import sys
@@ -1152,7 +1156,7 @@ def test_repr_name_iterable_indexable(self):
s.name = (u"\u05d0",) * 2
repr(s)
- def test_repr_should_return_str (self):
+ def test_repr_should_return_str(self):
"""
http://docs.python.org/py3k/reference/datamodel.html#object.__repr__
http://docs.python.org/reference/datamodel.html#object.__repr__
@@ -1161,27 +1165,25 @@ def test_repr_should_return_str (self):
(str on py2.x, str (unicode) on py3)
"""
- data=[8,5,3,5]
- index1=[u"\u03c3",u"\u03c4",u"\u03c5",u"\u03c6"]
- df=Series(data,index=index1)
- self.assertTrue(type(df.__repr__() == str)) # both py2 / 3
-
+ data = [8, 5, 3, 5]
+ index1 = [u"\u03c3", u"\u03c4", u"\u03c5", u"\u03c6"]
+ df = Series(data, index=index1)
+ self.assertTrue(type(df.__repr__() == str)) # both py2 / 3
def test_unicode_string_with_unicode(self):
- df = Series([u"\u05d0"],name=u"\u05d1")
+ df = Series([u"\u05d0"], name=u"\u05d1")
if py3compat.PY3:
str(df)
else:
unicode(df)
def test_bytestring_with_unicode(self):
- df = Series([u"\u05d0"],name=u"\u05d1")
+ df = Series([u"\u05d0"], name=u"\u05d1")
if py3compat.PY3:
bytes(df)
else:
str(df)
-
def test_timeseries_repr_object_dtype(self):
index = Index([datetime(2000, 1, 1) + timedelta(i)
for i in range(1000)], dtype=object)
@@ -1191,7 +1193,7 @@ def test_timeseries_repr_object_dtype(self):
ts = tm.makeTimeSeries(1000)
self.assert_(repr(ts).splitlines()[-1].startswith('Freq:'))
- ts2 = ts.ix[np.random.randint(0, len(ts)-1, 400)]
+ ts2 = ts.ix[np.random.randint(0, len(ts) - 1, 400)]
repr(ts).splitlines()[-1]
def test_iter(self):
@@ -1282,7 +1284,7 @@ def test_skew(self):
_skip_if_no_scipy()
from scipy.stats import skew
- alt =lambda x: skew(x, bias=False)
+ alt = lambda x: skew(x, bias=False)
self._check_stat_op('skew', alt)
def test_kurt(self):
@@ -1326,8 +1328,8 @@ def test_cummin(self):
self.assert_(np.array_equal(self.ts.cummin(),
np.minimum.accumulate(np.array(self.ts))))
ts = self.ts.copy()
- ts[::2] = np.NaN
- result = ts.cummin()[1::2]
+ ts[::2] = np.NaN
+ result = ts.cummin()[1::2]
expected = np.minimum.accumulate(ts.valid())
self.assert_(np.array_equal(result, expected))
@@ -1336,8 +1338,8 @@ def test_cummax(self):
self.assert_(np.array_equal(self.ts.cummax(),
np.maximum.accumulate(np.array(self.ts))))
ts = self.ts.copy()
- ts[::2] = np.NaN
- result = ts.cummax()[1::2]
+ ts[::2] = np.NaN
+ result = ts.cummax()[1::2]
expected = np.maximum.accumulate(ts.valid())
self.assert_(np.array_equal(result, expected))
@@ -1410,7 +1412,7 @@ def test_round(self):
self.assertEqual(result.name, self.ts.name)
def test_prod_numpy16_bug(self):
- s = Series([1., 1., 1.] , index=range(3))
+ s = Series([1., 1., 1.], index=range(3))
result = s.prod()
self.assert_(not isinstance(result, Series))
@@ -1439,8 +1441,8 @@ def test_describe_percentiles(self):
def test_describe_objects(self):
s = Series(['a', 'b', 'b', np.nan, np.nan, np.nan, 'c', 'd', 'a', 'a'])
result = s.describe()
- expected = Series({'count' : 7, 'unique' : 4,
- 'top' : 'a', 'freq' : 3}, index=result.index)
+ expected = Series({'count': 7, 'unique': 4,
+ 'top': 'a', 'freq': 3}, index=result.index)
assert_series_equal(result, expected)
dt = list(self.ts.index)
@@ -1449,10 +1451,10 @@ def test_describe_objects(self):
rs = ser.describe()
min_date = min(dt)
max_date = max(dt)
- xp = Series({'count' : len(dt),
- 'unique' : len(self.ts.index),
- 'first' : min_date, 'last' : max_date, 'freq' : 2,
- 'top' : min_date}, index=rs.index)
+ xp = Series({'count': len(dt),
+ 'unique': len(self.ts.index),
+ 'first': min_date, 'last': max_date, 'freq': 2,
+ 'top': min_date}, index=rs.index)
assert_series_equal(rs, xp)
def test_describe_empty(self):
@@ -1558,7 +1560,7 @@ def check_comparators(series, other):
def test_operators_empty_int_corner(self):
s1 = Series([], [], dtype=np.int32)
- s2 = Series({'x' : 0.})
+ s2 = Series({'x': 0.})
# it works!
_ = s1 * s2
@@ -1575,7 +1577,7 @@ def test_operators_na_handling(self):
from decimal import Decimal
from datetime import date
s = Series([Decimal('1.3'), Decimal('2.3')],
- index=[date(2012,1,1), date(2012,1,2)])
+ index=[date(2012, 1, 1), date(2012, 1, 2)])
result = s + s.shift(1)
self.assert_(isnull(result[0]))
@@ -1700,7 +1702,7 @@ def test_between(self):
def test_setitem_na_exception(self):
def testme1():
- s = Series([2,3,4,5,6,7,8,9,10])
+ s = Series([2, 3, 4, 5, 6, 7, 8, 9, 10])
s[::2] = np.nan
def testme2():
@@ -1716,19 +1718,19 @@ def testme3():
self.assertRaises(Exception, testme3)
def test_scalar_na_cmp_corners(self):
- s = Series([2,3,4,5,6,7,8,9,10])
+ s = Series([2, 3, 4, 5, 6, 7, 8, 9, 10])
def tester(a, b):
return a & b
- self.assertRaises(ValueError, tester, s, datetime(2005,1,1))
+ self.assertRaises(ValueError, tester, s, datetime(2005, 1, 1))
- s = Series([2,3,4,5,6,7,8,9,datetime(2005,1,1)])
+ s = Series([2, 3, 4, 5, 6, 7, 8, 9, datetime(2005, 1, 1)])
s[::2] = np.nan
assert_series_equal(tester(s, list(s)), s)
- d = DataFrame({'A':s})
+ d = DataFrame({'A': s})
self.assertRaises(TypeError, tester, s, d)
def test_idxmin(self):
@@ -1826,9 +1828,9 @@ def test_series_frame_radd_bug(self):
expected = vals.map(lambda x: 'foo_' + x)
assert_series_equal(result, expected)
- frame = DataFrame({'vals' : vals})
+ frame = DataFrame({'vals': vals})
result = 'foo_' + frame
- expected = DataFrame({'vals' : vals.map(lambda x: 'foo_' + x)})
+ expected = DataFrame({'vals': vals.map(lambda x: 'foo_' + x)})
tm.assert_frame_equal(result, expected)
# really raise this time
@@ -1841,7 +1843,7 @@ def test_operators_frame(self):
sys.stderr = buf
# rpow does not work with DataFrame
try:
- df = DataFrame({'A' : self.ts})
+ df = DataFrame({'A': self.ts})
tm.assert_almost_equal(self.ts + self.ts, (self.ts + df)['A'])
tm.assert_almost_equal(self.ts ** self.ts, (self.ts ** df)['A'])
@@ -1990,12 +1992,12 @@ def test_corr_rank(self):
raise nose.SkipTest
# results from R
- A = Series([-0.89926396, 0.94209606, -1.03289164, -0.95445587,
+ A = Series([-0.89926396, 0.94209606, -1.03289164, -0.95445587,
0.76910310, -0.06430576, -2.09704447, 0.40660407,
- -0.89926396, 0.94209606])
- B = Series([-1.01270225, -0.62210117, -1.56895827, 0.59592943,
- -0.01680292, 1.17258718, -1.06009347, -0.10222060,
- -0.89076239, 0.89372375])
+ -0.89926396, 0.94209606])
+ B = Series([-1.01270225, -0.62210117, -1.56895827, 0.59592943,
+ -0.01680292, 1.17258718, -1.06009347, -0.10222060,
+ -0.89076239, 0.89372375])
kexp = 0.4319297
sexp = 0.5853767
self.assertAlmostEqual(A.corr(B, method='kendall'), kexp)
@@ -2003,10 +2005,11 @@ def test_corr_rank(self):
def test_cov(self):
# full overlap
- self.assertAlmostEqual(self.ts.cov(self.ts), self.ts.std()**2)
+ self.assertAlmostEqual(self.ts.cov(self.ts), self.ts.std() ** 2)
# partial overlap
- self.assertAlmostEqual(self.ts[:15].cov(self.ts[5:]), self.ts[5:15].std()**2)
+ self.assertAlmostEqual(
+ self.ts[:15].cov(self.ts[5:]), self.ts[5:15].std() ** 2)
# No overlap
self.assert_(np.isnan(self.ts[::2].cov(self.ts[1::2])))
@@ -2135,7 +2138,6 @@ def test_sort_index(self):
sorted_series = random_order.sort_index()
assert_series_equal(sorted_series, self.ts)
-
# descending
sorted_series = random_order.sort_index(ascending=False)
assert_series_equal(sorted_series,
@@ -2177,7 +2179,7 @@ def test_rank(self):
assert_series_equal(ranks, oranks)
- mask = np.isnan(self.ts)
+ mask = np.isnan(self.ts)
filled = self.ts.fillna(np.inf)
exp = rankdata(filled)
@@ -2208,10 +2210,11 @@ def test_from_csv(self):
outfile.write('1998-01-01|1.0\n1999-01-01|2.0')
outfile.close()
series = Series.from_csv(path, sep='|')
- checkseries = Series({datetime(1998,1,1): 1.0, datetime(1999,1,1): 2.0})
+ checkseries = Series(
+ {datetime(1998, 1, 1): 1.0, datetime(1999, 1, 1): 2.0})
assert_series_equal(checkseries, series)
- series = Series.from_csv(path, sep='|',parse_dates=False)
+ series = Series.from_csv(path, sep='|', parse_dates=False)
checkseries = Series({'1998-01-01': 1.0, '1999-01-01': 2.0})
assert_series_equal(checkseries, series)
@@ -2230,8 +2233,8 @@ def test_to_csv(self):
os.remove('_foo')
def test_to_csv_unicode_index(self):
- buf=StringIO()
- s=Series([u"\u05d0","d2"], index=[u"\u05d0",u"\u05d1"])
+ buf = StringIO()
+ s = Series([u"\u05d0", "d2"], index=[u"\u05d0", u"\u05d1"])
s.to_csv(buf, encoding='UTF-8')
buf.seek(0)
@@ -2245,7 +2248,7 @@ def test_tolist(self):
xp = self.ts.values.tolist()
assert_almost_equal(rs, xp)
- #datetime64
+ # datetime64
s = Series(self.ts.index)
rs = s.tolist()
xp = s.astype(object).values.tolist()
@@ -2266,7 +2269,7 @@ def test_to_csv_float_format(self):
os.remove(filename)
def test_to_csv_list_entries(self):
- s = Series(['jack and jill','jesse and frank'])
+ s = Series(['jack and jill', 'jesse and frank'])
split = s.str.split(r'\s+and\s+')
@@ -2297,16 +2300,18 @@ def test_valid(self):
tm.assert_dict_equal(result, ts, compare_keys=False)
def test_isnull(self):
- ser = Series([0,5.4,3,nan,-0.001])
- assert_series_equal(ser.isnull(), Series([False,False,False,True,False]))
- ser = Series(["hi","",nan])
- assert_series_equal(ser.isnull(), Series([False,False,True]))
+ ser = Series([0, 5.4, 3, nan, -0.001])
+ assert_series_equal(
+ ser.isnull(), Series([False, False, False, True, False]))
+ ser = Series(["hi", "", nan])
+ assert_series_equal(ser.isnull(), Series([False, False, True]))
def test_notnull(self):
- ser = Series([0,5.4,3,nan,-0.001])
- assert_series_equal(ser.notnull(), Series([True,True,True,False,True]))
- ser = Series(["hi","",nan])
- assert_series_equal(ser.notnull(), Series([True,True,False]))
+ ser = Series([0, 5.4, 3, nan, -0.001])
+ assert_series_equal(
+ ser.notnull(), Series([True, True, True, False, True]))
+ ser = Series(["hi", "", nan])
+ assert_series_equal(ser.notnull(), Series([True, True, False]))
def test_shift(self):
shifted = self.ts.shift(1)
@@ -2561,7 +2566,7 @@ def test_astype_cast_nan_int(self):
self.assertRaises(ValueError, df.astype, np.int64)
def test_astype_cast_object_int(self):
- arr = Series(["car", "house", "tree","1"])
+ arr = Series(["car", "house", "tree", "1"])
self.assertRaises(ValueError, arr.astype, int)
self.assertRaises(ValueError, arr.astype, np.int64)
@@ -2593,8 +2598,8 @@ def test_map(self):
self.assert_(np.array_equal(result, self.ts * 2))
def test_map_int(self):
- left = Series({'a' : 1., 'b' : 2., 'c' : 3., 'd' : 4})
- right = Series({1 : 11, 2 : 22, 3 : 33})
+ left = Series({'a': 1., 'b': 2., 'c': 3., 'd': 4})
+ right = Series({1: 11, 2: 22, 3: 33})
self.assert_(left.dtype == np.float_)
self.assert_(issubclass(right.dtype.type, np.integer))
@@ -2633,7 +2638,7 @@ def test_apply(self):
# how to handle Series result, #2316
result = self.ts.apply(lambda x: Series([x, x ** 2],
index=['x', 'x^2']))
- expected = DataFrame({'x': self.ts, 'x^2': self.ts **2})
+ expected = DataFrame({'x': self.ts, 'x^2': self.ts ** 2})
tm.assert_frame_equal(result, expected)
def test_apply_same_length_inference_bug(self):
@@ -2934,7 +2939,7 @@ def test_rename(self):
# partial dict
s = Series(np.arange(4), index=['a', 'b', 'c', 'd'])
- renamed = s.rename({'b' : 'foo', 'd' : 'bar'})
+ renamed = s.rename({'b': 'foo', 'd': 'bar'})
self.assert_(np.array_equal(renamed.index, ['a', 'foo', 'c', 'bar']))
def test_rename_inplace(self):
@@ -2946,7 +2951,7 @@ def test_rename_inplace(self):
self.assertEqual(self.ts.index[0], expected)
def test_preserveRefs(self):
- seq = self.ts[[5,10,15]]
+ seq = self.ts[[5, 10, 15]]
seq[1] = np.NaN
self.assertFalse(np.isnan(self.ts[10]))
@@ -2964,7 +2969,7 @@ def test_pad_nan(self):
self.assertTrue(res is None)
expected = TimeSeries([np.nan, 1.0, 1.0, 3.0, 3.0],
- ['z', 'a', 'b', 'c', 'd'], dtype=float)
+ ['z', 'a', 'b', 'c', 'd'], dtype=float)
assert_series_equal(x[1:], expected[1:])
self.assert_(np.isnan(x[0]), np.isnan(expected[0]))
@@ -2995,7 +3000,7 @@ def test_unstack(self):
exp_index = MultiIndex(levels=[['one', 'two', 'three'], [0, 1]],
labels=[[0, 1, 2, 0, 1, 2],
[0, 1, 0, 1, 0, 1]])
- expected = DataFrame({'bar' : s.values}, index=exp_index).sortlevel(0)
+ expected = DataFrame({'bar': s.values}, index=exp_index).sortlevel(0)
unstacked = s.unstack(0)
assert_frame_equal(unstacked, expected)
@@ -3025,7 +3030,8 @@ def test_fillna(self):
ts[2] = np.NaN
- self.assert_(np.array_equal(ts.fillna(method='ffill'), [0., 1., 1., 3., 4.]))
+ self.assert_(
+ np.array_equal(ts.fillna(method='ffill'), [0., 1., 1., 3., 4.]))
self.assert_(np.array_equal(ts.fillna(method='backfill'),
[0., 1., 3., 3., 4.]))
@@ -3035,7 +3041,7 @@ def test_fillna(self):
self.assertRaises(ValueError, self.ts.fillna, value=0, method='ffill')
def test_fillna_bug(self):
- x = Series([nan, 1., nan, 3., nan],['z','a','b','c','d'])
+ x = Series([nan, 1., nan, 3., nan], ['z', 'a', 'b', 'c', 'd'])
filled = x.fillna(method='ffill')
expected = Series([nan, 1., 1., 3., 3.], x.index)
assert_series_equal(filled, expected)
@@ -3045,7 +3051,7 @@ def test_fillna_bug(self):
assert_series_equal(filled, expected)
def test_fillna_inplace(self):
- x = Series([nan, 1., nan, 3., nan],['z','a','b','c','d'])
+ x = Series([nan, 1., nan, 3., nan], ['z', 'a', 'b', 'c', 'd'])
y = x.copy()
res = y.fillna(value=0, inplace=True)
@@ -3102,7 +3108,7 @@ def test_replace(self):
self.assert_((isnull(ser[:5])).all())
# replace with different values
- rs = ser.replace({np.nan : -1, 'foo' : -2, 'bar' : -3})
+ rs = ser.replace({np.nan: -1, 'foo': -2, 'bar': -3})
self.assert_((rs[:5] == -1).all())
self.assert_((rs[6:10] == -2).all())
@@ -3145,8 +3151,8 @@ def test_replace(self):
assert_series_equal(ser.replace(np.nan, 0), ser.fillna(0))
# malformed
- self.assertRaises(ValueError, ser.replace, [1,2,3], [np.nan, 0])
- self.assertRaises(ValueError, ser.replace, xrange(1,3), [np.nan, 0])
+ self.assertRaises(ValueError, ser.replace, [1, 2, 3], [np.nan, 0])
+ self.assertRaises(ValueError, ser.replace, xrange(1, 3), [np.nan, 0])
ser = Series([0, 1, 2, 3, 4])
result = ser.replace([0, 1, 2, 3, 4], [4, 3, 2, 1, 0])
@@ -3225,7 +3231,7 @@ def test_diff(self):
rs = s.diff()
self.assertEqual(rs[1], 1)
- #neg n
+ # neg n
rs = self.ts.diff(-1)
xp = self.ts - self.ts.shift(-1)
assert_series_equal(rs, xp)
@@ -3250,7 +3256,7 @@ def test_pct_change_shift_over_nas(self):
s = Series([1., 1.5, np.nan, 2.5, 3.])
chg = s.pct_change()
- expected = Series([np.nan, 0.5, np.nan, 2.5/1.5 -1, .2])
+ expected = Series([np.nan, 0.5, np.nan, 2.5 / 1.5 - 1, .2])
assert_series_equal(chg, expected)
def test_autocorr(self):
@@ -3287,7 +3293,7 @@ def test_mpl_compat_hack(self):
def test_select(self):
n = len(self.ts)
result = self.ts.select(lambda x: x >= self.ts.index[n // 2])
- expected = self.ts.reindex(self.ts.index[n//2:])
+ expected = self.ts.reindex(self.ts.index[n // 2:])
assert_series_equal(result, expected)
result = self.ts.select(lambda x: x.weekday() == 2)
@@ -3306,6 +3312,7 @@ def test_numpy_unique(self):
# it works!
result = np.unique(self.ts)
+
class TestSeriesNonUnique(unittest.TestCase):
_multiprocess_can_split_ = True
@@ -3378,14 +3385,14 @@ def test_reset_index(self):
df = ser.reset_index(name='value2')
self.assert_('value2' in df)
- #check inplace
+ # check inplace
s = ser.reset_index(drop=True)
s2 = ser
res = s2.reset_index(drop=True, inplace=True)
self.assertTrue(res is None)
assert_series_equal(s, s2)
- #level
+ # level
index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]],
labels=[[0, 0, 0, 0, 0, 0],
[0, 1, 2, 0, 1, 2],
@@ -3429,7 +3436,7 @@ def test_replace(self):
self.assert_((isnull(ser[:5])).all())
# replace with different values
- rs = ser.replace({np.nan : -1, 'foo' : -2, 'bar' : -3})
+ rs = ser.replace({np.nan: -1, 'foo': -2, 'bar': -3})
self.assert_((rs[:5] == -1).all())
self.assert_((rs[6:10] == -2).all())
@@ -3471,9 +3478,9 @@ def test_repeat(self):
def test_unique_data_ownership(self):
# it works! #1807
- Series(Series(["a","c","b"]).unique()).sort()
+ Series(Series(["a", "c", "b"]).unique()).sort()
if __name__ == '__main__':
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
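Outside the diff itself, the `test_constructor_dtype_datetime64` changes above pin down the behavior described in the release notes: mixed datetimes and NaN with an explicit `M8[ns]` dtype yield a `datetime64[ns]` Series with `NaT` for the missing slot, and subtracting two datetime64 Series returns timedelta64 values. A minimal sketch of that behavior (assuming current pandas semantics):

```python
import numpy as np
import pandas as pd
from datetime import datetime

# An explicit M8[ns] dtype with a mix of datetimes and NaN produces
# a datetime64[ns] Series; the NaN slot becomes NaT.
s = pd.Series([datetime(2001, 1, 2), np.nan], dtype='M8[ns]')
assert s.dtype == 'M8[ns]'
assert pd.isnull(s[1])

# Series - Series on two datetime64 Series yields timedelta64 values.
delta = s - s.shift(1)
assert delta.dtype == 'timedelta64[ns]'
```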
diff --git a/pandas/tests/test_stats.py b/pandas/tests/test_stats.py
index ba37355afec7a..0432d11aaa254 100644
--- a/pandas/tests/test_stats.py
+++ b/pandas/tests/test_stats.py
@@ -11,6 +11,7 @@
assert_series_equal,
assert_almost_equal)
+
class TestRank(unittest.TestCase):
_multiprocess_can_split_ = True
s = Series([1, 3, 4, 2, nan, 2, 1, 5, nan, 3])
@@ -113,5 +114,5 @@ def test_rank_int(self):
if __name__ == '__main__':
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
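The release notes also say the constructor now infers a datetime dtype from passed datetimelike objects without requiring an explicit `dtype`, with `NaN` and `NaT` treated interchangeably by `isnull`/`notnull`. A quick sketch of that inference (assuming current pandas semantics):

```python
import numpy as np
import pandas as pd

# No dtype passed: the Timestamp drives inference to datetime64[ns],
# and the NaN is coerced to NaT.
s = pd.Series([pd.Timestamp('2001-01-02'), np.nan])
assert s.dtype == 'M8[ns]'

# NaT is recognized as missing, same as NaN in float Series.
assert pd.isnull(s).tolist() == [False, True]
assert pd.notnull(s).tolist() == [True, False]
```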
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index 9d803b3fd662a..910bfda73abe5 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -20,6 +20,7 @@
import pandas.core.strings as strings
+
class TestStringMethods(unittest.TestCase):
_multiprocess_can_split_ = True
@@ -74,7 +75,7 @@ def test_count(self):
self.assert_(isinstance(rs, Series))
tm.assert_almost_equal(rs, xp)
- #unicode
+ # unicode
values = [u'foo', u'foofoo', NA, u'foooofooofommmfoo']
result = strings.str_count(values, 'f[o]+')
@@ -99,7 +100,7 @@ def test_contains(self):
self.assert_(result.dtype == np.bool_)
tm.assert_almost_equal(result, expected)
- #mixed
+ # mixed
mixed = ['a', NA, 'b', True, datetime.today(), 'foo', None, 1, 2.]
rs = strings.str_contains(mixed, 'o')
xp = [False, NA, False, NA, NA, True, NA, NA, NA]
@@ -109,7 +110,7 @@ def test_contains(self):
self.assert_(isinstance(rs, Series))
tm.assert_almost_equal(rs, xp)
- #unicode
+ # unicode
values = [u'foo', NA, u'fooommm__foo', u'mmm_']
pat = 'mmm[_]+'
@@ -134,7 +135,7 @@ def test_startswith(self):
exp = Series([False, NA, True, False, False, NA, True])
tm.assert_series_equal(result, exp)
- #mixed
+ # mixed
mixed = ['a', NA, 'b', True, datetime.today(), 'foo', None, 1, 2.]
rs = strings.str_startswith(mixed, 'f')
xp = [False, NA, False, NA, NA, True, NA, NA, NA]
@@ -144,7 +145,7 @@ def test_startswith(self):
self.assert_(isinstance(rs, Series))
tm.assert_almost_equal(rs, xp)
- #unicode
+ # unicode
values = Series([u'om', NA, u'foo_nom', u'nom', u'bar_foo', NA,
u'foo'])
@@ -162,7 +163,7 @@ def test_endswith(self):
exp = Series([False, NA, False, False, True, NA, True])
tm.assert_series_equal(result, exp)
- #mixed
+ # mixed
mixed = ['a', NA, 'b', True, datetime.today(), 'foo', None, 1, 2.]
rs = strings.str_endswith(mixed, 'f')
xp = [False, NA, False, NA, NA, False, NA, NA, NA]
@@ -172,7 +173,7 @@ def test_endswith(self):
self.assert_(isinstance(rs, Series))
tm.assert_almost_equal(rs, xp)
- #unicode
+ # unicode
values = Series([u'om', NA, u'foo_nom', u'nom', u'bar_foo', NA,
u'foo'])
@@ -193,7 +194,7 @@ def test_lower_upper(self):
result = result.str.lower()
tm.assert_series_equal(result, values)
- #mixed
+ # mixed
mixed = Series(['a', NA, 'b', True, datetime.today(), 'foo', None,
1, 2.])
mixed = mixed.str.upper()
@@ -202,7 +203,7 @@ def test_lower_upper(self):
self.assert_(isinstance(rs, Series))
tm.assert_almost_equal(rs, xp)
- #unicode
+ # unicode
values = Series([u'om', NA, u'nom', u'nom'])
result = values.str.upper()
@@ -223,7 +224,7 @@ def test_replace(self):
exp = Series(['foobarBAD', NA])
tm.assert_series_equal(result, exp)
- #mixed
+ # mixed
mixed = Series(['aBAD', NA, 'bBAD', True, datetime.today(), 'fooBAD',
None, 1, 2.])
@@ -232,7 +233,7 @@ def test_replace(self):
self.assert_(isinstance(rs, Series))
tm.assert_almost_equal(rs, xp)
- #unicode
+ # unicode
values = Series([u'fooBAD__barBAD', NA])
result = values.str.replace('BAD[_]*', '')
@@ -254,7 +255,7 @@ def test_repeat(self):
exp = Series(['a', 'bb', NA, 'cccc', NA, 'dddddd'])
tm.assert_series_equal(result, exp)
- #mixed
+ # mixed
mixed = Series(['a', NA, 'b', True, datetime.today(), 'foo',
None, 1, 2.])
@@ -263,7 +264,7 @@ def test_repeat(self):
self.assert_(isinstance(rs, Series))
tm.assert_almost_equal(rs, xp)
- #unicode
+ # unicode
values = Series([u'a', u'b', NA, u'c', NA, u'd'])
result = values.str.repeat(3)
@@ -274,7 +275,6 @@ def test_repeat(self):
exp = Series([u'a', u'bb', NA, u'cccc', NA, u'dddddd'])
tm.assert_series_equal(result, exp)
-
def test_match(self):
values = Series(['fooBAD__barBAD', NA, 'foo'])
@@ -282,7 +282,7 @@ def test_match(self):
exp = Series([('BAD__', 'BAD'), NA, []])
tm.assert_series_equal(result, exp)
- #mixed
+ # mixed
mixed = Series(['aBAD_BAD', NA, 'BAD_b_BAD', True, datetime.today(),
'foo', None, 1, 2.])
@@ -291,7 +291,7 @@ def test_match(self):
self.assert_(isinstance(rs, Series))
tm.assert_almost_equal(rs, xp)
- #unicode
+ # unicode
values = Series([u'fooBAD__barBAD', NA, u'foo'])
result = values.str.match('.*(BAD[_]+).*(BAD)')
@@ -303,7 +303,7 @@ def test_join(self):
result = values.str.split('_').str.join('_')
tm.assert_series_equal(values, result)
- #mixed
+ # mixed
mixed = Series(['a_b', NA, 'asdf_cas_asdf', True, datetime.today(),
'foo', None, 1, 2.])
@@ -313,7 +313,7 @@ def test_join(self):
self.assert_(isinstance(rs, Series))
tm.assert_almost_equal(rs, xp)
- #unicode
+ # unicode
values = Series([u'a_b_c', u'c_d_e', np.nan, u'f_g_h'])
result = values.str.split('_').str.join('_')
tm.assert_series_equal(values, result)
@@ -325,7 +325,7 @@ def test_len(self):
exp = values.map(lambda x: len(x) if com.notnull(x) else NA)
tm.assert_series_equal(result, exp)
- #mixed
+ # mixed
mixed = Series(['a_b', NA, 'asdf_cas_asdf', True, datetime.today(),
'foo', None, 1, 2.])
@@ -335,7 +335,7 @@ def test_len(self):
self.assert_(isinstance(rs, Series))
tm.assert_almost_equal(rs, xp)
- #unicode
+ # unicode
values = Series([u'foo', u'fooo', u'fooooo', np.nan, u'fooooooo'])
result = values.str.len()
@@ -349,7 +349,7 @@ def test_findall(self):
exp = Series([['BAD__', 'BAD'], NA, [], ['BAD']])
tm.assert_almost_equal(result, exp)
- #mixed
+ # mixed
mixed = Series(['fooBAD__barBAD', NA, 'foo', True, datetime.today(),
'BAD', None, 1, 2.])
@@ -359,7 +359,7 @@ def test_findall(self):
self.assert_(isinstance(rs, Series))
tm.assert_almost_equal(rs, xp)
- #unicode
+ # unicode
values = Series([u'fooBAD__barBAD', NA, u'foo', u'BAD'])
result = values.str.findall('BAD[_]*')
@@ -381,7 +381,7 @@ def test_pad(self):
exp = Series([' a ', ' b ', NA, ' c ', NA, 'eeeeee'])
tm.assert_almost_equal(result, exp)
- #mixed
+ # mixed
mixed = Series(['a', NA, 'b', True, datetime.today(),
'ee', None, 1, 2.])
@@ -409,7 +409,7 @@ def test_pad(self):
self.assert_(isinstance(rs, Series))
tm.assert_almost_equal(rs, xp)
- #unicode
+ # unicode
values = Series([u'a', u'b', NA, u'c', NA, u'eeeeee'])
result = values.str.pad(5, side='left')
@@ -431,7 +431,7 @@ def test_center(self):
exp = Series([' a ', ' b ', NA, ' c ', NA, 'eeeeee'])
tm.assert_almost_equal(result, exp)
- #mixed
+ # mixed
mixed = Series(['a', NA, 'b', True, datetime.today(),
'c', 'eee', None, 1, 2.])
@@ -442,7 +442,7 @@ def test_center(self):
self.assert_(isinstance(rs, Series))
tm.assert_almost_equal(rs, xp)
- #unicode
+ # unicode
values = Series([u'a', u'b', NA, u'c', NA, u'eeeeee'])
result = values.str.center(5)
@@ -456,12 +456,12 @@ def test_split(self):
exp = Series([['a', 'b', 'c'], ['c', 'd', 'e'], NA, ['f', 'g', 'h']])
tm.assert_series_equal(result, exp)
- #more than one char
+ # more than one char
values = Series(['a__b__c', 'c__d__e', NA, 'f__g__h'])
result = values.str.split('__')
tm.assert_series_equal(result, exp)
- #mixed
+ # mixed
mixed = Series(['a_b_c', NA, 'd_e_f', True, datetime.today(),
None, 1, 2.])
@@ -472,7 +472,7 @@ def test_split(self):
self.assert_(isinstance(rs, Series))
tm.assert_almost_equal(rs, xp)
- #unicode
+ # unicode
values = Series([u'a_b_c', u'c_d_e', NA, u'f_g_h'])
result = values.str.split('_')
@@ -488,7 +488,7 @@ def test_split_noargs(self):
self.assertEquals(result[1], ['Travis', 'Oliphant'])
def test_split_maxsplit(self):
- #re.split 0, str.split -1
+ # re.split 0, str.split -1
s = Series(['bd asdf jfg', 'kjasdflqw asdfnfk'])
result = s.str.split(n=-1)
@@ -520,13 +520,13 @@ def test_pipe_failures(self):
tm.assert_series_equal(result, exp)
def test_slice(self):
- values = Series(['aafootwo','aabartwo', NA, 'aabazqux'])
+ values = Series(['aafootwo', 'aabartwo', NA, 'aabazqux'])
result = values.str.slice(2, 5)
exp = Series(['foo', 'bar', NA, 'baz'])
tm.assert_series_equal(result, exp)
- #mixed
+ # mixed
mixed = Series(['aafootwo', NA, 'aabartwo', True, datetime.today(),
None, 1, 2.])
@@ -537,7 +537,7 @@ def test_slice(self):
self.assert_(isinstance(rs, Series))
tm.assert_almost_equal(rs, xp)
- #unicode
+ # unicode
values = Series([u'aafootwo', u'aabartwo', NA, u'aabazqux'])
result = values.str.slice(2, 5)
@@ -563,7 +563,7 @@ def test_strip_lstrip_rstrip(self):
tm.assert_series_equal(result, exp)
def test_strip_lstrip_rstrip_mixed(self):
- #mixed
+ # mixed
mixed = Series([' aa ', NA, ' bb \t\n', True, datetime.today(),
None, 1, 2.])
@@ -589,7 +589,7 @@ def test_strip_lstrip_rstrip_mixed(self):
tm.assert_almost_equal(rs, xp)
def test_strip_lstrip_rstrip_unicode(self):
- #unicode
+ # unicode
values = Series([u' aa ', u' bb \n', NA, u'cc '])
result = values.str.strip()
@@ -644,7 +644,7 @@ def test_get(self):
expected = Series(['b', 'd', np.nan, 'g'])
tm.assert_series_equal(result, expected)
- #mixed
+ # mixed
mixed = Series(['a_b_c', NA, 'c_d_e', True, datetime.today(),
None, 1, 2.])
@@ -655,7 +655,7 @@ def test_get(self):
self.assert_(isinstance(rs, Series))
tm.assert_almost_equal(rs, xp)
- #unicode
+ # unicode
values = Series([u'a_b_c', u'c_d_e', np.nan, u'f_g_h'])
result = values.str.split('_').str.get(1)
@@ -711,7 +711,7 @@ def test_more_replace(self):
assert_series_equal(result, expected)
result = s.str.replace('^.a|dog', 'XX-XX ', case=False)
- expected = Series(['A', 'B', 'C', 'XX-XX ba', 'XX-XX ca', '', NA,
+ expected = Series(['A', 'B', 'C', 'XX-XX ba', 'XX-XX ca', '', NA,
'XX-XX BA', 'XX-XX ', 'XX-XX t'])
assert_series_equal(result, expected)
@@ -779,5 +779,5 @@ def test_encode_decode_errors(self):
tm.assert_series_equal(result, exp)
if __name__ == '__main__':
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tests/test_tseries.py b/pandas/tests/test_tseries.py
index 3492ca038097d..7e5341fd5b311 100644
--- a/pandas/tests/test_tseries.py
+++ b/pandas/tests/test_tseries.py
@@ -9,8 +9,10 @@
import pandas.algos as algos
from datetime import datetime
+
class TestTseriesUtil(unittest.TestCase):
_multiprocess_can_split_ = True
+
def test_combineFunc(self):
pass
@@ -59,6 +61,7 @@ def test_pad(self):
expect_filler = [-1, -1, -1, -1, -1]
self.assert_(np.array_equal(filler, expect_filler))
+
def test_left_join_indexer_unique():
a = np.array([1, 2, 3, 4, 5], dtype=np.int64)
b = np.array([2, 2, 3, 4, 4], dtype=np.int64)
@@ -67,6 +70,7 @@ def test_left_join_indexer_unique():
expected = np.array([1, 1, 2, 3, 3], dtype=np.int64)
assert(np.array_equal(result, expected))
+
def test_left_outer_join_bug():
left = np.array([0, 1, 0, 1, 1, 2, 3, 1, 0, 2, 1, 2, 0, 1, 1, 2, 3, 2, 3,
2, 1, 1, 3, 0, 3, 2, 3, 0, 0, 2, 3, 2, 0, 3, 1, 3, 0, 1,
@@ -88,6 +92,7 @@ def test_left_outer_join_bug():
assert(np.array_equal(lidx, exp_lidx))
assert(np.array_equal(ridx, exp_ridx))
+
def test_inner_join_indexer():
a = np.array([1, 2, 3, 4, 5], dtype=np.int64)
b = np.array([0, 3, 5, 7, 9], dtype=np.int64)
@@ -110,6 +115,7 @@ def test_inner_join_indexer():
assert_almost_equal(ares, [0])
assert_almost_equal(bres, [0])
+
def test_outer_join_indexer():
a = np.array([1, 2, 3, 4, 5], dtype=np.int64)
b = np.array([0, 3, 5, 7, 9], dtype=np.int64)
@@ -132,6 +138,7 @@ def test_outer_join_indexer():
assert_almost_equal(ares, [0])
assert_almost_equal(bres, [0])
+
def test_left_join_indexer():
a = np.array([1, 2, 3, 4, 5], dtype=np.int64)
b = np.array([0, 3, 5, 7, 9], dtype=np.int64)
@@ -153,9 +160,10 @@ def test_left_join_indexer():
assert_almost_equal(ares, [0])
assert_almost_equal(bres, [0])
+
def test_left_join_indexer2():
- idx = Index([1,1,2,5])
- idx2 = Index([1,2,5,7,9])
+ idx = Index([1, 1, 2, 5])
+ idx2 = Index([1, 2, 5, 7, 9])
res, lidx, ridx = algos.left_join_indexer_int64(idx2, idx)
@@ -168,9 +176,10 @@ def test_left_join_indexer2():
exp_ridx = np.array([0, 1, 2, 3, -1, -1], dtype=np.int64)
assert_almost_equal(ridx, exp_ridx)
+
def test_outer_join_indexer2():
- idx = Index([1,1,2,5])
- idx2 = Index([1,2,5,7,9])
+ idx = Index([1, 1, 2, 5])
+ idx2 = Index([1, 2, 5, 7, 9])
res, lidx, ridx = algos.outer_join_indexer_int64(idx2, idx)
@@ -183,9 +192,10 @@ def test_outer_join_indexer2():
exp_ridx = np.array([0, 1, 2, 3, -1, -1], dtype=np.int64)
assert_almost_equal(ridx, exp_ridx)
+
def test_inner_join_indexer2():
- idx = Index([1,1,2,5])
- idx2 = Index([1,2,5,7,9])
+ idx = Index([1, 1, 2, 5])
+ idx2 = Index([1, 2, 5, 7, 9])
res, lidx, ridx = algos.inner_join_indexer_int64(idx2, idx)
@@ -203,21 +213,21 @@ def test_is_lexsorted():
failure = [
np.array([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3,
- 3, 3,
- 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
- 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1,
- 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
- 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0, 0]),
+ 3, 3,
+ 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
+ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1,
+ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
+ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0, 0]),
np.array([30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16,
15, 14,
- 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, 30, 29, 28,
- 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11,
- 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, 30, 29, 28, 27, 26, 25,
- 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8,
- 7, 6, 5, 4, 3, 2, 1, 0, 30, 29, 28, 27, 26, 25, 24, 23, 22,
- 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5,
- 4, 3, 2, 1, 0])]
+ 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, 30, 29, 28,
+ 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11,
+ 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, 30, 29, 28, 27, 26, 25,
+ 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8,
+ 7, 6, 5, 4, 3, 2, 1, 0, 30, 29, 28, 27, 26, 25, 24, 23, 22,
+ 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5,
+ 4, 3, 2, 1, 0])]
assert(not algos.is_lexsorted(failure))
@@ -230,6 +240,7 @@ def test_is_lexsorted():
# assert(np.array_equal(result, expected))
+
def test_groupsort_indexer():
a = np.random.randint(0, 1000, 100).astype(np.int64)
b = np.random.randint(0, 1000, 100).astype(np.int64)
@@ -246,12 +257,14 @@ def test_groupsort_indexer():
expected = np.lexsort((b, a))
assert(np.array_equal(result, expected))
+
def test_ensure_platform_int():
arr = np.arange(100)
result = algos.ensure_platform_int(arr)
assert(result is arr)
+
def test_duplicated_with_nas():
keys = np.array([0, 1, nan, 0, 2, nan], dtype=object)
@@ -264,7 +277,7 @@ def test_duplicated_with_nas():
assert(np.array_equal(result, expected))
keys = np.empty(8, dtype=object)
- for i, t in enumerate(zip([0, 0, nan, nan]*2, [0, nan, 0, nan]*2)):
+ for i, t in enumerate(zip([0, 0, nan, nan] * 2, [0, nan, 0, nan] * 2)):
keys[i] = t
result = lib.duplicated(keys)
@@ -277,6 +290,7 @@ def test_duplicated_with_nas():
expected = trues + falses
assert(np.array_equal(result, expected))
+
def test_maybe_booleans_to_slice():
arr = np.array([0, 0, 1, 1, 1, 0, 1], dtype=np.uint8)
result = lib.maybe_booleans_to_slice(arr)
@@ -285,11 +299,13 @@ def test_maybe_booleans_to_slice():
result = lib.maybe_booleans_to_slice(arr[:0])
assert(result == slice(0, 0))
+
def test_convert_objects():
arr = np.array(['a', 'b', nan, nan, 'd', 'e', 'f'], dtype='O')
result = lib.maybe_convert_objects(arr)
assert(result.dtype == np.object_)
+
def test_convert_infs():
arr = np.array(['inf', 'inf', 'inf'], dtype='O')
result = lib.maybe_convert_numeric(arr, set(), False)
@@ -310,6 +326,7 @@ def test_convert_objects_ints():
result = lib.maybe_convert_objects(arr)
assert(issubclass(result.dtype.type, np.integer))
+
def test_convert_objects_complex_number():
for dtype in np.sctypes['complex']:
arr = np.array(list(1j * np.arange(20, dtype=dtype)), dtype='O')
@@ -317,6 +334,7 @@ def test_convert_objects_complex_number():
result = lib.maybe_convert_objects(arr)
assert(issubclass(result.dtype.type, np.complexfloating))
+
def test_rank():
from pandas.compat.scipy import rankdata
@@ -332,12 +350,14 @@ def _check(arr):
_check(np.array([nan, nan, 5., 5., 5., nan, 1, 2, 3, nan]))
_check(np.array([4., nan, 5., 5., 5., nan, 1, 2, 4., nan]))
+
def test_get_reverse_indexer():
indexer = np.array([-1, -1, 1, 2, 0, -1, 3, 4], dtype=np.int64)
result = lib.get_reverse_indexer(indexer, 5)
expected = np.array([4, 2, 3, 6, 7], dtype=np.int64)
assert(np.array_equal(result, expected))
+
def test_pad_backfill_object_segfault():
from datetime import datetime
old = np.array([], dtype='O')
@@ -359,11 +379,13 @@ def test_pad_backfill_object_segfault():
expected = np.array([], dtype=np.int64)
assert(np.array_equal(result, expected))
+
def test_arrmap():
values = np.array(['foo', 'foo', 'bar', 'bar', 'baz', 'qux'], dtype='O')
result = algos.arrmap_object(values, lambda x: x in ['foo', 'bar'])
assert(result.dtype == np.bool_)
+
def test_series_grouper():
from pandas import Series
obj = Series(np.random.randn(10))
@@ -380,6 +402,7 @@ def test_series_grouper():
exp_counts = np.array([3, 4], dtype=np.int64)
assert_almost_equal(counts, exp_counts)
+
def test_series_bin_grouper():
from pandas import Series
obj = Series(np.random.randn(10))
@@ -396,8 +419,10 @@ def test_series_bin_grouper():
exp_counts = np.array([3, 3, 4], dtype=np.int64)
assert_almost_equal(counts, exp_counts)
+
class TestBinGroupers(unittest.TestCase):
_multiprocess_can_split_ = True
+
def setUp(self):
self.obj = np.random.randn(10, 1)
self.labels = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2], dtype=np.int64)
@@ -405,8 +430,8 @@ def setUp(self):
def test_generate_bins(self):
from pandas.core.groupby import generate_bins_generic
- values = np.array([1,2,3,4,5,6], dtype=np.int64)
- binner = np.array([0,3,6,9], dtype=np.int64)
+ values = np.array([1, 2, 3, 4, 5, 6], dtype=np.int64)
+ binner = np.array([0, 3, 6, 9], dtype=np.int64)
for func in [lib.generate_bins_dt64, generate_bins_generic]:
bins = func(values, binner, closed='left')
@@ -416,8 +441,8 @@ def test_generate_bins(self):
assert((bins == np.array([3, 6, 6])).all())
for func in [lib.generate_bins_dt64, generate_bins_generic]:
- values = np.array([1,2,3,4,5,6], dtype=np.int64)
- binner = np.array([0,3,6], dtype=np.int64)
+ values = np.array([1, 2, 3, 4, 5, 6], dtype=np.int64)
+ binner = np.array([0, 3, 6], dtype=np.int64)
bins = func(values, binner, closed='right')
assert((bins == np.array([3, 6])).all())
@@ -441,7 +466,7 @@ def test_group_bin_functions(self):
'prod': np.prod,
'min': np.min,
'max': np.max,
- 'var': lambda x: x.var(ddof=1) if len(x) >=2 else np.nan
+ 'var': lambda x: x.var(ddof=1) if len(x) >= 2 else np.nan
}
for fname in funcs:
@@ -459,14 +484,14 @@ def _check_versions(self, irr_func, bin_func, np_func):
# bin-based version
bins = np.array([3, 6], dtype=np.int64)
- out = np.zeros((3, 1), np.float64)
+ out = np.zeros((3, 1), np.float64)
counts = np.zeros(len(out), dtype=np.int64)
bin_func(out, counts, obj, bins)
assert_almost_equal(out, exp)
bins = np.array([3, 9, 10], dtype=np.int64)
- out = np.zeros((3, 1), np.float64)
+ out = np.zeros((3, 1), np.float64)
counts = np.zeros(len(out), dtype=np.int64)
bin_func(out, counts, obj, bins)
exp = np.array([np_func(obj[:3]), np_func(obj[3:9]),
@@ -476,7 +501,7 @@ def _check_versions(self, irr_func, bin_func, np_func):
# duplicate bins
bins = np.array([3, 6, 10, 10], dtype=np.int64)
- out = np.zeros((4, 1), np.float64)
+ out = np.zeros((4, 1), np.float64)
counts = np.zeros(len(out), dtype=np.int64)
bin_func(out, counts, obj, bins)
exp = np.array([np_func(obj[:3]), np_func(obj[3:6]),
@@ -489,7 +514,7 @@ def test_group_ohlc():
obj = np.random.randn(20)
bins = np.array([6, 12], dtype=np.int64)
- out = np.zeros((3, 4), np.float64)
+ out = np.zeros((3, 4), np.float64)
counts = np.zeros(len(out), dtype=np.int64)
algos.group_ohlc(out, counts, obj[:, None], bins)
@@ -510,6 +535,7 @@ def _ohlc(group):
expected[0] = nan
assert_almost_equal(out, expected)
+
def test_try_parse_dates():
from dateutil.parser import parse
@@ -522,6 +548,7 @@ def test_try_parse_dates():
class TestTypeInference(unittest.TestCase):
_multiprocess_can_split_ = True
+
def test_length_zero(self):
result = lib.infer_dtype(np.array([], dtype='i4'))
self.assertEqual(result, 'empty')
@@ -597,7 +624,7 @@ def test_date(self):
self.assert_(index.inferred_type == 'date')
def test_to_object_array_tuples(self):
- r = (5,6)
+ r = (5, 6)
values = [r]
result = lib.to_object_array_tuples(values)
@@ -632,7 +659,8 @@ def test_int_index(self):
assert_almost_equal(result, expected)
dummy = Series(0., index=np.arange(100))
- result = lib.reduce(arr, np.sum, dummy=dummy, labels=Index(np.arange(4)))
+ result = lib.reduce(
+ arr, np.sum, dummy=dummy, labels=Index(np.arange(4)))
expected = arr.sum(0)
assert_almost_equal(result, expected)
@@ -644,5 +672,5 @@ def test_int_index(self):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
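Reviewer note on the `test_left_join_indexer2` hunk above: it exercises `algos.left_join_indexer_int64` with `idx2 = [1, 2, 5, 7, 9]` joined against `idx = [1, 1, 2, 5]`. A rough pure-Python sketch of what that indexer computes (a hypothetical helper for illustration, not the pandas Cython implementation):

```python
import numpy as np

def left_join_indexer(left, right):
    """Pair each element of `left` (kept in order) with every matching
    position in `right`; -1 marks "no match". A simplified sketch of the
    behaviour the Cython routine is tested for, not pandas code."""
    res, lidx, ridx = [], [], []
    for i, val in enumerate(left):
        matches = np.flatnonzero(right == val)
        if len(matches) == 0:
            res.append(val)
            lidx.append(i)
            ridx.append(-1)
        else:
            for j in matches:
                res.append(val)
                lidx.append(i)
                ridx.append(j)
    return (np.asarray(res),
            np.asarray(lidx, dtype=np.int64),
            np.asarray(ridx, dtype=np.int64))

# same inputs as test_left_join_indexer2 above
res, lidx, ridx = left_join_indexer(np.array([1, 2, 5, 7, 9]),
                                    np.array([1, 1, 2, 5]))
```

With these inputs the right indexer comes out as `[0, 1, 2, 3, -1, -1]`, matching the `exp_ridx` asserted in the hunk.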
diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py
index e2086fc84de2c..072acb1947c1f 100644
--- a/pandas/tools/merge.py
+++ b/pandas/tools/merge.py
@@ -34,7 +34,8 @@ def merge(left, right, how='inner', on=None, left_on=None, right_on=None,
right_index=right_index, sort=sort, suffixes=suffixes,
copy=copy)
return op.get_result()
-if __debug__: merge.__doc__ = _merge_doc % '\nleft : DataFrame'
+if __debug__:
+ merge.__doc__ = _merge_doc % '\nleft : DataFrame'
class MergeError(Exception):
@@ -144,11 +145,8 @@ def _merger(x, y):
return _merger(left, right)
-
# TODO: transformations??
# TODO: only copy DataFrames when modification necessary
-
-
class _MergeOperation(object):
"""
Perform a database (SQL) merge operation between two DataFrame objects
@@ -216,8 +214,9 @@ def _maybe_add_join_keys(self, result, left_indexer, right_indexer):
continue
right_na_indexer = right_indexer.take(na_indexer)
- key_col.put(na_indexer, com.take_1d(self.right_join_keys[i],
- right_na_indexer))
+ key_col.put(
+ na_indexer, com.take_1d(self.right_join_keys[i],
+ right_na_indexer))
elif name in self.right and right_indexer is not None:
na_indexer = (right_indexer == -1).nonzero()[0]
if len(na_indexer) == 0:
@@ -368,7 +367,7 @@ def _get_merge_keys(self):
def _validate_specification(self):
# Hm, any way to make this logic less complicated??
if (self.on is None and self.left_on is None
- and self.right_on is None):
+ and self.right_on is None):
if self.left_index and self.right_index:
self.left_on, self.right_on = (), ()
@@ -380,14 +379,15 @@ def _validate_specification(self):
raise MergeError('Must pass left_on or left_index=True')
else:
# use the common columns
- common_cols = self.left.columns.intersection(self.right.columns)
+ common_cols = self.left.columns.intersection(
+ self.right.columns)
if len(common_cols) == 0:
raise MergeError('No common columns to perform merge on')
self.left_on = self.right_on = common_cols
elif self.on is not None:
if self.left_on is not None or self.right_on is not None:
raise MergeError('Can only pass on OR left_on and '
- 'right_on')
+ 'right_on')
self.left_on = self.right_on = self.on
elif self.left_on is not None:
n = len(self.left_on)
@@ -578,8 +578,10 @@ def _factorize_keys(lk, rk, sort=True):
llab, rlab = _sort_labels(uniques, llab, rlab)
# NA group
- lmask = llab == -1; lany = lmask.any()
- rmask = rlab == -1; rany = rmask.any()
+ lmask = llab == -1
+ lany = lmask.any()
+ rmask = rlab == -1
+ rany = rmask.any()
if lany or rany:
if lany:
@@ -701,7 +703,7 @@ def _merge_blocks(self, merge_chunks):
sofar = 0
for unit, blk in merge_chunks:
- out_chunk = out[sofar : sofar + len(blk)]
+ out_chunk = out[sofar: sofar + len(blk)]
if unit.indexer is None:
# is this really faster than assigning to arr.flat?
@@ -1038,7 +1040,7 @@ def _concat_blocks(self, blocks):
return make_block(concat_values, blocks[0].items, self.new_axes[0])
else:
offsets = np.r_[0, np.cumsum([len(x._data.axes[0]) for
- x in self.objs])]
+ x in self.objs])]
indexer = np.concatenate([offsets[i] + b.ref_locs
for i, b in enumerate(blocks)
if b is not None])
@@ -1176,7 +1178,7 @@ def _concat_indexes(indexes):
def _make_concat_multiindex(indexes, keys, levels=None, names=None):
if ((levels is None and isinstance(keys[0], tuple)) or
- (levels is not None and len(levels) > 1)):
+ (levels is not None and len(levels) > 1)):
zipped = zip(*keys)
if names is None:
names = [None] * len(zipped)
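Reviewer note on the `_factorize_keys` hunk above: it only splits the NA-group bookkeeping (`lmask`/`lany`, `rmask`/`rany`) onto separate lines, but the surrounding logic is worth a sketch. A simplified, hypothetical version of factorizing two key arrays onto one shared label space, with NaN keys sent to the -1 "NA group" (not the pandas implementation):

```python
import numpy as np

def factorize_two(lk, rk):
    """Encode both key arrays with one shared integer mapping; NaN keys
    get label -1, mirroring the NA-group masks in the hunk above.
    Simplified illustration only."""
    mapping = {}

    def encode(keys):
        out = np.empty(len(keys), dtype=np.int64)
        for i, k in enumerate(keys):
            if k != k:  # NaN is the only float not equal to itself
                out[i] = -1
            else:
                out[i] = mapping.setdefault(k, len(mapping))
        return out

    return encode(lk), encode(rk), len(mapping)

llab, rlab, count = factorize_two(np.array([1.0, 2.0, np.nan]),
                                  np.array([2.0, np.nan, 3.0]))
```

Here `lmask = llab == -1` / `rmask = rlab == -1` would recover exactly the NaN positions, which is what the NA-group branch in the hunk checks with `lany`/`rany`.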
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index 9fd61f7439c02..54258edd09cb4 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -19,10 +19,11 @@
try: # mpl optional
import pandas.tseries.converter as conv
- conv.register() # needs to override so set_xlim works with str/number
+ conv.register() # needs to override so set_xlim works with str/number
except ImportError:
pass
+
def _get_standard_kind(kind):
return {'density': 'kde'}.get(kind, kind)
@@ -35,8 +36,8 @@ class _Options(dict):
format that makes it easy to breakdown into groups later
"""
- #alias so the names are same as plotting method parameter names
- _ALIASES = {'x_compat' : 'xaxis.compat'}
+ # alias so the names are same as plotting method parameter names
+ _ALIASES = {'x_compat': 'xaxis.compat'}
_DEFAULT_KEYS = ['xaxis.compat']
def __init__(self):
@@ -91,6 +92,7 @@ def use(self, key, value):
plot_params = _Options()
+
def scatter_matrix(frame, alpha=0.5, figsize=None, ax=None, grid=False,
diagonal='hist', marker='.', **kwds):
"""
@@ -147,7 +149,7 @@ def scatter_matrix(frame, alpha=0.5, figsize=None, ax=None, grid=False,
common = (mask[a] & mask[b]).values
ax.scatter(df[b][common], df[a][common],
- marker=marker, alpha=alpha, **kwds)
+ marker=marker, alpha=alpha, **kwds)
ax.set_xlabel('')
ax.set_ylabel('')
@@ -321,7 +323,7 @@ def f(x):
harmonic = 1.0
for x_even, x_odd in zip(amplitudes[1::2], amplitudes[2::2]):
result += (x_even * sin(harmonic * x) +
- x_odd * cos(harmonic * x))
+ x_odd * cos(harmonic * x))
harmonic += 1.0
if len(amplitudes) % 2 != 0:
result += amplitudes[-1] * sin(harmonic * x)
@@ -337,7 +339,7 @@ def random_color(column):
columns = [data[col] for col in data.columns if (col != class_column)]
x = [-pi + 2.0 * pi * (t / float(samples)) for t in range(samples)]
used_legends = set([])
- if ax == None:
+ if ax is None:
ax = plt.gca(xlim=(-pi, pi))
for i in range(n):
row = [columns[c][i] for c in range(len(columns))]
@@ -381,7 +383,7 @@ def bootstrap_plot(series, fig=None, size=50, samples=500, **kwds):
medians = np.array([np.median(sampling) for sampling in samplings])
midranges = np.array([(min(sampling) + max(sampling)) * 0.5
for sampling in samplings])
- if fig == None:
+ if fig is None:
fig = plt.figure()
x = range(samples)
axes = []
@@ -484,7 +486,7 @@ def random_color(column):
else:
x = range(ncols)
- if ax == None:
+ if ax is None:
ax = plt.gca()
# if user has not specified colors to use, choose at random
@@ -535,7 +537,7 @@ def lag_plot(series, ax=None, **kwds):
data = series.values
y1 = data[:-1]
y2 = data[1:]
- if ax == None:
+ if ax is None:
ax = plt.gca()
ax.set_xlabel("y(t)")
ax.set_ylabel("y(t + 1)")
@@ -558,7 +560,7 @@ def autocorrelation_plot(series, ax=None):
import matplotlib.pyplot as plt
n = len(series)
data = np.asarray(series)
- if ax == None:
+ if ax is None:
ax = plt.gca(xlim=(1, n), ylim=(-1.0, 1.0))
mean = np.mean(data)
c0 = np.sum((data - mean) ** 2) / float(n)
@@ -615,6 +617,7 @@ def plot_group(group, ax):
hspace=0.5, wspace=0.3)
return axes
+
class MPLPlot(object):
"""
Base class for assembling a pandas plot using matplotlib
@@ -691,11 +694,10 @@ def _validate_color_args(self):
if ('color' in self.kwds and
(isinstance(self.data, Series) or
- isinstance(self.data, DataFrame) and len(self.data.columns) ==1 )):
- #support series.plot(color='green')
+ isinstance(self.data, DataFrame) and len(self.data.columns) == 1)):
+ # support series.plot(color='green')
self.kwds['color'] = [self.kwds['color']]
-
def _iter_data(self):
from pandas.core.frame import DataFrame
if isinstance(self.data, (Series, np.ndarray)):
@@ -940,7 +942,7 @@ def on_right(self, i):
return self.secondary_y
if (isinstance(self.data, DataFrame) and
- isinstance(self.secondary_y, (tuple, list, np.ndarray))):
+ isinstance(self.secondary_y, (tuple, list, np.ndarray))):
return self.data.columns[i] in self.secondary_y
def _get_style(self, i, col_name):
@@ -978,7 +980,7 @@ def _make_plot(self):
gkde = gaussian_kde(y)
sample_range = max(y) - min(y)
ind = np.linspace(min(y) - 0.5 * sample_range,
- max(y) + 0.5 * sample_range, 1000)
+ max(y) + 0.5 * sample_range, 1000)
ax.set_ylabel("Density")
y = gkde.evaluate(ind)
@@ -1005,7 +1007,7 @@ def __init__(self, data, **kwargs):
MPLPlot.__init__(self, data, **kwargs)
self.x_compat = plot_params['x_compat']
if 'x_compat' in self.kwds:
- self.x_compat = bool(self.kwds.pop('x_compat'))
+ self.x_compat = bool(self.kwds.pop('x_compat'))
def _index_freq(self):
from pandas.core.frame import DataFrame
@@ -1074,7 +1076,7 @@ def _make_plot(self):
kwds = self.kwds.copy()
self._maybe_add_color(colors, kwds, style, i)
- label = com.pprint_thing(label) # .encode('utf-8')
+ label = com.pprint_thing(label) # .encode('utf-8')
mask = com.isnull(y)
if mask.any():
@@ -1178,7 +1180,7 @@ def _maybe_convert_index(self, data):
# over and over for DataFrames
from pandas.core.frame import DataFrame
if (isinstance(data.index, DatetimeIndex) and
- isinstance(data, DataFrame)):
+ isinstance(data, DataFrame)):
freq = getattr(data.index, 'freq', None)
if freq is None:
@@ -1280,7 +1282,8 @@ def _make_plot(self):
if self.subplots:
ax = self._get_ax(i) # self.axes[i]
- rect = bar_f(ax, self.ax_pos, y, self.bar_width, start=pos_prior, **kwds)
+ rect = bar_f(ax, self.ax_pos, y,
+ self.bar_width, start=pos_prior, **kwds)
ax.set_title(label)
elif self.stacked:
mask = y > 0
@@ -1296,7 +1299,7 @@ def _make_plot(self):
labels.append(label)
if self.legend and not self.subplots:
- patches =[r[0] for r in rects]
+ patches = [r[0] for r in rects]
self.axes[0].legend(patches, labels, loc='best',
title=self.legend_title)
@@ -1323,7 +1326,7 @@ def _post_plot_logic(self):
if name is not None:
ax.set_ylabel(name)
- #if self.subplots and self.legend:
+ # if self.subplots and self.legend:
# self.axes[0].legend(loc='best')
@@ -1442,6 +1445,7 @@ def plot_frame(frame=None, x=None, y=None, subplots=False, sharex=True,
else:
return plot_obj.axes[0]
+
def plot_series(series, label=None, kind='line', use_index=True, rot=None,
xticks=None, yticks=None, xlim=None, ylim=None,
ax=None, style=None, grid=None, legend=False, logx=False,
@@ -1556,7 +1560,7 @@ def plot_group(grouped, ax):
else:
ax.set_yticklabels(keys, rotation=rot, fontsize=fontsize)
- if column == None:
+ if column is None:
columns = None
else:
if isinstance(column, (list, tuple)):
@@ -1645,9 +1649,10 @@ def plot_group(group, ax):
return fig
-def hist_frame(data, column=None, by=None, grid=True, xlabelsize=None, xrot=None,
- ylabelsize=None, yrot=None, ax=None,
- sharex=False, sharey=False, **kwds):
+def hist_frame(
+ data, column=None, by=None, grid=True, xlabelsize=None, xrot=None,
+ ylabelsize=None, yrot=None, ax=None,
+ sharex=False, sharey=False, **kwds):
"""
Draw Histogram the DataFrame's series using matplotlib / pylab.
@@ -1726,6 +1731,7 @@ def hist_frame(data, column=None, by=None, grid=True, xlabelsize=None, xrot=None
return axes
+
def hist_series(self, by=None, ax=None, grid=True, xlabelsize=None,
xrot=None, ylabelsize=None, yrot=None, **kwds):
"""
@@ -2068,7 +2074,7 @@ def on_right(i):
# Note off-by-one counting because add_subplot uses the MATLAB 1-based
# convention.
for i in range(1, nplots):
- ax = fig.add_subplot(nrows, ncols, i+1, **subplot_kw)
+ ax = fig.add_subplot(nrows, ncols, i + 1, **subplot_kw)
if on_right(i):
orig_ax = ax
ax = ax.twinx()
@@ -2079,11 +2085,13 @@ def on_right(i):
if sharex and nrows > 1:
for i, ax in enumerate(axarr):
if np.ceil(float(i + 1) / ncols) < nrows: # only last row
- [label.set_visible(False) for label in ax.get_xticklabels()]
+ [label.set_visible(
+ False) for label in ax.get_xticklabels()]
if sharey and ncols > 1:
for i, ax in enumerate(axarr):
if (i % ncols) != 0: # only first column
- [label.set_visible(False) for label in ax.get_yticklabels()]
+ [label.set_visible(
+ False) for label in ax.get_yticklabels()]
if squeeze:
# Reshape the array to have the final desired dimension (nrow,ncol),
@@ -2099,6 +2107,7 @@ def on_right(i):
return fig, axes
+
def _get_xlim(lines):
import pandas.tseries.converter as conv
left, right = np.inf, -np.inf
@@ -2108,6 +2117,7 @@ def _get_xlim(lines):
right = max(_maybe_convert_date(x[-1]), right)
return left, right
+
def _maybe_convert_date(x):
if not com.is_integer(x):
conv_func = conv._dt_to_float_ordinal
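Reviewer note on the repeated `ax == None` → `ax is None` hunks in plotting.py above: identity comparison is the safer idiom because `==` can be overloaded. A minimal standalone illustration of the pitfall (assumed NumPy >= 1.13 elementwise-comparison behaviour):

```python
import numpy as np

arr = np.array([1.0, 2.0])

# `== None` is an equality test that objects may overload; NumPy
# broadcasts it elementwise and returns an array of False values
# rather than a single bool.
elementwise = (arr == None)  # noqa: E711 - deliberately showing the pitfall
assert isinstance(elementwise, np.ndarray)

# `is None` is an identity test: always a plain bool, never overloaded.
assert (arr is None) is False
assert (None is None) is True
```

In a truthiness context like `if ax == None:` the broadcast result would raise "truth value of an array is ambiguous", which is why the diff normalizes every such check to `is None`.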
diff --git a/pandas/tools/tests/__init__.py b/pandas/tools/tests/__init__.py
index 8b137891791fe..e69de29bb2d1d 100644
--- a/pandas/tools/tests/__init__.py
+++ b/pandas/tools/tests/__init__.py
@@ -1 +0,0 @@
-
diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py
index 91c278654e7ef..178fa5f19d8ca 100644
--- a/pandas/tools/tests/test_merge.py
+++ b/pandas/tools/tests/test_merge.py
@@ -34,6 +34,7 @@ def get_test_data(ngroups=NGROUPS, n=N):
random.shuffle(arr)
return arr
+
class TestMerge(unittest.TestCase):
_multiprocess_can_split_ = True
@@ -48,9 +49,9 @@ def setUp(self):
# exclude a couple keys for fun
self.df = self.df[self.df['key2'] > 1]
- self.df2 = DataFrame({'key1' : get_test_data(n=N//5),
- 'key2' : get_test_data(ngroups=NGROUPS//2,
- n=N//5),
+ self.df2 = DataFrame({'key1': get_test_data(n=N // 5),
+ 'key2': get_test_data(ngroups=NGROUPS // 2,
+ n=N // 5),
'value': np.random.randn(N // 5)})
index, data = tm.getMixedTypeDict()
@@ -61,9 +62,9 @@ def setUp(self):
index=data['C'])
self.left = DataFrame({'key': ['a', 'b', 'c', 'd', 'e', 'e', 'a'],
- 'v1': np.random.randn(7)})
+ 'v1': np.random.randn(7)})
self.right = DataFrame({'v2': np.random.randn(4)},
- index=['d', 'b', 'c', 'a'])
+ index=['d', 'b', 'c', 'a'])
def test_cython_left_outer_join(self):
left = a_([0, 1, 2, 1, 2, 0, 0, 1, 2, 3, 3], dtype=np.int64)
@@ -94,14 +95,14 @@ def test_cython_right_outer_join(self):
right = a_([1, 1, 0, 4, 2, 2, 1], dtype=np.int64)
max_group = 5
- rs, ls = algos.left_outer_join(right, left, max_group)
+ rs, ls = algos.left_outer_join(right, left, max_group)
exp_ls = left.argsort(kind='mergesort')
exp_rs = right.argsort(kind='mergesort')
# 0 1 1 1
exp_li = a_([0, 1, 2, 3, 4, 5, 3, 4, 5, 3, 4, 5,
- # 2 2 4
+ # 2 2 4
6, 7, 8, 6, 7, 8, -1])
exp_ri = a_([0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3,
4, 4, 4, 5, 5, 5, 6])
@@ -278,7 +279,7 @@ def test_join_on_series_buglet(self):
df = DataFrame({'a': [1, 1]})
ds = Series([2], index=[1], name='b')
result = df.join(ds, on='a')
- expected = DataFrame({'a' : [1, 1],
+ expected = DataFrame({'a': [1, 1],
'b': [2, 2]}, index=df.index)
tm.assert_frame_equal(result, expected)
@@ -327,22 +328,22 @@ def test_join_empty_bug(self):
def test_join_unconsolidated(self):
# GH #331
- a = DataFrame(randn(30,2), columns=['a','b'])
+ a = DataFrame(randn(30, 2), columns=['a', 'b'])
c = Series(randn(30))
a['c'] = c
- d = DataFrame(randn(30,1), columns=['q'])
+ d = DataFrame(randn(30, 1), columns=['q'])
# it works!
a.join(d)
d.join(a)
def test_join_multiindex(self):
- index1 = MultiIndex.from_arrays([['a','a','a','b','b','b'],
- [1,2,3,1,2,3]],
+ index1 = MultiIndex.from_arrays([['a', 'a', 'a', 'b', 'b', 'b'],
+ [1, 2, 3, 1, 2, 3]],
names=['first', 'second'])
- index2 = MultiIndex.from_arrays([['b','b','b','c','c','c'],
- [1,2,3,1,2,3]],
+ index2 = MultiIndex.from_arrays([['b', 'b', 'b', 'c', 'c', 'c'],
+ [1, 2, 3, 1, 2, 3]],
names=['first', 'second'])
df1 = DataFrame(data=np.random.randn(6), index=index1,
@@ -371,9 +372,9 @@ def test_join_multiindex(self):
def test_join_inner_multiindex(self):
key1 = ['bar', 'bar', 'bar', 'foo', 'foo', 'baz', 'baz', 'qux',
- 'qux', 'snap']
+ 'qux', 'snap']
key2 = ['two', 'one', 'three', 'one', 'two', 'one', 'two', 'two',
- 'three', 'one']
+ 'three', 'one']
data = np.random.randn(len(key1))
data = DataFrame({'key1': key1, 'key2': key2,
@@ -410,9 +411,10 @@ def test_join_inner_multiindex(self):
# _assert_same_contents(expected, expected2.ix[:, expected.columns])
def test_join_hierarchical_mixed(self):
- df = DataFrame([(1,2,3), (4,5,6)], columns = ['a','b','c'])
+ df = DataFrame([(1, 2, 3), (4, 5, 6)], columns=['a', 'b', 'c'])
new_df = df.groupby(['a']).agg({'b': [np.mean, np.sum]})
- other_df = DataFrame([(1,2,3), (7,10,6)], columns = ['a','b','d'])
+ other_df = DataFrame(
+ [(1, 2, 3), (7, 10, 6)], columns=['a', 'b', 'd'])
other_df.set_index('a', inplace=True)
result = merge(new_df, other_df, left_index=True, right_index=True)
@@ -420,8 +422,8 @@ def test_join_hierarchical_mixed(self):
self.assertTrue('b' in result)
def test_join_float64_float32(self):
- a = DataFrame(randn(10,2), columns=['a','b'])
- b = DataFrame(randn(10,1), columns=['c']).astype(np.float32)
+ a = DataFrame(randn(10, 2), columns=['a', 'b'])
+ b = DataFrame(randn(10, 1), columns=['c']).astype(np.float32)
joined = a.join(b)
expected = a.join(b.astype('f8'))
assert_frame_equal(joined, expected)
@@ -432,17 +434,17 @@ def test_join_float64_float32(self):
a = np.random.randint(0, 5, 100)
b = np.random.random(100).astype('Float64')
c = np.random.random(100).astype('Float32')
- df = DataFrame({'a': a, 'b' : b, 'c' : c})
- xpdf = DataFrame({'a': a, 'b' : b, 'c' : c.astype('Float64')})
+ df = DataFrame({'a': a, 'b': b, 'c': c})
+ xpdf = DataFrame({'a': a, 'b': b, 'c': c.astype('Float64')})
s = DataFrame(np.random.random(5).astype('f'), columns=['md'])
rs = df.merge(s, left_on='a', right_index=True)
xp = xpdf.merge(s.astype('f8'), left_on='a', right_index=True)
assert_frame_equal(rs, xp)
def test_join_many_non_unique_index(self):
- df1 = DataFrame({"a": [1,1], "b": [1,1], "c": [10,20]})
- df2 = DataFrame({"a": [1,1], "b": [1,2], "d": [100,200]})
- df3 = DataFrame({"a": [1,1], "b": [1,2], "e": [1000,2000]})
+ df1 = DataFrame({"a": [1, 1], "b": [1, 1], "c": [10, 20]})
+ df2 = DataFrame({"a": [1, 1], "b": [1, 2], "d": [100, 200]})
+ df3 = DataFrame({"a": [1, 1], "b": [1, 2], "e": [1000, 2000]})
idf1 = df1.set_index(["a", "b"])
idf2 = df2.set_index(["a", "b"])
idf3 = df3.set_index(["a", "b"])
@@ -459,9 +461,10 @@ def test_join_many_non_unique_index(self):
assert_frame_equal(result, expected.ix[:, result.columns])
- df1 = DataFrame({"a": [1, 1, 1], "b": [1,1, 1], "c": [10,20, 30]})
- df2 = DataFrame({"a": [1, 1, 1], "b": [1,1, 2], "d": [100,200, 300]})
- df3 = DataFrame({"a": [1, 1, 1], "b": [1,1, 2], "e": [1000,2000, 3000]})
+ df1 = DataFrame({"a": [1, 1, 1], "b": [1, 1, 1], "c": [10, 20, 30]})
+ df2 = DataFrame({"a": [1, 1, 1], "b": [1, 1, 2], "d": [100, 200, 300]})
+ df3 = DataFrame(
+ {"a": [1, 1, 1], "b": [1, 1, 2], "e": [1000, 2000, 3000]})
idf1 = df1.set_index(["a", "b"])
idf2 = df2.set_index(["a", "b"])
idf3 = df3.set_index(["a", "b"])
@@ -478,7 +481,7 @@ def test_merge_index_singlekey_right_vs_left(self):
left = DataFrame({'key': ['a', 'b', 'c', 'd', 'e', 'e', 'a'],
'v1': np.random.randn(7)})
right = DataFrame({'v2': np.random.randn(4)},
- index=['d', 'b', 'c', 'a'])
+ index=['d', 'b', 'c', 'a'])
merged1 = merge(left, right, left_on='key',
right_index=True, how='left', sort=False)
@@ -496,7 +499,7 @@ def test_merge_index_singlekey_inner(self):
left = DataFrame({'key': ['a', 'b', 'c', 'd', 'e', 'e', 'a'],
'v1': np.random.randn(7)})
right = DataFrame({'v2': np.random.randn(4)},
- index=['d', 'b', 'c', 'a'])
+ index=['d', 'b', 'c', 'a'])
# inner join
result = merge(left, right, left_on='key', right_index=True,
@@ -532,7 +535,7 @@ def test_merge_different_column_key_names(self):
left = DataFrame({'lkey': ['foo', 'bar', 'baz', 'foo'],
'value': [1, 2, 3, 4]})
right = DataFrame({'rkey': ['foo', 'bar', 'qux', 'foo'],
- 'value' : [5, 6, 7, 8]})
+ 'value': [5, 6, 7, 8]})
merged = left.merge(right, left_on='lkey', right_on='rkey',
how='outer', sort=True)
@@ -545,8 +548,8 @@ def test_merge_different_column_key_names(self):
assert_almost_equal(merged['value_y'], [6, np.nan, 5, 8, 5, 8, 7])
def test_merge_nocopy(self):
- left = DataFrame({'a' : 0, 'b' : 1}, index=range(10))
- right = DataFrame({'c' : 'foo', 'd' : 'bar'}, index=range(10))
+ left = DataFrame({'a': 0, 'b': 1}, index=range(10))
+ right = DataFrame({'c': 'foo', 'd': 'bar'}, index=range(10))
merged = merge(left, right, left_index=True,
right_index=True, copy=False)
@@ -558,15 +561,15 @@ def test_merge_nocopy(self):
self.assert_((right['d'] == 'peekaboo').all())
def test_join_sort(self):
- left = DataFrame({'key' : ['foo', 'bar', 'baz', 'foo'],
- 'value' : [1, 2, 3, 4]})
- right = DataFrame({'value2' : ['a', 'b', 'c']},
+ left = DataFrame({'key': ['foo', 'bar', 'baz', 'foo'],
+ 'value': [1, 2, 3, 4]})
+ right = DataFrame({'value2': ['a', 'b', 'c']},
index=['bar', 'baz', 'foo'])
joined = left.join(right, on='key', sort=True)
- expected = DataFrame({'key' : ['bar', 'baz', 'foo', 'foo'],
- 'value' : [2, 3, 1, 4],
- 'value2' : ['a', 'b', 'c', 'c']},
+ expected = DataFrame({'key': ['bar', 'baz', 'foo', 'foo'],
+ 'value': [2, 3, 1, 4],
+ 'value2': ['a', 'b', 'c', 'c']},
index=[1, 2, 0, 3])
assert_frame_equal(joined, expected)
@@ -577,25 +580,25 @@ def test_join_sort(self):
def test_intelligently_handle_join_key(self):
# #733, be a bit more 1337 about not returning unconsolidated DataFrame
- left = DataFrame({'key' : [1, 1, 2, 2, 3],
- 'value' : range(5)}, columns=['value', 'key'])
- right = DataFrame({'key' : [1, 1, 2, 3, 4, 5],
- 'rvalue' : range(6)})
+ left = DataFrame({'key': [1, 1, 2, 2, 3],
+ 'value': range(5)}, columns=['value', 'key'])
+ right = DataFrame({'key': [1, 1, 2, 3, 4, 5],
+ 'rvalue': range(6)})
joined = merge(left, right, on='key', how='outer')
- expected = DataFrame({'key' : [1, 1, 1, 1, 2, 2, 3, 4, 5.],
- 'value' : np.array([0, 0, 1, 1, 2, 3, 4,
- np.nan, np.nan]),
- 'rvalue' : np.array([0, 1, 0, 1, 2, 2, 3, 4, 5])},
+ expected = DataFrame({'key': [1, 1, 1, 1, 2, 2, 3, 4, 5.],
+ 'value': np.array([0, 0, 1, 1, 2, 3, 4,
+ np.nan, np.nan]),
+ 'rvalue': np.array([0, 1, 0, 1, 2, 2, 3, 4, 5])},
columns=['value', 'key', 'rvalue'])
assert_frame_equal(joined, expected)
self.assert_(joined._data.is_consolidated())
def test_handle_join_key_pass_array(self):
- left = DataFrame({'key' : [1, 1, 2, 2, 3],
- 'value' : range(5)}, columns=['value', 'key'])
- right = DataFrame({'rvalue' : range(6)})
+ left = DataFrame({'key': [1, 1, 2, 2, 3],
+ 'value': range(5)}, columns=['value', 'key'])
+ right = DataFrame({'rvalue': range(6)})
key = np.array([1, 1, 2, 3, 4, 5])
merged = merge(left, right, left_on='key', right_on=key, how='outer')
@@ -605,8 +608,8 @@ def test_handle_join_key_pass_array(self):
self.assert_(merged['key'].notnull().all())
self.assert_(merged2['key'].notnull().all())
- left = DataFrame({'value' : range(5)}, columns=['value'])
- right = DataFrame({'rvalue' : range(6)})
+ left = DataFrame({'value': range(5)}, columns=['value'])
+ right = DataFrame({'rvalue': range(6)})
lkey = np.array([1, 1, 2, 2, 3])
rkey = np.array([1, 1, 2, 3, 4, 5])
@@ -615,7 +618,7 @@ def test_handle_join_key_pass_array(self):
np.array([1, 1, 1, 1, 2, 2, 3, 4, 5])))
left = DataFrame({'value': range(3)})
- right = DataFrame({'rvalue' : range(6)})
+ right = DataFrame({'rvalue': range(6)})
key = np.array([0, 1, 1, 2, 2, 3])
merged = merge(left, right, left_index=True, right_on=key, how='outer')
@@ -669,7 +672,7 @@ def test_merge_non_unique_index_many_to_many(self):
dt3 = datetime(2012, 5, 3)
df1 = DataFrame({'x': ['a', 'b', 'c', 'd']},
index=[dt2, dt2, dt, dt])
- df2 = DataFrame({'y': ['e', 'f', 'g',' h', 'i']},
+ df2 = DataFrame({'y': ['e', 'f', 'g', ' h', 'i']},
index=[dt2, dt2, dt3, dt, dt])
_check_merge(df1, df2)
@@ -688,18 +691,21 @@ def test_merge_nosort(self):
from datetime import datetime
- d = {"var1" : np.random.randint(0, 10, size=10),
- "var2" : np.random.randint(0, 10, size=10),
- "var3" : [datetime(2012, 1, 12), datetime(2011, 2, 4),
- datetime(2010, 2, 3), datetime(2012, 1, 12),
- datetime(2011, 2, 4), datetime(2012, 4, 3),
- datetime(2012, 3, 4), datetime(2008, 5, 1),
- datetime(2010, 2, 3), datetime(2012, 2, 3)]}
+ d = {"var1": np.random.randint(0, 10, size=10),
+ "var2": np.random.randint(0, 10, size=10),
+ "var3": [datetime(2012, 1, 12), datetime(2011, 2, 4),
+ datetime(
+ 2010, 2, 3), datetime(2012, 1, 12),
+ datetime(
+ 2011, 2, 4), datetime(2012, 4, 3),
+ datetime(
+ 2012, 3, 4), datetime(2008, 5, 1),
+ datetime(2010, 2, 3), datetime(2012, 2, 3)]}
df = DataFrame.from_dict(d)
var3 = df.var3.unique()
var3.sort()
- new = DataFrame.from_dict({"var3" : var3,
- "var8" : np.random.random(7)})
+ new = DataFrame.from_dict({"var3": var3,
+ "var8": np.random.random(7)})
result = df.merge(new, on="var3", sort=False)
exp = merge(df, new, on='var3', sort=False)
@@ -707,6 +713,7 @@ def test_merge_nosort(self):
self.assert_((df.var3.unique() == result.var3.unique()).all())
+
def _check_merge(x, y):
for how in ['inner', 'left', 'outer']:
result = x.join(y, how=how)
@@ -717,6 +724,7 @@ def _check_merge(x, y):
assert_frame_equal(result, expected)
+
class TestMergeMulti(unittest.TestCase):
def setUp(self):
@@ -735,8 +743,8 @@ def setUp(self):
'three', 'one']
data = np.random.randn(len(key1))
- self.data = DataFrame({'key1' : key1, 'key2' : key2,
- 'data' : data})
+ self.data = DataFrame({'key1': key1, 'key2': key2,
+ 'data': data})
def test_merge_on_multikey(self):
joined = self.data.join(self.to_join, on=['key1', 'key2'])
@@ -767,23 +775,23 @@ def test_compress_group_combinations(self):
key1 = np.tile(key1, 2)
key2 = key1[::-1]
- df = DataFrame({'key1' : key1, 'key2' : key2,
- 'value1' : np.random.randn(20000)})
+ df = DataFrame({'key1': key1, 'key2': key2,
+ 'value1': np.random.randn(20000)})
- df2 = DataFrame({'key1' : key1[::2], 'key2' : key2[::2],
- 'value2' : np.random.randn(10000)})
+ df2 = DataFrame({'key1': key1[::2], 'key2': key2[::2],
+ 'value2': np.random.randn(10000)})
# just to hit the label compression code path
merged = merge(df, df2, how='outer')
def test_left_join_index_preserve_order(self):
- left = DataFrame({'k1' : [0, 1, 2] * 8,
- 'k2' : ['foo', 'bar'] * 12,
- 'v' : np.arange(24)})
+ left = DataFrame({'k1': [0, 1, 2] * 8,
+ 'k2': ['foo', 'bar'] * 12,
+ 'v': np.arange(24)})
index = MultiIndex.from_tuples([(2, 'bar'), (1, 'foo')])
- right = DataFrame({'v2' : [5, 7]}, index=index)
+ right = DataFrame({'v2': [5, 7]}, index=index)
result = left.join(right, on=['k1', 'k2'])
@@ -801,11 +809,11 @@ def test_left_join_index_preserve_order(self):
def test_left_merge_na_buglet(self):
left = DataFrame({'id': list('abcde'), 'v1': randn(5),
- 'v2': randn(5), 'dummy' : list('abcde'),
- 'v3' : randn(5)},
+ 'v2': randn(5), 'dummy': list('abcde'),
+ 'v3': randn(5)},
columns=['id', 'v1', 'v2', 'dummy', 'v3'])
- right = DataFrame({'id' : ['a', 'b', np.nan, np.nan, np.nan],
- 'sv3' : [1.234, 5.678, np.nan, np.nan, np.nan]})
+ right = DataFrame({'id': ['a', 'b', np.nan, np.nan, np.nan],
+ 'sv3': [1.234, 5.678, np.nan, np.nan, np.nan]})
merged = merge(left, right, on='id', how='left')
@@ -894,8 +902,9 @@ def _restrict_to_columns(group, columns, suffix):
return group
+
def _assert_same_contents(join_chunk, source):
- NA_SENTINEL = -1234567 # drop_duplicates not so NA-friendly...
+ NA_SENTINEL = -1234567 # drop_duplicates not so NA-friendly...
jvalues = join_chunk.fillna(NA_SENTINEL).drop_duplicates().values
svalues = source.fillna(NA_SENTINEL).drop_duplicates().values
@@ -904,6 +913,7 @@ def _assert_same_contents(join_chunk, source):
assert(len(rows) == len(source))
assert(all(tuple(row) in rows for row in svalues))
+
def _assert_all_na(join_chunk, source_columns, join_col):
for c in source_columns:
if c in join_col:
@@ -923,6 +933,7 @@ def _join_by_hand(a, b, how='left'):
a_re[col] = s
return a_re.reindex(columns=result_columns)
+
class TestConcatenate(unittest.TestCase):
_multiprocess_can_split_ = True
@@ -958,8 +969,9 @@ def test_append(self):
mixed_appended2 = self.frame[:5].append(self.mixed_frame[5:])
# all equal except 'foo' column
- assert_frame_equal(mixed_appended.reindex(columns=['A', 'B', 'C', 'D']),
- mixed_appended2.reindex(columns=['A', 'B', 'C', 'D']))
+ assert_frame_equal(
+ mixed_appended.reindex(columns=['A', 'B', 'C', 'D']),
+ mixed_appended2.reindex(columns=['A', 'B', 'C', 'D']))
# append empty
empty = DataFrame({})
@@ -985,12 +997,12 @@ def test_append_length0_frame(self):
assert_frame_equal(df5, expected)
def test_append_records(self):
- arr1 = np.zeros((2,),dtype=('i4,f4,a10'))
- arr1[:] = [(1,2.,'Hello'),(2,3.,"World")]
+ arr1 = np.zeros((2,), dtype=('i4,f4,a10'))
+ arr1[:] = [(1, 2., 'Hello'), (2, 3., "World")]
- arr2 = np.zeros((3,),dtype=('i4,f4,a10'))
- arr2[:] = [(3, 4.,'foo'),
- (5, 6.,"bar"),
+ arr2 = np.zeros((3,), dtype=('i4,f4,a10'))
+ arr2[:] = [(3, 4., 'foo'),
+ (5, 6., "bar"),
(7., 8., 'baz')]
df1 = DataFrame(arr1)
@@ -1001,10 +1013,10 @@ def test_append_records(self):
assert_frame_equal(result, expected)
def test_append_different_columns(self):
- df = DataFrame({'bools' : np.random.randn(10) > 0,
- 'ints' : np.random.randint(0, 10, 10),
- 'floats' : np.random.randn(10),
- 'strings' : ['foo', 'bar'] * 5})
+ df = DataFrame({'bools': np.random.randn(10) > 0,
+ 'ints': np.random.randint(0, 10, 10),
+ 'floats': np.random.randn(10),
+ 'strings': ['foo', 'bar'] * 5})
a = df[:5].ix[:, ['bools', 'ints', 'floats']]
b = df[5:].ix[:, ['strings', 'ints', 'floats']]
@@ -1028,10 +1040,10 @@ def test_append_many(self):
def test_append_preserve_index_name(self):
# #980
- df1 = DataFrame(data=None, columns=['A','B','C'])
+ df1 = DataFrame(data=None, columns=['A', 'B', 'C'])
df1 = df1.set_index(['A'])
- df2 = DataFrame(data=[[1,4,7], [2,5,8], [3,6,9]],
- columns=['A','B','C'])
+ df2 = DataFrame(data=[[1, 4, 7], [2, 5, 8], [3, 6, 9]],
+ columns=['A', 'B', 'C'])
df2 = df2.set_index(['A'])
result = df1.append(df2)
@@ -1052,7 +1064,6 @@ def _check_diff_index(df_list, result, exp_index):
expected = reindexed[0].join(reindexed[1:])
tm.assert_frame_equal(result, expected)
-
# different join types
joined = df_list[0].join(df_list[1:], how='outer')
_check_diff_index(df_list, joined, df.index)
@@ -1066,7 +1077,7 @@ def _check_diff_index(df_list, result, exp_index):
self.assertRaises(ValueError, df_list[0].join, df_list[1:], on='a')
def test_join_many_mixed(self):
- df = DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
+ df = DataFrame(np.random.randn(8, 4), columns=['A', 'B', 'C', 'D'])
df['key'] = ['foo', 'bar'] * 4
df1 = df.ix[:, ['A', 'B']]
df2 = df.ix[:, ['C', 'D']]
@@ -1076,9 +1087,9 @@ def test_join_many_mixed(self):
assert_frame_equal(result, df)
def test_append_missing_column_proper_upcast(self):
- df1 = DataFrame({'A' : np.array([1,2, 3, 4], dtype='i8')})
- df2 = DataFrame({'B' : np.array([True,False, True, False],
- dtype=bool)})
+ df1 = DataFrame({'A': np.array([1, 2, 3, 4], dtype='i8')})
+ df2 = DataFrame({'B': np.array([True, False, True, False],
+ dtype=bool)})
appended = df1.append(df2, ignore_index=True)
self.assert_(appended['A'].dtype == 'f8')
@@ -1132,10 +1143,10 @@ def test_concat_keys_specific_levels(self):
self.assertEqual(result.columns.names[0], 'group_key')
def test_concat_dataframe_keys_bug(self):
- t1 = DataFrame({'value': Series([1,2,3],
+ t1 = DataFrame({'value': Series([1, 2, 3],
index=Index(['a', 'b', 'c'], name='id'))})
t2 = DataFrame({'value': Series([7, 8],
- index=Index(['a', 'b'], name = 'id'))})
+ index=Index(['a', 'b'], name='id'))})
# it works
result = concat([t1, t2], axis=1, keys=['t1', 't2'])
@@ -1143,10 +1154,10 @@ def test_concat_dataframe_keys_bug(self):
('t2', 'value')])
def test_concat_dict(self):
- frames = {'foo' : DataFrame(np.random.randn(4, 3)),
- 'bar' : DataFrame(np.random.randn(4, 3)),
- 'baz' : DataFrame(np.random.randn(4, 3)),
- 'qux' : DataFrame(np.random.randn(4, 3))}
+ frames = {'foo': DataFrame(np.random.randn(4, 3)),
+ 'bar': DataFrame(np.random.randn(4, 3)),
+ 'baz': DataFrame(np.random.randn(4, 3)),
+ 'qux': DataFrame(np.random.randn(4, 3))}
sorted_keys = sorted(frames)
@@ -1166,7 +1177,7 @@ def test_concat_dict(self):
def test_concat_ignore_index(self):
frame1 = DataFrame({"test1": ["a", "b", "c"],
- "test2": [1,2,3],
+ "test2": [1, 2, 3],
"test3": [4.5, 3.2, 1.2]})
frame2 = DataFrame({"test3": [5.2, 2.2, 4.3]})
frame1.index = Index(["x", "y", "z"])
@@ -1175,7 +1186,7 @@ def test_concat_ignore_index(self):
v1 = concat([frame1, frame2], axis=1, ignore_index=True)
nan = np.nan
- expected = DataFrame([[nan,nan,nan, 4.3],
+ expected = DataFrame([[nan, nan, nan, 4.3],
['a', 1, 4.5, 5.2],
['b', 2, 3.2, 2.2],
['c', 3, 1.2, nan]],
@@ -1246,10 +1257,10 @@ def test_concat_keys_levels_no_overlap(self):
keys=['one', 'two'], levels=[['foo', 'bar', 'baz']])
def test_concat_rename_index(self):
- a = DataFrame(np.random.rand(3,3),
+ a = DataFrame(np.random.rand(3, 3),
columns=list('ABC'),
index=Index(list('abc'), name='index_a'))
- b = DataFrame(np.random.rand(3,3),
+ b = DataFrame(np.random.rand(3, 3),
columns=list('ABC'),
index=Index(list('abc'), name='index_b'))
@@ -1264,16 +1275,16 @@ def test_concat_rename_index(self):
def test_crossed_dtypes_weird_corner(self):
columns = ['A', 'B', 'C', 'D']
- df1 = DataFrame({'A' : np.array([1, 2, 3, 4], dtype='f8'),
- 'B' : np.array([1, 2, 3, 4], dtype='i8'),
- 'C' : np.array([1, 2, 3, 4], dtype='f8'),
- 'D' : np.array([1, 2, 3, 4], dtype='i8')},
+ df1 = DataFrame({'A': np.array([1, 2, 3, 4], dtype='f8'),
+ 'B': np.array([1, 2, 3, 4], dtype='i8'),
+ 'C': np.array([1, 2, 3, 4], dtype='f8'),
+ 'D': np.array([1, 2, 3, 4], dtype='i8')},
columns=columns)
- df2 = DataFrame({'A' : np.array([1, 2, 3, 4], dtype='i8'),
- 'B' : np.array([1, 2, 3, 4], dtype='f8'),
- 'C' : np.array([1, 2, 3, 4], dtype='i8'),
- 'D' : np.array([1, 2, 3, 4], dtype='f8')},
+ df2 = DataFrame({'A': np.array([1, 2, 3, 4], dtype='i8'),
+ 'B': np.array([1, 2, 3, 4], dtype='f8'),
+ 'C': np.array([1, 2, 3, 4], dtype='i8'),
+ 'D': np.array([1, 2, 3, 4], dtype='f8')},
columns=columns)
appended = df1.append(df2, ignore_index=True)
@@ -1283,7 +1294,8 @@ def test_crossed_dtypes_weird_corner(self):
df = DataFrame(np.random.randn(1, 3), index=['a'])
df2 = DataFrame(np.random.randn(1, 4), index=['b'])
- result = concat([df, df2], keys=['one', 'two'], names=['first', 'second'])
+ result = concat(
+ [df, df2], keys=['one', 'two'], names=['first', 'second'])
self.assertEqual(result.index.names, ['first', 'second'])
def test_handle_empty_objects(self):
@@ -1411,25 +1423,26 @@ def test_panel_concat_buglet(self):
# #2257
def make_panel():
index = 5
- cols = 3
+ cols = 3
+
def df():
- return DataFrame(np.random.randn(index,cols),
- index = [ "I%s" % i for i in range(index) ],
- columns = [ "C%s" % i for i in range(cols) ])
- return Panel(dict([("Item%s" % x, df()) for x in ['A','B','C']]))
+ return DataFrame(np.random.randn(index, cols),
+ index=["I%s" % i for i in range(index)],
+ columns=["C%s" % i for i in range(cols)])
+ return Panel(dict([("Item%s" % x, df()) for x in ['A', 'B', 'C']]))
panel1 = make_panel()
panel2 = make_panel()
- panel2 = panel2.rename_axis(dict([ (x,"%s_1" % x)
- for x in panel2.major_axis ]),
+ panel2 = panel2.rename_axis(dict([(x, "%s_1" % x)
+ for x in panel2.major_axis]),
axis=1)
panel3 = panel2.rename_axis(lambda x: '%s_1' % x, axis=1)
panel3 = panel3.rename_axis(lambda x: '%s_1' % x, axis=2)
# it works!
- concat([ panel1, panel3 ], axis = 1, verify_integrity = True)
+ concat([panel1, panel3], axis=1, verify_integrity=True)
def test_panel4d_concat(self):
p4d = tm.makePanel4D()
@@ -1561,11 +1574,12 @@ def test_concat_bug_1719(self):
## to join with union
## these two are of different length!
- left = concat([ts1,ts2], join='outer', axis = 1)
- right = concat([ts2,ts1], join='outer', axis = 1)
+ left = concat([ts1, ts2], join='outer', axis=1)
+ right = concat([ts2, ts1], join='outer', axis=1)
self.assertEqual(len(left), len(right))
+
class TestOrderedMerge(unittest.TestCase):
def setUp(self):
@@ -1586,7 +1600,8 @@ def test_basic(self):
assert_frame_equal(result, expected)
def test_ffill(self):
- result = ordered_merge(self.left, self.right, on='key', fill_method='ffill')
+ result = ordered_merge(
+ self.left, self.right, on='key', fill_method='ffill')
expected = DataFrame({'key': ['a', 'b', 'c', 'd', 'e', 'f'],
'lvalue': [1., 1, 2, 2, 3, 3.],
'rvalue': [nan, 1, 2, 3, 3, 4]})
@@ -1617,5 +1632,5 @@ def test_multigroup(self):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tools/tests/test_pivot.py b/pandas/tools/tests/test_pivot.py
index 133f6757f0a8d..c5cf483fb1450 100644
--- a/pandas/tools/tests/test_pivot.py
+++ b/pandas/tools/tests/test_pivot.py
@@ -7,27 +7,28 @@
from pandas.tools.pivot import pivot_table, crosstab
import pandas.util.testing as tm
+
class TestPivotTable(unittest.TestCase):
_multiprocess_can_split_ = True
def setUp(self):
- self.data = DataFrame({'A' : ['foo', 'foo', 'foo', 'foo',
- 'bar', 'bar', 'bar', 'bar',
- 'foo', 'foo', 'foo'],
- 'B' : ['one', 'one', 'one', 'two',
- 'one', 'one', 'one', 'two',
- 'two', 'two', 'one'],
- 'C' : ['dull', 'dull', 'shiny', 'dull',
- 'dull', 'shiny', 'shiny', 'dull',
- 'shiny', 'shiny', 'shiny'],
- 'D' : np.random.randn(11),
- 'E' : np.random.randn(11),
- 'F' : np.random.randn(11)})
+ self.data = DataFrame({'A': ['foo', 'foo', 'foo', 'foo',
+ 'bar', 'bar', 'bar', 'bar',
+ 'foo', 'foo', 'foo'],
+ 'B': ['one', 'one', 'one', 'two',
+ 'one', 'one', 'one', 'two',
+ 'two', 'two', 'one'],
+ 'C': ['dull', 'dull', 'shiny', 'dull',
+ 'dull', 'shiny', 'shiny', 'dull',
+ 'shiny', 'shiny', 'shiny'],
+ 'D': np.random.randn(11),
+ 'E': np.random.randn(11),
+ 'F': np.random.randn(11)})
def test_pivot_table(self):
rows = ['A', 'B']
- cols= 'C'
+ cols = 'C'
table = pivot_table(self.data, values='D', rows=rows, cols=cols)
table2 = self.data.pivot_table(values='D', rows=rows, cols=cols)
@@ -63,7 +64,7 @@ def test_pass_function(self):
def test_pivot_table_multiple(self):
rows = ['A', 'B']
- cols= 'C'
+ cols = 'C'
table = pivot_table(self.data, rows=rows, cols=cols)
expected = self.data.groupby(rows + [cols]).agg(np.mean).unstack()
tm.assert_frame_equal(table, expected)
@@ -136,7 +137,7 @@ def _check_output(res, col, rows=['A', 'B'], cols=['C']):
# no rows
rtable = self.data.pivot_table(cols=['AA', 'BB'], margins=True,
- aggfunc=np.mean)
+ aggfunc=np.mean)
self.assert_(isinstance(rtable, Series))
for item in ['DD', 'EE', 'FF']:
gmarg = table[item]['All', '']
@@ -150,12 +151,12 @@ def test_pivot_integer_columns(self):
d = datetime.date.min
data = list(product(['foo', 'bar'], ['A', 'B', 'C'], ['x1', 'x2'],
- [d + datetime.timedelta(i) for i in xrange(20)], [1.0]))
+ [d + datetime.timedelta(i) for i in xrange(20)], [1.0]))
df = pandas.DataFrame(data)
- table = df.pivot_table(values=4, rows=[0,1,3],cols=[2])
+ table = df.pivot_table(values=4, rows=[0, 1, 3], cols=[2])
df2 = df.rename(columns=str)
- table2 = df2.pivot_table(values='4', rows=['0','1','3'], cols=['2'])
+ table2 = df2.pivot_table(values='4', rows=['0', '1', '3'], cols=['2'])
tm.assert_frame_equal(table, table2)
@@ -221,21 +222,22 @@ def test_pivot_columns_lexsorted(self):
self.assert_(pivoted.columns.is_monotonic)
+
class TestCrosstab(unittest.TestCase):
def setUp(self):
- df = DataFrame({'A' : ['foo', 'foo', 'foo', 'foo',
- 'bar', 'bar', 'bar', 'bar',
- 'foo', 'foo', 'foo'],
- 'B' : ['one', 'one', 'one', 'two',
- 'one', 'one', 'one', 'two',
- 'two', 'two', 'one'],
- 'C' : ['dull', 'dull', 'shiny', 'dull',
- 'dull', 'shiny', 'shiny', 'dull',
- 'shiny', 'shiny', 'shiny'],
- 'D' : np.random.randn(11),
- 'E' : np.random.randn(11),
- 'F' : np.random.randn(11)})
+ df = DataFrame({'A': ['foo', 'foo', 'foo', 'foo',
+ 'bar', 'bar', 'bar', 'bar',
+ 'foo', 'foo', 'foo'],
+ 'B': ['one', 'one', 'one', 'two',
+ 'one', 'one', 'one', 'two',
+ 'two', 'two', 'one'],
+ 'C': ['dull', 'dull', 'shiny', 'dull',
+ 'dull', 'shiny', 'shiny', 'dull',
+ 'shiny', 'shiny', 'shiny'],
+ 'D': np.random.randn(11),
+ 'E': np.random.randn(11),
+ 'F': np.random.randn(11)})
self.df = df.append(df, ignore_index=True)
@@ -250,7 +252,8 @@ def test_crosstab_multiple(self):
result = crosstab(df['A'], [df['B'], df['C']])
expected = df.groupby(['A', 'B', 'C']).size()
- expected = expected.unstack('B').unstack('C').fillna(0).astype(np.int64)
+ expected = expected.unstack(
+ 'B').unstack('C').fillna(0).astype(np.int64)
tm.assert_frame_equal(result, expected)
result = crosstab([df['B'], df['C']], df['A'])
@@ -314,7 +317,7 @@ def test_crosstab_pass_values(self):
table = crosstab([a, b], c, values, aggfunc=np.sum,
rownames=['foo', 'bar'], colnames=['baz'])
- df = DataFrame({'foo': a, 'bar': b, 'baz': c, 'values' : values})
+ df = DataFrame({'foo': a, 'bar': b, 'baz': c, 'values': values})
expected = df.pivot_table('values', rows=['foo', 'bar'], cols='baz',
aggfunc=np.sum)
@@ -322,5 +325,5 @@ def test_crosstab_pass_values(self):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tools/tests/test_tile.py b/pandas/tools/tests/test_tile.py
index c571f2739f434..f886d545bb395 100644
--- a/pandas/tools/tests/test_tile.py
+++ b/pandas/tools/tests/test_tile.py
@@ -14,6 +14,7 @@
from numpy.testing import assert_equal, assert_almost_equal
+
class TestCut(unittest.TestCase):
def test_simple(self):
@@ -26,7 +27,7 @@ def test_bins(self):
data = np.array([.2, 1.4, 2.5, 6.2, 9.7, 2.1])
result, bins = cut(data, 3, retbins=True)
assert_equal(result.labels, [0, 0, 0, 1, 2, 0])
- assert_almost_equal(bins, [ 0.1905, 3.36666667, 6.53333333, 9.7])
+ assert_almost_equal(bins, [0.1905, 3.36666667, 6.53333333, 9.7])
def test_right(self):
data = np.array([.2, 1.4, 2.5, 6.2, 9.7, 2.1, 2.575])
@@ -38,13 +39,13 @@ def test_noright(self):
data = np.array([.2, 1.4, 2.5, 6.2, 9.7, 2.1, 2.575])
result, bins = cut(data, 4, right=False, retbins=True)
assert_equal(result.labels, [0, 0, 0, 2, 3, 0, 1])
- assert_almost_equal(bins, [ 0.2, 2.575, 4.95, 7.325, 9.7095])
+ assert_almost_equal(bins, [0.2, 2.575, 4.95, 7.325, 9.7095])
def test_arraylike(self):
data = [.2, 1.4, 2.5, 6.2, 9.7, 2.1]
result, bins = cut(data, 3, retbins=True)
assert_equal(result.labels, [0, 0, 0, 1, 2, 0])
- assert_almost_equal(bins, [ 0.1905, 3.36666667, 6.53333333, 9.7])
+ assert_almost_equal(bins, [0.1905, 3.36666667, 6.53333333, 9.7])
def test_bins_not_monotonic(self):
data = [.2, 1.4, 2.5, 6.2, 9.7, 2.1]
@@ -185,7 +186,6 @@ def test_label_formatting(self):
result = tmod._format_label(117.9998, precision=3)
self.assertEquals(result, '118')
-
def test_qcut_binning_issues(self):
# #1978, 1979
path = os.path.join(curpath(), 'cut_data.csv')
@@ -210,13 +210,12 @@ def test_qcut_binning_issues(self):
self.assertTrue(ep < en)
self.assertTrue(ep <= sn)
+
def curpath():
pth, _ = os.path.split(os.path.abspath(__file__))
return pth
if __name__ == '__main__':
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
-
-
diff --git a/pandas/tools/tests/test_tools.py b/pandas/tools/tests/test_tools.py
index baaf78cabcd32..b57ff68c97e3d 100644
--- a/pandas/tools/tests/test_tools.py
+++ b/pandas/tools/tests/test_tools.py
@@ -1,21 +1,22 @@
-#import unittest
+# import unittest
from pandas import DataFrame
from pandas.tools.describe import value_range
import numpy as np
+
def test_value_range():
df = DataFrame(np.random.randn(5, 5))
- df.ix[0,2] = -5
- df.ix[2,0] = 5
+ df.ix[0, 2] = -5
+ df.ix[2, 0] = 5
res = value_range(df)
- assert( res['Minimum'] == -5 )
- assert( res['Maximum'] == 5 )
+ assert(res['Minimum'] == -5)
+ assert(res['Maximum'] == 5)
- df.ix[0,1] = np.NaN
+ df.ix[0, 1] = np.NaN
- assert( res['Minimum'] == -5 )
- assert( res['Maximum'] == 5 )
+ assert(res['Minimum'] == -5)
+ assert(res['Maximum'] == 5)
diff --git a/pandas/tools/tile.py b/pandas/tools/tile.py
index c17a51a7cf6e5..96b4db4520556 100644
--- a/pandas/tools/tile.py
+++ b/pandas/tools/tile.py
@@ -62,7 +62,7 @@ def cut(x, bins, right=True, labels=None, retbins=False, precision=3,
>>> cut(np.ones(5), 4, labels=False)
array([2, 2, 2, 2, 2])
"""
- #NOTE: this binning code is changed a bit from histogram for var(x) == 0
+ # NOTE: this binning code is changed a bit from histogram for var(x) == 0
if not np.iterable(bins):
if np.isscalar(bins) and bins < 1:
raise ValueError("`bins` should be a positive integer.")
@@ -190,6 +190,7 @@ def _bins_to_cuts(x, bins, right=True, labels=None, retbins=False,
return fac, bins
+
def _format_levels(bins, prec, right=True,
include_lowest=False):
fmt = lambda v: _format_label(v, precision=prec)
@@ -209,7 +210,7 @@ def _format_levels(bins, prec, right=True,
levels[0] = '[' + levels[0][1:]
else:
levels = ['[%s, %s)' % (fmt(a), fmt(b))
- for a, b in zip(bins, bins[1:])]
+ for a, b in zip(bins, bins[1:])]
return levels
diff --git a/pandas/tools/util.py b/pandas/tools/util.py
index f10bfe1ea15df..d4c7190b0d782 100644
--- a/pandas/tools/util.py
+++ b/pandas/tools/util.py
@@ -1,5 +1,6 @@
from pandas.core.index import Index
+
def match(needles, haystack):
haystack = Index(haystack)
needles = Index(needles)
diff --git a/pandas/tseries/converter.py b/pandas/tseries/converter.py
index 42f66b97ee6bf..dc0df89d1ef9c 100644
--- a/pandas/tseries/converter.py
+++ b/pandas/tseries/converter.py
@@ -28,6 +28,7 @@ def register():
units.registry[pydt.date] = DatetimeConverter()
units.registry[pydt.time] = TimeConverter()
+
def _to_ordinalf(tm):
tot_sec = (tm.hour * 3600 + tm.minute * 60 + tm.second +
float(tm.microsecond / 1e6))
@@ -51,7 +52,7 @@ class TimeConverter(units.ConversionInterface):
def convert(value, unit, axis):
valid_types = (str, pydt.time)
if (isinstance(value, valid_types) or com.is_integer(value) or
- com.is_float(value)):
+ com.is_float(value)):
return time2num(value)
if isinstance(value, Index):
return value.map(time2num)
@@ -106,7 +107,7 @@ def convert(values, units, axis):
raise TypeError('Axis must have `freq` set to convert to Periods')
valid_types = (str, datetime, Period, pydt.date, pydt.time)
if (isinstance(values, valid_types) or com.is_integer(values) or
- com.is_float(values)):
+ com.is_float(values)):
return get_datevalue(values, axis.freq)
if isinstance(values, Index):
return values.map(lambda x: get_datevalue(x, axis.freq))
@@ -207,12 +208,12 @@ def __init__(self, locator, tz=None, defaultfmt='%Y-%m-%d'):
if self._tz is dates.UTC:
self._tz._utcoffset = self._tz.utcoffset(None)
self.scaled = {
- 365.0: '%Y',
- 30.: '%b %Y',
- 1.0: '%b %d %Y',
- 1. / 24.: '%H:%M:%S',
- 1. / 24. / 3600. / 1000.: '%H:%M:%S.%f'
- }
+ 365.0: '%Y',
+ 30.: '%b %Y',
+ 1.0: '%b %d %Y',
+ 1. / 24.: '%H:%M:%S',
+ 1. / 24. / 3600. / 1000.: '%H:%M:%S.%f'
+ }
def _get_fmt(self, x):
@@ -317,7 +318,7 @@ def __call__(self):
raise RuntimeError(('MillisecondLocator estimated to generate %d '
'ticks from %s to %s: exceeds Locator.MAXTICKS'
'* 2 (%d) ') %
- (estimate, dmin, dmax, self.MAXTICKS * 2))
+ (estimate, dmin, dmax, self.MAXTICKS * 2))
freq = '%dL' % self._get_interval()
tz = self.tz.tzname(None)
@@ -329,7 +330,7 @@ def __call__(self):
if len(all_dates) > 0:
locs = self.raise_if_exceeds(dates.date2num(all_dates))
return locs
- except Exception, e: #pragma: no cover
+ except Exception, e: # pragma: no cover
pass
lims = dates.date2num([dmin, dmax])
@@ -497,7 +498,7 @@ def _daily_finder(vmin, vmax, freq):
def first_label(label_flags):
if (label_flags[0] == 0) and (label_flags.size > 1) and \
- ((vmin_orig % 1) > 0.0):
+ ((vmin_orig % 1) > 0.0):
return label_flags[1]
else:
return label_flags[0]
@@ -542,26 +543,43 @@ def _second_finder(label_interval):
info['min'][second_start & (_second % label_interval == 0)] = True
year_start = period_break(dates_, 'year')
info_fmt = info['fmt']
- info_fmt[second_start & (_second % label_interval == 0)] = '%H:%M:%S'
+ info_fmt[second_start & (_second %
+ label_interval == 0)] = '%H:%M:%S'
info_fmt[day_start] = '%H:%M:%S\n%d-%b'
info_fmt[year_start] = '%H:%M:%S\n%d-%b\n%Y'
- if span < periodsperday / 12000.0: _second_finder(1)
- elif span < periodsperday / 6000.0: _second_finder(2)
- elif span < periodsperday / 2400.0: _second_finder(5)
- elif span < periodsperday / 1200.0: _second_finder(10)
- elif span < periodsperday / 800.0: _second_finder(15)
- elif span < periodsperday / 400.0: _second_finder(30)
- elif span < periodsperday / 150.0: _minute_finder(1)
- elif span < periodsperday / 70.0: _minute_finder(2)
- elif span < periodsperday / 24.0: _minute_finder(5)
- elif span < periodsperday / 12.0: _minute_finder(15)
- elif span < periodsperday / 6.0: _minute_finder(30)
- elif span < periodsperday / 2.5: _hour_finder(1, False)
- elif span < periodsperday / 1.5: _hour_finder(2, False)
- elif span < periodsperday * 1.25: _hour_finder(3, False)
- elif span < periodsperday * 2.5: _hour_finder(6, True)
- elif span < periodsperday * 4: _hour_finder(12, True)
+ if span < periodsperday / 12000.0:
+ _second_finder(1)
+ elif span < periodsperday / 6000.0:
+ _second_finder(2)
+ elif span < periodsperday / 2400.0:
+ _second_finder(5)
+ elif span < periodsperday / 1200.0:
+ _second_finder(10)
+ elif span < periodsperday / 800.0:
+ _second_finder(15)
+ elif span < periodsperday / 400.0:
+ _second_finder(30)
+ elif span < periodsperday / 150.0:
+ _minute_finder(1)
+ elif span < periodsperday / 70.0:
+ _minute_finder(2)
+ elif span < periodsperday / 24.0:
+ _minute_finder(5)
+ elif span < periodsperday / 12.0:
+ _minute_finder(15)
+ elif span < periodsperday / 6.0:
+ _minute_finder(30)
+ elif span < periodsperday / 2.5:
+ _hour_finder(1, False)
+ elif span < periodsperday / 1.5:
+ _hour_finder(2, False)
+ elif span < periodsperday * 1.25:
+ _hour_finder(3, False)
+ elif span < periodsperday * 2.5:
+ _hour_finder(6, True)
+ elif span < periodsperday * 4:
+ _hour_finder(12, True)
else:
info_maj[month_start] = True
info_min[day_start] = True
@@ -887,6 +905,8 @@ def autoscale(self):
#####-------------------------------------------------------------------------
#---- --- Formatter ---
#####-------------------------------------------------------------------------
+
+
class TimeSeries_DateFormatter(Formatter):
"""
Formats the ticks along an axis controlled by a :class:`PeriodIndex`.
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index 299e93ddfe74d..3bf29af8581a9 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -23,6 +23,7 @@ class FreqGroup(object):
FR_MIN = 8000
FR_SEC = 9000
+
class Resolution(object):
RESO_US = 0
@@ -33,15 +34,17 @@ class Resolution(object):
@classmethod
def get_str(cls, reso):
- return {RESO_US : 'microsecond',
- RESO_SEC : 'second',
- RESO_MIN : 'minute',
- RESO_HR : 'hour',
- RESO_DAY : 'day'}.get(reso, 'day')
+ return {RESO_US: 'microsecond',
+ RESO_SEC: 'second',
+ RESO_MIN: 'minute',
+ RESO_HR: 'hour',
+ RESO_DAY: 'day'}.get(reso, 'day')
+
def get_reso_string(reso):
return Resolution.get_str(reso)
+
def get_to_timestamp_base(base):
if base <= FreqGroup.FR_WK:
return FreqGroup.FR_DAY
@@ -78,11 +81,11 @@ def get_freq_code(freqstr):
if isinstance(freqstr, tuple):
if (com.is_integer(freqstr[0]) and
- com.is_integer(freqstr[1])):
- #e.g., freqstr = (2000, 1)
+ com.is_integer(freqstr[1])):
+ # e.g., freqstr = (2000, 1)
return freqstr
else:
- #e.g., freqstr = ('T', 5)
+ # e.g., freqstr = ('T', 5)
try:
code = _period_str_to_code(freqstr[0])
stride = freqstr[1]
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index 1ff675bbed83a..891d7a1b52d07 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -8,8 +8,9 @@
from pandas.core.common import isnull
from pandas.core.index import Index, Int64Index
-from pandas.tseries.frequencies import (infer_freq, to_offset, get_period_alias,
- Resolution, get_reso_string)
+from pandas.tseries.frequencies import (
+ infer_freq, to_offset, get_period_alias,
+ Resolution, get_reso_string)
from pandas.tseries.offsets import DateOffset, generate_range, Tick
from pandas.tseries.tools import parse_time_string, normalize_date
from pandas.util.decorators import cache_readonly
@@ -256,7 +257,7 @@ def __new__(cls, data=None,
tz = tools._maybe_get_tz(tz)
if (not isinstance(data, DatetimeIndex) or
- getattr(data, 'tz', None) is None):
+ getattr(data, 'tz', None) is None):
# Convert tz-naive to UTC
ints = subarr.view('i8')
subarr = tslib.tz_localize_to_utc(ints, tz)
@@ -337,7 +338,7 @@ def _generate(cls, start, end, periods, name, offset,
if (offset._should_cache() and
not (offset._normalize_cache and not _normalized) and
- _naive_in_cache_range(start, end)):
+ _naive_in_cache_range(start, end)):
index = cls._cached_range(start, end, periods=periods,
offset=offset, name=name)
else:
@@ -362,7 +363,7 @@ def _generate(cls, start, end, periods, name, offset,
if (offset._should_cache() and
not (offset._normalize_cache and not _normalized) and
- _naive_in_cache_range(start, end)):
+ _naive_in_cache_range(start, end)):
index = cls._cached_range(start, end, periods=periods,
offset=offset, name=name)
else:
@@ -909,7 +910,7 @@ def _wrap_joined_index(self, joined, other):
name = self.name if self.name == other.name else None
if (isinstance(other, DatetimeIndex)
and self.offset == other.offset
- and self._can_fast_union(other)):
+ and self._can_fast_union(other)):
joined = self._view_like(joined)
joined.name = name
return joined
@@ -1304,7 +1305,7 @@ def equals(self, other):
return True
if (not hasattr(other, 'inferred_type') or
- other.inferred_type != 'datetime64'):
+ other.inferred_type != 'datetime64'):
if self.offset is not None:
return False
try:
@@ -1315,7 +1316,8 @@ def equals(self, other):
if self.tz is not None:
if other.tz is None:
return False
- same_zone = tslib.get_timezone(self.tz) == tslib.get_timezone(other.tz)
+ same_zone = tslib.get_timezone(
+ self.tz) == tslib.get_timezone(other.tz)
else:
if other.tz is not None:
return False
@@ -1645,6 +1647,7 @@ def _time_to_micros(time):
seconds = time.hour * 60 * 60 + 60 * time.minute + time.second
return 1000000 * seconds + time.microsecond
+
def _process_concat_data(to_concat, name):
klass = Index
kwargs = {}
@@ -1684,7 +1687,7 @@ def _process_concat_data(to_concat, name):
to_concat = [x.values for x in to_concat]
klass = DatetimeIndex
- kwargs = {'tz' : tz}
+ kwargs = {'tz': tz}
concat = com._concat_compat
else:
for i, x in enumerate(to_concat):
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index c340ac261592f..24594d1bea9d6 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -116,7 +116,7 @@ def __repr__(self):
attrs = []
for attr in self.__dict__:
if ((attr == 'kwds' and len(self.kwds) == 0)
- or attr.startswith('_')):
+ or attr.startswith('_')):
continue
if attr not in exclude:
attrs.append('='.join((attr, repr(getattr(self, attr)))))
@@ -415,7 +415,8 @@ def apply(self, other):
n = self.n
wkday, days_in_month = tslib.monthrange(other.year, other.month)
- lastBDay = days_in_month - max(((wkday + days_in_month - 1) % 7) - 4, 0)
+ lastBDay = days_in_month - max(((wkday + days_in_month - 1)
+ % 7) - 4, 0)
if n > 0 and not other.day >= lastBDay:
n = n - 1
@@ -630,7 +631,8 @@ def apply(self, other):
n = self.n
wkday, days_in_month = tslib.monthrange(other.year, other.month)
- lastBDay = days_in_month - max(((wkday + days_in_month - 1) % 7) - 4, 0)
+ lastBDay = days_in_month - max(((wkday + days_in_month - 1)
+ % 7) - 4, 0)
monthsToGo = 3 - ((other.month - self.startingMonth) % 3)
if monthsToGo == 3:
@@ -824,11 +826,11 @@ def apply(self, other):
years = n
if n > 0:
if (other.month < self.month or
- (other.month == self.month and other.day < lastBDay)):
+ (other.month == self.month and other.day < lastBDay)):
years -= 1
elif n <= 0:
if (other.month > self.month or
- (other.month == self.month and other.day > lastBDay)):
+ (other.month == self.month and other.day > lastBDay)):
years += 1
other = other + relativedelta(years=years)
@@ -872,11 +874,11 @@ def apply(self, other):
if n > 0: # roll back first for positive n
if (other.month < self.month or
- (other.month == self.month and other.day < first)):
+ (other.month == self.month and other.day < first)):
years -= 1
elif n <= 0: # roll forward
if (other.month > self.month or
- (other.month == self.month and other.day > first)):
+ (other.month == self.month and other.day > first)):
years += 1
# set first bday for result
@@ -928,7 +930,7 @@ def _decrement(date):
def _rollf(date):
if (date.month != self.month or
- date.day < tslib.monthrange(date.year, date.month)[1]):
+ date.day < tslib.monthrange(date.year, date.month)[1]):
date = _increment(date)
return date
@@ -1073,6 +1075,7 @@ def rule_code(self):
def isAnchored(self):
return False
+
def _delta_to_tick(delta):
if delta.microseconds == 0:
if delta.seconds == 0:
@@ -1107,25 +1110,31 @@ class Day(Tick, CacheableOffset):
_inc = timedelta(1)
_rule_base = 'D'
+
class Hour(Tick):
_inc = timedelta(0, 3600)
_rule_base = 'H'
+
class Minute(Tick):
_inc = timedelta(0, 60)
_rule_base = 'T'
+
class Second(Tick):
_inc = timedelta(0, 1)
_rule_base = 'S'
+
class Milli(Tick):
_rule_base = 'L'
+
class Micro(Tick):
_inc = timedelta(microseconds=1)
_rule_base = 'U'
+
class Nano(Tick):
_inc = 1
_rule_base = 'N'
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index 5be006ea6e200..75decb91485ca 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -429,6 +429,8 @@ def dt64arr_to_periodarr(data, freq, tz):
return tslib.dt64arr_to_periodarr(data.view('i8'), base, tz)
# --- Period index sketch
+
+
def _period_index_cmp(opname):
"""
Wrap comparison operations to convert datetime-like to datetime64
diff --git a/pandas/tseries/plotting.py b/pandas/tseries/plotting.py
index 5fe01161c996c..fff1c33e70e1f 100644
--- a/pandas/tseries/plotting.py
+++ b/pandas/tseries/plotting.py
@@ -25,6 +25,7 @@
#----------------------------------------------------------------------
# Plotting functions and monkey patches
+
def tsplot(series, plotf, **kwargs):
"""
Plots a Series on the given Matplotlib axes or the current axes
@@ -99,7 +100,7 @@ def _maybe_resample(series, ax, freq, plotf, kwargs):
elif frequencies.is_subperiod(freq, ax_freq) or _is_sub(freq, ax_freq):
_upsample_others(ax, freq, plotf, kwargs)
ax_freq = freq
- else: #pragma: no cover
+ else: # pragma: no cover
raise ValueError('Incompatible frequency conversion')
return freq, ax_freq, series
@@ -140,12 +141,13 @@ def _upsample_others(ax, freq, plotf, kwargs):
labels.extend(rlabels)
if (legend is not None and kwargs.get('legend', True) and
- len(lines) > 0):
+ len(lines) > 0):
title = legend.get_title().get_text()
if title == 'None':
title = None
ax.legend(lines, labels, loc='best', title=title)
+
def _replot_ax(ax, freq, plotf, kwargs):
data = getattr(ax, '_plot_data', None)
ax._plot_data = []
diff --git a/pandas/tseries/resample.py b/pandas/tseries/resample.py
index 39cbb26f4f255..4cc70692ee85f 100644
--- a/pandas/tseries/resample.py
+++ b/pandas/tseries/resample.py
@@ -44,7 +44,7 @@ def __init__(self, freq='Min', closed=None, label=None, how='mean',
end_types = set(['M', 'A', 'Q', 'BM', 'BA', 'BQ', 'W'])
rule = self.freq.rule_code
if (rule in end_types or
- ('-' in rule and rule[:rule.find('-')] in end_types)):
+ ('-' in rule and rule[:rule.find('-')] in end_types)):
if closed is None:
closed = 'right'
if label is None:
@@ -133,7 +133,7 @@ def _get_time_bins(self, axis):
# a little hack
trimmed = False
if (len(binner) > 2 and binner[-2] == axis[-1] and
- self.closed == 'right'):
+ self.closed == 'right'):
binner = binner[:-1]
trimmed = True
@@ -224,7 +224,7 @@ def _resample_timestamps(self, obj):
if isinstance(loffset, (DateOffset, timedelta)):
if (isinstance(result.index, DatetimeIndex)
- and len(result.index) > 0):
+ and len(result.index) > 0):
result.index = result.index + loffset
diff --git a/pandas/tseries/tests/test_converter.py b/pandas/tseries/tests/test_converter.py
index dc5eb1cd6cc27..dc5d5cf67995b 100644
--- a/pandas/tseries/tests/test_converter.py
+++ b/pandas/tseries/tests/test_converter.py
@@ -12,8 +12,10 @@
except ImportError:
raise nose.SkipTest
+
def test_timtetonum_accepts_unicode():
- assert(converter.time2num("00:01")==converter.time2num(u"00:01"))
+ assert(converter.time2num("00:01") == converter.time2num(u"00:01"))
+
class TestDateTimeConverter(unittest.TestCase):
@@ -22,9 +24,9 @@ def setUp(self):
self.tc = converter.TimeFormatter(None)
def test_convert_accepts_unicode(self):
- r1 = self.dtc.convert("12:22",None,None)
- r2 = self.dtc.convert(u"12:22",None,None)
- assert(r1==r2), "DatetimeConverter.convert should accept unicode"
+ r1 = self.dtc.convert("12:22", None, None)
+ r2 = self.dtc.convert(u"12:22", None, None)
+ assert(r1 == r2), "DatetimeConverter.convert should accept unicode"
def test_conversion(self):
rs = self.dtc.convert(['2012-1-1'], None, None)[0]
@@ -49,5 +51,5 @@ def test_time_formatter(self):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tseries/tests/test_daterange.py b/pandas/tseries/tests/test_daterange.py
index 625eadbc140c2..1a844cdb4f77c 100644
--- a/pandas/tseries/tests/test_daterange.py
+++ b/pandas/tseries/tests/test_daterange.py
@@ -14,12 +14,14 @@
import pandas.core.datetools as datetools
+
def eq_gen_range(kwargs, expected):
rng = generate_range(**kwargs)
assert(np.array_equal(list(rng), expected))
START, END = datetime(2009, 1, 1), datetime(2010, 1, 1)
+
class TestGenRangeGeneration(unittest.TestCase):
def test_generate(self):
rng1 = list(generate_range(START, END, offset=datetools.bday))
@@ -38,10 +40,11 @@ def test_2(self):
datetime(2008, 1, 3)])
def test_3(self):
- eq_gen_range(dict(start = datetime(2008, 1, 5),
- end = datetime(2008, 1, 6)),
+ eq_gen_range(dict(start=datetime(2008, 1, 5),
+ end=datetime(2008, 1, 6)),
[])
+
class TestDateRange(unittest.TestCase):
def setUp(self):
@@ -235,8 +238,8 @@ def test_intersection(self):
def test_intersection_bug(self):
# GH #771
- a = bdate_range('11/30/2011','12/31/2011')
- b = bdate_range('12/10/2011','12/20/2011')
+ a = bdate_range('11/30/2011', '12/31/2011')
+ b = bdate_range('12/10/2011', '12/20/2011')
result = a.intersection(b)
self.assert_(result.equals(b))
@@ -296,9 +299,7 @@ def test_range_bug(self):
self.assert_(np.array_equal(result, DatetimeIndex(exp_values)))
-
-
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tseries/tests/test_frequencies.py b/pandas/tseries/tests/test_frequencies.py
index 3a96d59d3dd1c..aad831ae48a64 100644
--- a/pandas/tseries/tests/test_frequencies.py
+++ b/pandas/tseries/tests/test_frequencies.py
@@ -16,6 +16,7 @@
import pandas.lib as lib
+
def test_to_offset_multiple():
freqstr = '2h30min'
freqstr2 = '2h 30min'
@@ -53,12 +54,13 @@ def test_to_offset_multiple():
else:
assert(False)
+
def test_to_offset_negative():
freqstr = '-1S'
result = to_offset(freqstr)
assert(result.n == -1)
- freqstr='-5min10s'
+ freqstr = '-5min10s'
result = to_offset(freqstr)
assert(result.n == -310)
@@ -75,6 +77,7 @@ def test_anchored_shortcuts():
_dti = DatetimeIndex
+
class TestFrequencyInference(unittest.TestCase):
def test_raise_if_too_few(self):
@@ -197,7 +200,6 @@ def _check_generated_range(self, start, freq):
(inf_freq == 'Q-OCT' and
gen.freqstr in ('Q-OCT', 'Q-JUL', 'Q-APR', 'Q-JAN')))
-
gen = date_range(start, periods=5, freq=freq)
index = _dti(gen.values)
if not freq.startswith('Q-'):
@@ -243,6 +245,7 @@ def test_non_datetimeindex(self):
MONTHS = ['JAN', 'FEB', 'MAR', 'APR', 'MAY', 'JUN', 'JUL', 'AUG', 'SEP',
'OCT', 'NOV', 'DEC']
+
def test_is_superperiod_subperiod():
assert(fmod.is_superperiod(offsets.YearEnd(), offsets.MonthEnd()))
assert(fmod.is_subperiod(offsets.MonthEnd(), offsets.YearEnd()))
@@ -252,5 +255,5 @@ def test_is_superperiod_subperiod():
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tseries/tests/test_offsets.py b/pandas/tseries/tests/test_offsets.py
index 7a368a26115fc..b95a0bb2eff2c 100644
--- a/pandas/tseries/tests/test_offsets.py
+++ b/pandas/tseries/tests/test_offsets.py
@@ -23,25 +23,30 @@
_multiprocess_can_split_ = True
+
def test_monthrange():
import calendar
- for y in range(2000,2013):
- for m in range(1,13):
- assert monthrange(y,m) == calendar.monthrange(y,m)
+ for y in range(2000, 2013):
+ for m in range(1, 13):
+ assert monthrange(y, m) == calendar.monthrange(y, m)
####
## Misc function tests
####
+
+
def test_format():
actual = format(datetime(2008, 1, 15))
assert actual == '20080115'
+
def test_ole2datetime():
actual = ole2datetime(60000)
assert actual == datetime(2064, 4, 8)
assert_raises(Exception, ole2datetime, 60)
+
def test_to_datetime1():
actual = to_datetime(datetime(2008, 1, 15))
assert actual == datetime(2008, 1, 15)
@@ -53,17 +58,19 @@ def test_to_datetime1():
s = 'Month 1, 1999'
assert to_datetime(s) == s
+
def test_normalize_date():
actual = normalize_date(datetime(2007, 10, 1, 1, 12, 5, 10))
assert actual == datetime(2007, 10, 1)
+
def test_to_m8():
valb = datetime(2007, 10, 1)
valu = _to_m8(valb)
assert type(valu) == np.datetime64
- #assert valu == np.datetime64(datetime(2007,10,1))
+ # assert valu == np.datetime64(datetime(2007,10,1))
-#def test_datetime64_box():
+# def test_datetime64_box():
# valu = np.datetime64(datetime(2007,10,1))
# valb = _dt_box(valu)
# assert type(valb) == datetime
@@ -73,8 +80,10 @@ def test_to_m8():
### DateOffset Tests
#####
+
class TestDateOffset(unittest.TestCase):
_multiprocess_can_split_ = True
+
def setUp(self):
self.d = Timestamp(datetime(2008, 1, 2))
@@ -111,8 +120,10 @@ def test_eq(self):
self.assert_(offset1 != offset2)
self.assert_(not (offset1 == offset2))
+
class TestBusinessDay(unittest.TestCase):
_multiprocess_can_split_ = True
+
def setUp(self):
self.d = datetime(2008, 1, 1)
@@ -158,30 +169,31 @@ def testSub(self):
self.assertRaises(Exception, off.__sub__, self.d)
self.assertEqual(2 * off - off, off)
- self.assertEqual(self.d - self.offset2, self.d + BDay(-2))
+ self.assertEqual(self.d - self.offset2, self.d + BDay(-2))
def testRSub(self):
self.assertEqual(self.d - self.offset2, (-self.offset2).apply(self.d))
def testMult1(self):
- self.assertEqual(self.d + 10*self.offset, self.d + BDay(10))
+ self.assertEqual(self.d + 10 * self.offset, self.d + BDay(10))
def testMult2(self):
- self.assertEqual(self.d + (-5*BDay(-10)),
+ self.assertEqual(self.d + (-5 * BDay(-10)),
self.d + BDay(50))
-
def testRollback1(self):
self.assertEqual(BDay(10).rollback(self.d), self.d)
def testRollback2(self):
- self.assertEqual(BDay(10).rollback(datetime(2008, 1, 5)), datetime(2008, 1, 4))
+ self.assertEqual(
+ BDay(10).rollback(datetime(2008, 1, 5)), datetime(2008, 1, 4))
def testRollforward1(self):
self.assertEqual(BDay(10).rollforward(self.d), self.d)
def testRollforward2(self):
- self.assertEqual(BDay(10).rollforward(datetime(2008, 1, 5)), datetime(2008, 1, 7))
+ self.assertEqual(
+ BDay(10).rollforward(datetime(2008, 1, 5)), datetime(2008, 1, 7))
def test_roll_date_object(self):
offset = BDay()
@@ -218,7 +230,7 @@ def test_apply(self):
datetime(2008, 1, 6): datetime(2008, 1, 7),
datetime(2008, 1, 7): datetime(2008, 1, 8)}))
- tests.append((2*bday,
+ tests.append((2 * bday,
{datetime(2008, 1, 1): datetime(2008, 1, 3),
datetime(2008, 1, 4): datetime(2008, 1, 8),
datetime(2008, 1, 5): datetime(2008, 1, 8),
@@ -233,7 +245,7 @@ def test_apply(self):
datetime(2008, 1, 7): datetime(2008, 1, 4),
datetime(2008, 1, 8): datetime(2008, 1, 7)}))
- tests.append((-2*bday,
+ tests.append((-2 * bday,
{datetime(2008, 1, 1): datetime(2007, 12, 28),
datetime(2008, 1, 4): datetime(2008, 1, 2),
datetime(2008, 1, 5): datetime(2008, 1, 3),
@@ -271,10 +283,12 @@ def test_offsets_compare_equal(self):
offset2 = BDay()
self.assertFalse(offset1 != offset2)
+
def assertOnOffset(offset, date, expected):
actual = offset.onOffset(date)
assert actual == expected
+
class TestWeek(unittest.TestCase):
def test_corner(self):
self.assertRaises(Exception, Week, weekday=7)
@@ -289,28 +303,28 @@ def test_isAnchored(self):
def test_offset(self):
tests = []
- tests.append((Week(), # not business week
+ tests.append((Week(), # not business week
{datetime(2008, 1, 1): datetime(2008, 1, 8),
datetime(2008, 1, 4): datetime(2008, 1, 11),
datetime(2008, 1, 5): datetime(2008, 1, 12),
datetime(2008, 1, 6): datetime(2008, 1, 13),
datetime(2008, 1, 7): datetime(2008, 1, 14)}))
- tests.append((Week(weekday=0), # Mon
+ tests.append((Week(weekday=0), # Mon
{datetime(2007, 12, 31): datetime(2008, 1, 7),
datetime(2008, 1, 4): datetime(2008, 1, 7),
datetime(2008, 1, 5): datetime(2008, 1, 7),
datetime(2008, 1, 6): datetime(2008, 1, 7),
datetime(2008, 1, 7): datetime(2008, 1, 14)}))
- tests.append((Week(0, weekday=0), # n=0 -> roll forward. Mon
+ tests.append((Week(0, weekday=0), # n=0 -> roll forward. Mon
{datetime(2007, 12, 31): datetime(2007, 12, 31),
datetime(2008, 1, 4): datetime(2008, 1, 7),
datetime(2008, 1, 5): datetime(2008, 1, 7),
datetime(2008, 1, 6): datetime(2008, 1, 7),
datetime(2008, 1, 7): datetime(2008, 1, 7)}))
- tests.append((Week(-2, weekday=1), # n=0 -> roll forward. Mon
+    tests.append((Week(-2, weekday=1),  # n=-2 -> roll back two weeks. Tue
{datetime(2010, 4, 6): datetime(2010, 3, 23),
datetime(2010, 4, 8): datetime(2010, 3, 30),
datetime(2010, 4, 5): datetime(2010, 3, 23)}))
@@ -338,6 +352,7 @@ def test_offsets_compare_equal(self):
offset2 = Week()
self.assertFalse(offset1 != offset2)
+
class TestWeekOfMonth(unittest.TestCase):
def test_constructor(self):
@@ -348,10 +363,10 @@ def test_constructor(self):
self.assertRaises(Exception, WeekOfMonth, n=1, week=0, weekday=7)
def test_offset(self):
- date1 = datetime(2011, 1, 4) # 1st Tuesday of Month
- date2 = datetime(2011, 1, 11) # 2nd Tuesday of Month
- date3 = datetime(2011, 1, 18) # 3rd Tuesday of Month
- date4 = datetime(2011, 1, 25) # 4th Tuesday of Month
+ date1 = datetime(2011, 1, 4) # 1st Tuesday of Month
+ date2 = datetime(2011, 1, 11) # 2nd Tuesday of Month
+ date3 = datetime(2011, 1, 18) # 3rd Tuesday of Month
+ date4 = datetime(2011, 1, 25) # 4th Tuesday of Month
# see for loop for structure
test_cases = [
@@ -413,6 +428,7 @@ def test_onOffset(self):
offset = WeekOfMonth(week=week, weekday=weekday)
self.assert_(offset.onOffset(date) == expected)
+
class TestBMonthBegin(unittest.TestCase):
def test_offset(self):
tests = []
@@ -534,13 +550,14 @@ def test_offsets_compare_equal(self):
offset2 = BMonthEnd()
self.assertFalse(offset1 != offset2)
+
class TestMonthBegin(unittest.TestCase):
def test_offset(self):
tests = []
- #NOTE: I'm not entirely happy with the logic here for Begin -ss
- #see thread 'offset conventions' on the ML
+ # NOTE: I'm not entirely happy with the logic here for Begin -ss
+ # see thread 'offset conventions' on the ML
tests.append((MonthBegin(),
{datetime(2008, 1, 31): datetime(2008, 2, 1),
datetime(2008, 2, 1): datetime(2008, 3, 1),
@@ -573,6 +590,7 @@ def test_offset(self):
for base, expected in cases.iteritems():
assertEq(offset, base, expected)
+
class TestMonthEnd(unittest.TestCase):
def test_offset(self):
@@ -639,6 +657,7 @@ def test_onOffset(self):
for offset, date, expected in tests:
assertOnOffset(offset, date, expected)
+
class TestBQuarterBegin(unittest.TestCase):
def test_isAnchored(self):
@@ -664,7 +683,7 @@ def test_offset(self):
datetime(2007, 7, 1): datetime(2007, 7, 2),
datetime(2007, 4, 1): datetime(2007, 4, 2),
datetime(2007, 4, 2): datetime(2007, 7, 2),
- datetime(2008, 4, 30): datetime(2008, 7, 1),}))
+ datetime(2008, 4, 30): datetime(2008, 7, 1), }))
tests.append((BQuarterBegin(startingMonth=2),
{datetime(2008, 1, 1): datetime(2008, 2, 1),
@@ -677,7 +696,7 @@ def test_offset(self):
datetime(2008, 8, 15): datetime(2008, 11, 3),
datetime(2008, 9, 15): datetime(2008, 11, 3),
datetime(2008, 11, 1): datetime(2008, 11, 3),
- datetime(2008, 4, 30): datetime(2008, 5, 1),}))
+ datetime(2008, 4, 30): datetime(2008, 5, 1), }))
tests.append((BQuarterBegin(startingMonth=1, n=0),
{datetime(2008, 1, 1): datetime(2008, 1, 1),
@@ -691,7 +710,7 @@ def test_offset(self):
datetime(2007, 4, 2): datetime(2007, 4, 2),
datetime(2007, 7, 1): datetime(2007, 7, 2),
datetime(2007, 4, 15): datetime(2007, 7, 2),
- datetime(2007, 7, 2): datetime(2007, 7, 2),}))
+ datetime(2007, 7, 2): datetime(2007, 7, 2), }))
tests.append((BQuarterBegin(startingMonth=1, n=-1),
{datetime(2008, 1, 1): datetime(2007, 10, 1),
@@ -704,7 +723,7 @@ def test_offset(self):
datetime(2007, 7, 3): datetime(2007, 7, 2),
datetime(2007, 4, 3): datetime(2007, 4, 2),
datetime(2007, 7, 2): datetime(2007, 4, 2),
- datetime(2008, 4, 1): datetime(2008, 1, 1),}))
+ datetime(2008, 4, 1): datetime(2008, 1, 1), }))
tests.append((BQuarterBegin(startingMonth=1, n=2),
{datetime(2008, 1, 1): datetime(2008, 7, 1),
@@ -713,7 +732,7 @@ def test_offset(self):
datetime(2008, 3, 15): datetime(2008, 7, 1),
datetime(2007, 3, 31): datetime(2007, 7, 2),
datetime(2007, 4, 15): datetime(2007, 10, 1),
- datetime(2008, 4, 30): datetime(2008, 10, 1),}))
+ datetime(2008, 4, 30): datetime(2008, 10, 1), }))
for offset, cases in tests:
for base, expected in cases.iteritems():
@@ -723,6 +742,7 @@ def test_offset(self):
offset = BQuarterBegin(n=-1, startingMonth=1)
self.assertEqual(datetime(2007, 4, 3) + offset, datetime(2007, 4, 2))
+
class TestBQuarterEnd(unittest.TestCase):
def test_isAnchored(self):
@@ -741,7 +761,7 @@ def test_offset(self):
datetime(2008, 3, 15): datetime(2008, 4, 30),
datetime(2008, 3, 31): datetime(2008, 4, 30),
datetime(2008, 4, 15): datetime(2008, 4, 30),
- datetime(2008, 4, 30): datetime(2008, 7, 31),}))
+ datetime(2008, 4, 30): datetime(2008, 7, 31), }))
tests.append((BQuarterEnd(startingMonth=2),
{datetime(2008, 1, 1): datetime(2008, 2, 29),
@@ -751,7 +771,7 @@ def test_offset(self):
datetime(2008, 3, 15): datetime(2008, 5, 30),
datetime(2008, 3, 31): datetime(2008, 5, 30),
datetime(2008, 4, 15): datetime(2008, 5, 30),
- datetime(2008, 4, 30): datetime(2008, 5, 30),}))
+ datetime(2008, 4, 30): datetime(2008, 5, 30), }))
tests.append((BQuarterEnd(startingMonth=1, n=0),
{datetime(2008, 1, 1): datetime(2008, 1, 31),
@@ -761,7 +781,7 @@ def test_offset(self):
datetime(2008, 3, 15): datetime(2008, 4, 30),
datetime(2008, 3, 31): datetime(2008, 4, 30),
datetime(2008, 4, 15): datetime(2008, 4, 30),
- datetime(2008, 4, 30): datetime(2008, 4, 30),}))
+ datetime(2008, 4, 30): datetime(2008, 4, 30), }))
tests.append((BQuarterEnd(startingMonth=1, n=-1),
{datetime(2008, 1, 1): datetime(2007, 10, 31),
@@ -771,7 +791,7 @@ def test_offset(self):
datetime(2008, 3, 15): datetime(2008, 1, 31),
datetime(2008, 3, 31): datetime(2008, 1, 31),
datetime(2008, 4, 15): datetime(2008, 1, 31),
- datetime(2008, 4, 30): datetime(2008, 1, 31),}))
+ datetime(2008, 4, 30): datetime(2008, 1, 31), }))
tests.append((BQuarterEnd(startingMonth=1, n=2),
{datetime(2008, 1, 31): datetime(2008, 7, 31),
@@ -780,7 +800,7 @@ def test_offset(self):
datetime(2008, 3, 15): datetime(2008, 7, 31),
datetime(2008, 3, 31): datetime(2008, 7, 31),
datetime(2008, 4, 15): datetime(2008, 7, 31),
- datetime(2008, 4, 30): datetime(2008, 10, 31),}))
+ datetime(2008, 4, 30): datetime(2008, 10, 31), }))
for offset, cases in tests:
for base, expected in cases.iteritems():
@@ -792,40 +812,40 @@ def test_offset(self):
def test_onOffset(self):
- tests = [(BQuarterEnd(1, startingMonth=1), datetime(2008, 1, 31), True),
- (BQuarterEnd(1, startingMonth=1), datetime(2007, 12, 31), False),
- (BQuarterEnd(1, startingMonth=1), datetime(2008, 2, 29), False),
- (BQuarterEnd(1, startingMonth=1), datetime(2007, 3, 30), False),
- (BQuarterEnd(1, startingMonth=1), datetime(2007, 3, 31), False),
- (BQuarterEnd(1, startingMonth=1), datetime(2008, 4, 30), True),
- (BQuarterEnd(1, startingMonth=1), datetime(2008, 5, 30), False),
- (BQuarterEnd(1, startingMonth=1), datetime(2007, 6, 29), False),
- (BQuarterEnd(1, startingMonth=1), datetime(2007, 6, 30), False),
-
- (BQuarterEnd(1, startingMonth=2), datetime(2008, 1, 31), False),
- (BQuarterEnd(1, startingMonth=2), datetime(2007, 12, 31), False),
- (BQuarterEnd(1, startingMonth=2), datetime(2008, 2, 29), True),
- (BQuarterEnd(1, startingMonth=2), datetime(2007, 3, 30), False),
- (BQuarterEnd(1, startingMonth=2), datetime(2007, 3, 31), False),
- (BQuarterEnd(1, startingMonth=2), datetime(2008, 4, 30), False),
- (BQuarterEnd(1, startingMonth=2), datetime(2008, 5, 30), True),
- (BQuarterEnd(1, startingMonth=2), datetime(2007, 6, 29), False),
- (BQuarterEnd(1, startingMonth=2), datetime(2007, 6, 30), False),
-
- (BQuarterEnd(1, startingMonth=3), datetime(2008, 1, 31), False),
- (BQuarterEnd(1, startingMonth=3), datetime(2007, 12, 31), True),
- (BQuarterEnd(1, startingMonth=3), datetime(2008, 2, 29), False),
- (BQuarterEnd(1, startingMonth=3), datetime(2007, 3, 30), True),
- (BQuarterEnd(1, startingMonth=3), datetime(2007, 3, 31), False),
- (BQuarterEnd(1, startingMonth=3), datetime(2008, 4, 30), False),
- (BQuarterEnd(1, startingMonth=3), datetime(2008, 5, 30), False),
- (BQuarterEnd(1, startingMonth=3), datetime(2007, 6, 29), True),
- (BQuarterEnd(1, startingMonth=3), datetime(2007, 6, 30), False),
- ]
+ tests = [
+ (BQuarterEnd(1, startingMonth=1), datetime(2008, 1, 31), True),
+ (BQuarterEnd(1, startingMonth=1), datetime(2007, 12, 31), False),
+ (BQuarterEnd(1, startingMonth=1), datetime(2008, 2, 29), False),
+ (BQuarterEnd(1, startingMonth=1), datetime(2007, 3, 30), False),
+ (BQuarterEnd(1, startingMonth=1), datetime(2007, 3, 31), False),
+ (BQuarterEnd(1, startingMonth=1), datetime(2008, 4, 30), True),
+ (BQuarterEnd(1, startingMonth=1), datetime(2008, 5, 30), False),
+ (BQuarterEnd(1, startingMonth=1), datetime(2007, 6, 29), False),
+ (BQuarterEnd(1, startingMonth=1), datetime(2007, 6, 30), False),
+ (BQuarterEnd(1, startingMonth=2), datetime(2008, 1, 31), False),
+ (BQuarterEnd(1, startingMonth=2), datetime(2007, 12, 31), False),
+ (BQuarterEnd(1, startingMonth=2), datetime(2008, 2, 29), True),
+ (BQuarterEnd(1, startingMonth=2), datetime(2007, 3, 30), False),
+ (BQuarterEnd(1, startingMonth=2), datetime(2007, 3, 31), False),
+ (BQuarterEnd(1, startingMonth=2), datetime(2008, 4, 30), False),
+ (BQuarterEnd(1, startingMonth=2), datetime(2008, 5, 30), True),
+ (BQuarterEnd(1, startingMonth=2), datetime(2007, 6, 29), False),
+ (BQuarterEnd(1, startingMonth=2), datetime(2007, 6, 30), False),
+ (BQuarterEnd(1, startingMonth=3), datetime(2008, 1, 31), False),
+ (BQuarterEnd(1, startingMonth=3), datetime(2007, 12, 31), True),
+ (BQuarterEnd(1, startingMonth=3), datetime(2008, 2, 29), False),
+ (BQuarterEnd(1, startingMonth=3), datetime(2007, 3, 30), True),
+ (BQuarterEnd(1, startingMonth=3), datetime(2007, 3, 31), False),
+ (BQuarterEnd(1, startingMonth=3), datetime(2008, 4, 30), False),
+ (BQuarterEnd(1, startingMonth=3), datetime(2008, 5, 30), False),
+ (BQuarterEnd(1, startingMonth=3), datetime(2007, 6, 29), True),
+ (BQuarterEnd(1, startingMonth=3), datetime(2007, 6, 30), False),
+ ]
for offset, date, expected in tests:
assertOnOffset(offset, date, expected)
+
class TestQuarterBegin(unittest.TestCase):
def test_isAnchored(self):
self.assert_(QuarterBegin(startingMonth=1).isAnchored())
@@ -843,7 +863,7 @@ def test_offset(self):
datetime(2008, 3, 15): datetime(2008, 4, 1),
datetime(2008, 3, 31): datetime(2008, 4, 1),
datetime(2008, 4, 15): datetime(2008, 7, 1),
- datetime(2008, 4, 1): datetime(2008, 7, 1),}))
+ datetime(2008, 4, 1): datetime(2008, 7, 1), }))
tests.append((QuarterBegin(startingMonth=2),
{datetime(2008, 1, 1): datetime(2008, 2, 1),
@@ -853,7 +873,7 @@ def test_offset(self):
datetime(2008, 3, 15): datetime(2008, 5, 1),
datetime(2008, 3, 31): datetime(2008, 5, 1),
datetime(2008, 4, 15): datetime(2008, 5, 1),
- datetime(2008, 4, 30): datetime(2008, 5, 1),}))
+ datetime(2008, 4, 30): datetime(2008, 5, 1), }))
tests.append((QuarterBegin(startingMonth=1, n=0),
{datetime(2008, 1, 1): datetime(2008, 1, 1),
@@ -864,7 +884,7 @@ def test_offset(self):
datetime(2008, 3, 15): datetime(2008, 4, 1),
datetime(2008, 3, 31): datetime(2008, 4, 1),
datetime(2008, 4, 15): datetime(2008, 4, 1),
- datetime(2008, 4, 30): datetime(2008, 4, 1),}))
+ datetime(2008, 4, 30): datetime(2008, 4, 1), }))
tests.append((QuarterBegin(startingMonth=1, n=-1),
{datetime(2008, 1, 1): datetime(2007, 10, 1),
@@ -884,7 +904,7 @@ def test_offset(self):
datetime(2008, 3, 15): datetime(2008, 7, 1),
datetime(2008, 3, 31): datetime(2008, 7, 1),
datetime(2008, 4, 15): datetime(2008, 10, 1),
- datetime(2008, 4, 1): datetime(2008, 10, 1),}))
+ datetime(2008, 4, 1): datetime(2008, 10, 1), }))
for offset, cases in tests:
for base, expected in cases.iteritems():
@@ -894,6 +914,7 @@ def test_offset(self):
offset = QuarterBegin(n=-1, startingMonth=1)
self.assertEqual(datetime(2010, 2, 1) + offset, datetime(2010, 1, 1))
+
class TestQuarterEnd(unittest.TestCase):
def test_isAnchored(self):
@@ -912,7 +933,7 @@ def test_offset(self):
datetime(2008, 3, 15): datetime(2008, 4, 30),
datetime(2008, 3, 31): datetime(2008, 4, 30),
datetime(2008, 4, 15): datetime(2008, 4, 30),
- datetime(2008, 4, 30): datetime(2008, 7, 31),}))
+ datetime(2008, 4, 30): datetime(2008, 7, 31), }))
tests.append((QuarterEnd(startingMonth=2),
{datetime(2008, 1, 1): datetime(2008, 2, 29),
@@ -922,7 +943,7 @@ def test_offset(self):
datetime(2008, 3, 15): datetime(2008, 5, 31),
datetime(2008, 3, 31): datetime(2008, 5, 31),
datetime(2008, 4, 15): datetime(2008, 5, 31),
- datetime(2008, 4, 30): datetime(2008, 5, 31),}))
+ datetime(2008, 4, 30): datetime(2008, 5, 31), }))
tests.append((QuarterEnd(startingMonth=1, n=0),
{datetime(2008, 1, 1): datetime(2008, 1, 31),
@@ -932,7 +953,7 @@ def test_offset(self):
datetime(2008, 3, 15): datetime(2008, 4, 30),
datetime(2008, 3, 31): datetime(2008, 4, 30),
datetime(2008, 4, 15): datetime(2008, 4, 30),
- datetime(2008, 4, 30): datetime(2008, 4, 30),}))
+ datetime(2008, 4, 30): datetime(2008, 4, 30), }))
tests.append((QuarterEnd(startingMonth=1, n=-1),
{datetime(2008, 1, 1): datetime(2007, 10, 31),
@@ -952,7 +973,7 @@ def test_offset(self):
datetime(2008, 3, 15): datetime(2008, 7, 31),
datetime(2008, 3, 31): datetime(2008, 7, 31),
datetime(2008, 4, 15): datetime(2008, 7, 31),
- datetime(2008, 4, 30): datetime(2008, 10, 31),}))
+ datetime(2008, 4, 30): datetime(2008, 10, 31), }))
for offset, cases in tests:
for base, expected in cases.iteritems():
@@ -965,42 +986,67 @@ def test_offset(self):
def test_onOffset(self):
tests = [(QuarterEnd(1, startingMonth=1), datetime(2008, 1, 31), True),
- (QuarterEnd(1, startingMonth=1), datetime(2007, 12, 31), False),
- (QuarterEnd(1, startingMonth=1), datetime(2008, 2, 29), False),
- (QuarterEnd(1, startingMonth=1), datetime(2007, 3, 30), False),
- (QuarterEnd(1, startingMonth=1), datetime(2007, 3, 31), False),
+ (QuarterEnd(
+ 1, startingMonth=1), datetime(2007, 12, 31), False),
+ (QuarterEnd(
+ 1, startingMonth=1), datetime(2008, 2, 29), False),
+ (QuarterEnd(
+ 1, startingMonth=1), datetime(2007, 3, 30), False),
+ (QuarterEnd(
+ 1, startingMonth=1), datetime(2007, 3, 31), False),
(QuarterEnd(1, startingMonth=1), datetime(2008, 4, 30), True),
- (QuarterEnd(1, startingMonth=1), datetime(2008, 5, 30), False),
- (QuarterEnd(1, startingMonth=1), datetime(2008, 5, 31), False),
- (QuarterEnd(1, startingMonth=1), datetime(2007, 6, 29), False),
- (QuarterEnd(1, startingMonth=1), datetime(2007, 6, 30), False),
-
- (QuarterEnd(1, startingMonth=2), datetime(2008, 1, 31), False),
- (QuarterEnd(1, startingMonth=2), datetime(2007, 12, 31), False),
+ (QuarterEnd(
+ 1, startingMonth=1), datetime(2008, 5, 30), False),
+ (QuarterEnd(
+ 1, startingMonth=1), datetime(2008, 5, 31), False),
+ (QuarterEnd(
+ 1, startingMonth=1), datetime(2007, 6, 29), False),
+ (QuarterEnd(
+ 1, startingMonth=1), datetime(2007, 6, 30), False),
+
+ (QuarterEnd(
+ 1, startingMonth=2), datetime(2008, 1, 31), False),
+ (QuarterEnd(
+ 1, startingMonth=2), datetime(2007, 12, 31), False),
(QuarterEnd(1, startingMonth=2), datetime(2008, 2, 29), True),
- (QuarterEnd(1, startingMonth=2), datetime(2007, 3, 30), False),
- (QuarterEnd(1, startingMonth=2), datetime(2007, 3, 31), False),
- (QuarterEnd(1, startingMonth=2), datetime(2008, 4, 30), False),
- (QuarterEnd(1, startingMonth=2), datetime(2008, 5, 30), False),
+ (QuarterEnd(
+ 1, startingMonth=2), datetime(2007, 3, 30), False),
+ (QuarterEnd(
+ 1, startingMonth=2), datetime(2007, 3, 31), False),
+ (QuarterEnd(
+ 1, startingMonth=2), datetime(2008, 4, 30), False),
+ (QuarterEnd(
+ 1, startingMonth=2), datetime(2008, 5, 30), False),
(QuarterEnd(1, startingMonth=2), datetime(2008, 5, 31), True),
- (QuarterEnd(1, startingMonth=2), datetime(2007, 6, 29), False),
- (QuarterEnd(1, startingMonth=2), datetime(2007, 6, 30), False),
-
- (QuarterEnd(1, startingMonth=3), datetime(2008, 1, 31), False),
- (QuarterEnd(1, startingMonth=3), datetime(2007, 12, 31), True),
- (QuarterEnd(1, startingMonth=3), datetime(2008, 2, 29), False),
- (QuarterEnd(1, startingMonth=3), datetime(2007, 3, 30), False),
+ (QuarterEnd(
+ 1, startingMonth=2), datetime(2007, 6, 29), False),
+ (QuarterEnd(
+ 1, startingMonth=2), datetime(2007, 6, 30), False),
+
+ (QuarterEnd(
+ 1, startingMonth=3), datetime(2008, 1, 31), False),
+ (QuarterEnd(
+ 1, startingMonth=3), datetime(2007, 12, 31), True),
+ (QuarterEnd(
+ 1, startingMonth=3), datetime(2008, 2, 29), False),
+ (QuarterEnd(
+ 1, startingMonth=3), datetime(2007, 3, 30), False),
(QuarterEnd(1, startingMonth=3), datetime(2007, 3, 31), True),
- (QuarterEnd(1, startingMonth=3), datetime(2008, 4, 30), False),
- (QuarterEnd(1, startingMonth=3), datetime(2008, 5, 30), False),
- (QuarterEnd(1, startingMonth=3), datetime(2008, 5, 31), False),
- (QuarterEnd(1, startingMonth=3), datetime(2007, 6, 29), False),
+ (QuarterEnd(
+ 1, startingMonth=3), datetime(2008, 4, 30), False),
+ (QuarterEnd(
+ 1, startingMonth=3), datetime(2008, 5, 30), False),
+ (QuarterEnd(
+ 1, startingMonth=3), datetime(2008, 5, 31), False),
+ (QuarterEnd(
+ 1, startingMonth=3), datetime(2007, 6, 29), False),
(QuarterEnd(1, startingMonth=3), datetime(2007, 6, 30), True),
- ]
+ ]
for offset, date, expected in tests:
assertOnOffset(offset, date, expected)
+
class TestBYearBegin(unittest.TestCase):
def test_misspecified(self):
@@ -1011,22 +1057,22 @@ def test_offset(self):
tests = []
tests.append((BYearBegin(),
- {datetime(2008, 1, 1): datetime(2009, 1, 1),
- datetime(2008, 6, 30): datetime(2009, 1, 1),
- datetime(2008, 12, 31): datetime(2009, 1, 1),
- datetime(2011, 1, 1) : datetime(2011, 1, 3),
- datetime(2011, 1, 3) : datetime(2012, 1, 2),
- datetime(2005, 12, 30) : datetime(2006, 1, 2),
- datetime(2005, 12, 31) : datetime(2006, 1, 2)
- }
- ))
+ {datetime(2008, 1, 1): datetime(2009, 1, 1),
+ datetime(2008, 6, 30): datetime(2009, 1, 1),
+ datetime(2008, 12, 31): datetime(2009, 1, 1),
+ datetime(2011, 1, 1): datetime(2011, 1, 3),
+ datetime(2011, 1, 3): datetime(2012, 1, 2),
+ datetime(2005, 12, 30): datetime(2006, 1, 2),
+ datetime(2005, 12, 31): datetime(2006, 1, 2)
+ }
+ ))
tests.append((BYearBegin(0),
{datetime(2008, 1, 1): datetime(2008, 1, 1),
datetime(2008, 6, 30): datetime(2009, 1, 1),
datetime(2008, 12, 31): datetime(2009, 1, 1),
datetime(2005, 12, 30): datetime(2006, 1, 2),
- datetime(2005, 12, 31): datetime(2006, 1, 2),}))
+ datetime(2005, 12, 31): datetime(2006, 1, 2), }))
tests.append((BYearBegin(-1),
{datetime(2007, 1, 1): datetime(2006, 1, 2),
@@ -1036,12 +1082,12 @@ def test_offset(self):
datetime(2008, 12, 31): datetime(2008, 1, 1),
datetime(2006, 12, 29): datetime(2006, 1, 2),
datetime(2006, 12, 30): datetime(2006, 1, 2),
- datetime(2006, 1, 1): datetime(2005, 1, 3),}))
+ datetime(2006, 1, 1): datetime(2005, 1, 3), }))
tests.append((BYearBegin(-2),
{datetime(2007, 1, 1): datetime(2005, 1, 3),
datetime(2007, 6, 30): datetime(2006, 1, 2),
- datetime(2008, 12, 31): datetime(2007, 1, 1),}))
+ datetime(2008, 12, 31): datetime(2007, 1, 1), }))
for offset, cases in tests:
for base, expected in cases.iteritems():
@@ -1061,15 +1107,14 @@ def test_offset(self):
datetime(2008, 6, 30): datetime(2009, 1, 1),
datetime(2008, 12, 31): datetime(2009, 1, 1),
datetime(2005, 12, 30): datetime(2006, 1, 1),
- datetime(2005, 12, 31): datetime(2006, 1, 1),}))
+ datetime(2005, 12, 31): datetime(2006, 1, 1), }))
tests.append((YearBegin(0),
{datetime(2008, 1, 1): datetime(2008, 1, 1),
datetime(2008, 6, 30): datetime(2009, 1, 1),
datetime(2008, 12, 31): datetime(2009, 1, 1),
datetime(2005, 12, 30): datetime(2006, 1, 1),
- datetime(2005, 12, 31): datetime(2006, 1, 1),}))
-
+ datetime(2005, 12, 31): datetime(2006, 1, 1), }))
tests.append((YearBegin(-1),
{datetime(2007, 1, 1): datetime(2006, 1, 1),
@@ -1077,18 +1122,17 @@ def test_offset(self):
datetime(2008, 12, 31): datetime(2008, 1, 1),
datetime(2006, 12, 29): datetime(2006, 1, 1),
datetime(2006, 12, 30): datetime(2006, 1, 1),
- datetime(2007, 1, 1): datetime(2006, 1, 1),}))
+ datetime(2007, 1, 1): datetime(2006, 1, 1), }))
tests.append((YearBegin(-2),
{datetime(2007, 1, 1): datetime(2005, 1, 1),
datetime(2008, 6, 30): datetime(2007, 1, 1),
- datetime(2008, 12, 31): datetime(2007, 1, 1),}))
+ datetime(2008, 12, 31): datetime(2007, 1, 1), }))
for offset, cases in tests:
for base, expected in cases.iteritems():
assertEq(offset, base, expected)
-
def test_onOffset(self):
tests = [
@@ -1101,6 +1145,7 @@ def test_onOffset(self):
for offset, date, expected in tests:
assertOnOffset(offset, date, expected)
+
class TestBYearEndLagged(unittest.TestCase):
def test_bad_month_fail(self):
@@ -1141,6 +1186,7 @@ def test_onOffset(self):
for offset, date, expected in tests:
assertOnOffset(offset, date, expected)
+
class TestBYearEnd(unittest.TestCase):
def test_offset(self):
@@ -1151,13 +1197,13 @@ def test_offset(self):
datetime(2008, 6, 30): datetime(2008, 12, 31),
datetime(2008, 12, 31): datetime(2009, 12, 31),
datetime(2005, 12, 30): datetime(2006, 12, 29),
- datetime(2005, 12, 31): datetime(2006, 12, 29),}))
+ datetime(2005, 12, 31): datetime(2006, 12, 29), }))
tests.append((BYearEnd(0),
{datetime(2008, 1, 1): datetime(2008, 12, 31),
datetime(2008, 6, 30): datetime(2008, 12, 31),
datetime(2008, 12, 31): datetime(2008, 12, 31),
- datetime(2005, 12, 31): datetime(2006, 12, 29),}))
+ datetime(2005, 12, 31): datetime(2006, 12, 29), }))
tests.append((BYearEnd(-1),
{datetime(2007, 1, 1): datetime(2006, 12, 29),
@@ -1165,12 +1211,12 @@ def test_offset(self):
datetime(2008, 12, 31): datetime(2007, 12, 31),
datetime(2006, 12, 29): datetime(2005, 12, 30),
datetime(2006, 12, 30): datetime(2006, 12, 29),
- datetime(2007, 1, 1): datetime(2006, 12, 29),}))
+ datetime(2007, 1, 1): datetime(2006, 12, 29), }))
tests.append((BYearEnd(-2),
{datetime(2007, 1, 1): datetime(2005, 12, 30),
datetime(2008, 6, 30): datetime(2006, 12, 29),
- datetime(2008, 12, 31): datetime(2006, 12, 29),}))
+ datetime(2008, 12, 31): datetime(2006, 12, 29), }))
for offset, cases in tests:
for base, expected in cases.iteritems():
@@ -1188,6 +1234,7 @@ def test_onOffset(self):
for offset, date, expected in tests:
assertOnOffset(offset, date, expected)
+
class TestYearEnd(unittest.TestCase):
def test_misspecified(self):
@@ -1201,13 +1248,13 @@ def test_offset(self):
datetime(2008, 6, 30): datetime(2008, 12, 31),
datetime(2008, 12, 31): datetime(2009, 12, 31),
datetime(2005, 12, 30): datetime(2005, 12, 31),
- datetime(2005, 12, 31): datetime(2006, 12, 31),}))
+ datetime(2005, 12, 31): datetime(2006, 12, 31), }))
tests.append((YearEnd(0),
{datetime(2008, 1, 1): datetime(2008, 12, 31),
datetime(2008, 6, 30): datetime(2008, 12, 31),
datetime(2008, 12, 31): datetime(2008, 12, 31),
- datetime(2005, 12, 30): datetime(2005, 12, 31),}))
+ datetime(2005, 12, 30): datetime(2005, 12, 31), }))
tests.append((YearEnd(-1),
{datetime(2007, 1, 1): datetime(2006, 12, 31),
@@ -1215,12 +1262,12 @@ def test_offset(self):
datetime(2008, 12, 31): datetime(2007, 12, 31),
datetime(2006, 12, 29): datetime(2005, 12, 31),
datetime(2006, 12, 30): datetime(2005, 12, 31),
- datetime(2007, 1, 1): datetime(2006, 12, 31),}))
+ datetime(2007, 1, 1): datetime(2006, 12, 31), }))
tests.append((YearEnd(-2),
{datetime(2007, 1, 1): datetime(2005, 12, 31),
datetime(2008, 6, 30): datetime(2006, 12, 31),
- datetime(2008, 12, 31): datetime(2006, 12, 31),}))
+ datetime(2008, 12, 31): datetime(2006, 12, 31), }))
for offset, cases in tests:
for base, expected in cases.iteritems():
@@ -1238,6 +1285,7 @@ def test_onOffset(self):
for offset, date, expected in tests:
assertOnOffset(offset, date, expected)
+
class TestYearEndDiffMonth(unittest.TestCase):
def test_offset(self):
@@ -1255,7 +1303,7 @@ def test_offset(self):
{datetime(2008, 1, 1): datetime(2008, 3, 31),
datetime(2008, 2, 28): datetime(2008, 3, 31),
datetime(2008, 3, 31): datetime(2008, 3, 31),
- datetime(2005, 3, 30): datetime(2005, 3, 31),}))
+ datetime(2005, 3, 30): datetime(2005, 3, 31), }))
tests.append((YearEnd(-1, month=3),
{datetime(2007, 1, 1): datetime(2006, 3, 31),
@@ -1263,12 +1311,12 @@ def test_offset(self):
datetime(2008, 3, 31): datetime(2007, 3, 31),
datetime(2006, 3, 29): datetime(2005, 3, 31),
datetime(2006, 3, 30): datetime(2005, 3, 31),
- datetime(2007, 3, 1): datetime(2006, 3, 31),}))
+ datetime(2007, 3, 1): datetime(2006, 3, 31), }))
tests.append((YearEnd(-2, month=3),
{datetime(2007, 1, 1): datetime(2005, 3, 31),
datetime(2008, 6, 30): datetime(2007, 3, 31),
- datetime(2008, 3, 31): datetime(2006, 3, 31),}))
+ datetime(2008, 3, 31): datetime(2006, 3, 31), }))
for offset, cases in tests:
for base, expected in cases.iteritems():
@@ -1286,14 +1334,16 @@ def test_onOffset(self):
for offset, date, expected in tests:
assertOnOffset(offset, date, expected)
+
def assertEq(offset, base, expected):
actual = offset + base
try:
assert actual == expected
except AssertionError:
raise AssertionError("\nExpected: %s\nActual: %s\nFor Offset: %s)"
- "\nAt Date: %s"%
- (expected, actual, offset, base))
+ "\nAt Date: %s" %
+ (expected, actual, offset, base))
+
def test_Hour():
assertEq(Hour(), datetime(2010, 1, 1), datetime(2010, 1, 1, 1))
@@ -1308,6 +1358,7 @@ def test_Hour():
assert not Hour().isAnchored()
+
def test_Minute():
assertEq(Minute(), datetime(2010, 1, 1), datetime(2010, 1, 1, 0, 1))
assertEq(Minute(-1), datetime(2010, 1, 1, 0, 1), datetime(2010, 1, 1))
@@ -1320,17 +1371,20 @@ def test_Minute():
assert not Minute().isAnchored()
+
def test_Second():
assertEq(Second(), datetime(2010, 1, 1), datetime(2010, 1, 1, 0, 0, 1))
assertEq(Second(-1), datetime(2010, 1, 1, 0, 0, 1), datetime(2010, 1, 1))
assertEq(2 * Second(), datetime(2010, 1, 1), datetime(2010, 1, 1, 0, 0, 2))
- assertEq(-1 * Second(), datetime(2010, 1, 1, 0, 0, 1), datetime(2010, 1, 1))
+ assertEq(
+ -1 * Second(), datetime(2010, 1, 1, 0, 0, 1), datetime(2010, 1, 1))
assert (Second(3) + Second(2)) == Second(5)
assert (Second(3) - Second(2)) == Second()
assert not Second().isAnchored()
+
def test_tick_offset():
assert not Day().isAnchored()
assert not Milli().isAnchored()
@@ -1353,17 +1407,19 @@ def test_compare_ticks():
assert(kls(3) == kls(3))
assert(kls(3) != kls(4))
+
def test_hasOffsetName():
assert hasOffsetName(BDay())
assert not hasOffsetName(BDay(2))
+
def test_get_offset_name():
assert_raises(Exception, get_offset_name, BDay(2))
assert get_offset_name(BDay()) == 'B'
assert get_offset_name(BMonthEnd()) == 'BM'
assert get_offset_name(Week(weekday=0)) == 'W-MON'
- assert get_offset_name(Week(weekday=1)) =='W-TUE'
+ assert get_offset_name(Week(weekday=1)) == 'W-TUE'
assert get_offset_name(Week(weekday=2)) == 'W-WED'
assert get_offset_name(Week(weekday=3)) == 'W-THU'
assert get_offset_name(Week(weekday=4)) == 'W-FRI'
@@ -1383,6 +1439,7 @@ def test_get_offset():
assert get_offset('W-FRI') == Week(weekday=4)
assert get_offset('w@Sat') == Week(weekday=5)
+
def test_parse_time_string():
(date, parsed, reso) = parse_time_string('4Q1984')
(date_lower, parsed_lower, reso_lower) = parse_time_string('4q1984')
@@ -1390,6 +1447,7 @@ def test_parse_time_string():
assert parsed == parsed_lower
assert reso == reso_lower
+
def test_get_standard_freq():
fstr = get_standard_freq('W')
assert fstr == get_standard_freq('w')
@@ -1402,6 +1460,7 @@ def test_get_standard_freq():
assert fstr == get_standard_freq('5QuarTer')
assert fstr == get_standard_freq(('q', 5))
+
def test_quarterly_dont_normalize():
date = datetime(2012, 3, 31, 5, 30)
@@ -1447,17 +1506,20 @@ def test_rule_code(self):
assert alias == _offset_map[alias].rule_code
assert alias == (_offset_map[alias] * 5).rule_code
+
def test_apply_ticks():
result = offsets.Hour(3).apply(offsets.Hour(4))
exp = offsets.Hour(7)
assert(result == exp)
+
def test_delta_to_tick():
delta = timedelta(3)
tick = offsets._delta_to_tick(delta)
assert(tick == offsets.Day(3))
+
def test_dateoffset_misc():
oset = offsets.DateOffset(months=2, days=4)
# it works
@@ -1465,6 +1527,7 @@ def test_dateoffset_misc():
assert(not offsets.DateOffset(months=2) == 2)
+
def test_freq_offsets():
off = BDay(1, offset=timedelta(0, 1800))
assert(off.freqstr == 'B+30Min')
@@ -1474,5 +1537,5 @@ def test_freq_offsets():
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index e3b51033a098c..22264a5613922 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -28,6 +28,7 @@
from pandas.util.testing import assert_series_equal, assert_almost_equal
import pandas.util.testing as tm
+
class TestPeriodProperties(TestCase):
"Test properties such as year, month, weekday, etc...."
#
@@ -305,7 +306,6 @@ def _ex(*args):
xp = _ex(2012, 1, 2)
self.assertEquals(Period('2012', freq='W').end_time, xp)
-
def test_properties_annually(self):
# Test properties on Periods with annually frequency.
a_date = Period(freq='A', year=2007)
@@ -322,7 +322,6 @@ def test_properties_quarterly(self):
assert_equal((qd + x).qyear, 2007)
assert_equal((qd + x).quarter, x + 1)
-
def test_properties_monthly(self):
# Test properties on Periods with daily frequency.
m_date = Period(freq='M', year=2007, month=1)
@@ -339,7 +338,6 @@ def test_properties_monthly(self):
assert_equal(m_ival_x.quarter, 4)
assert_equal(m_ival_x.month, x + 1)
-
def test_properties_weekly(self):
# Test properties on Periods with daily frequency.
w_date = Period(freq='WK', year=2007, month=1, day=7)
@@ -350,7 +348,6 @@ def test_properties_weekly(self):
assert_equal(w_date.week, 1)
assert_equal((w_date - 1).week, 52)
-
def test_properties_daily(self):
# Test properties on Periods with daily frequency.
b_date = Period(freq='B', year=2007, month=1, day=1)
@@ -371,7 +368,6 @@ def test_properties_daily(self):
assert_equal(d_date.weekday, 0)
assert_equal(d_date.dayofyear, 1)
-
def test_properties_hourly(self):
# Test properties on Periods with hourly frequency.
h_date = Period(freq='H', year=2007, month=1, day=1, hour=0)
@@ -385,11 +381,10 @@ def test_properties_hourly(self):
assert_equal(h_date.hour, 0)
#
-
def test_properties_minutely(self):
# Test properties on Periods with minutely frequency.
t_date = Period(freq='Min', year=2007, month=1, day=1, hour=0,
- minute=0)
+ minute=0)
#
assert_equal(t_date.quarter, 1)
assert_equal(t_date.month, 1)
@@ -399,11 +394,10 @@ def test_properties_minutely(self):
assert_equal(t_date.hour, 0)
assert_equal(t_date.minute, 0)
-
def test_properties_secondly(self):
# Test properties on Periods with secondly frequency.
s_date = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=0, second=0)
+ hour=0, minute=0, second=0)
#
assert_equal(s_date.year, 2007)
assert_equal(s_date.quarter, 1)
@@ -460,9 +454,11 @@ def test_comparisons(self):
self.assertEquals(p, p)
self.assert_(not p == 1)
+
def noWrap(item):
return item
+
class TestFreqConversion(TestCase):
"Test frequency conversion of date objects"
@@ -493,17 +489,17 @@ def test_conv_annual(self):
ival_A_to_D_start = Period(freq='D', year=2007, month=1, day=1)
ival_A_to_D_end = Period(freq='D', year=2007, month=12, day=31)
ival_A_to_H_start = Period(freq='H', year=2007, month=1, day=1,
- hour=0)
+ hour=0)
ival_A_to_H_end = Period(freq='H', year=2007, month=12, day=31,
- hour=23)
+ hour=23)
ival_A_to_T_start = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=0)
+ hour=0, minute=0)
ival_A_to_T_end = Period(freq='Min', year=2007, month=12, day=31,
- hour=23, minute=59)
+ hour=23, minute=59)
ival_A_to_S_start = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=0, second=0)
+ hour=0, minute=0, second=0)
ival_A_to_S_end = Period(freq='S', year=2007, month=12, day=31,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59)
ival_AJAN_to_D_end = Period(freq='D', year=2007, month=1, day=31)
ival_AJAN_to_D_start = Period(freq='D', year=2006, month=2, day=1)
@@ -542,7 +538,6 @@ def test_conv_annual(self):
assert_equal(ival_A.asfreq('A'), ival_A)
-
def test_conv_quarterly(self):
# frequency conversion tests: from Quarterly Frequency
@@ -562,17 +557,17 @@ def test_conv_quarterly(self):
ival_Q_to_D_start = Period(freq='D', year=2007, month=1, day=1)
ival_Q_to_D_end = Period(freq='D', year=2007, month=3, day=31)
ival_Q_to_H_start = Period(freq='H', year=2007, month=1, day=1,
- hour=0)
+ hour=0)
ival_Q_to_H_end = Period(freq='H', year=2007, month=3, day=31,
- hour=23)
+ hour=23)
ival_Q_to_T_start = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=0)
+ hour=0, minute=0)
ival_Q_to_T_end = Period(freq='Min', year=2007, month=3, day=31,
- hour=23, minute=59)
+ hour=23, minute=59)
ival_Q_to_S_start = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=0, second=0)
+ hour=0, minute=0, second=0)
ival_Q_to_S_end = Period(freq='S', year=2007, month=3, day=31,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59)
ival_QEJAN_to_D_start = Period(freq='D', year=2006, month=2, day=1)
ival_QEJAN_to_D_end = Period(freq='D', year=2006, month=4, day=30)
@@ -620,17 +615,17 @@ def test_conv_monthly(self):
ival_M_to_D_start = Period(freq='D', year=2007, month=1, day=1)
ival_M_to_D_end = Period(freq='D', year=2007, month=1, day=31)
ival_M_to_H_start = Period(freq='H', year=2007, month=1, day=1,
- hour=0)
+ hour=0)
ival_M_to_H_end = Period(freq='H', year=2007, month=1, day=31,
- hour=23)
+ hour=23)
ival_M_to_T_start = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=0)
+ hour=0, minute=0)
ival_M_to_T_end = Period(freq='Min', year=2007, month=1, day=31,
- hour=23, minute=59)
+ hour=23, minute=59)
ival_M_to_S_start = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=0, second=0)
+ hour=0, minute=0, second=0)
ival_M_to_S_end = Period(freq='S', year=2007, month=1, day=31,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59)
assert_equal(ival_M.asfreq('A'), ival_M_to_A)
assert_equal(ival_M_end_of_year.asfreq('A'), ival_M_to_A)
@@ -652,7 +647,6 @@ def test_conv_monthly(self):
assert_equal(ival_M.asfreq('M'), ival_M)
-
def test_conv_weekly(self):
# frequency conversion tests: from Weekly Frequency
@@ -695,10 +689,10 @@ def test_conv_weekly(self):
if Period(freq='D', year=2007, month=3, day=31).weekday == 6:
ival_W_to_Q_end_of_quarter = Period(freq='Q', year=2007,
- quarter=1)
+ quarter=1)
else:
ival_W_to_Q_end_of_quarter = Period(freq='Q', year=2007,
- quarter=2)
+ quarter=2)
if Period(freq='D', year=2007, month=1, day=31).weekday == 6:
ival_W_to_M_end_of_month = Period(freq='M', year=2007, month=1)
@@ -710,17 +704,17 @@ def test_conv_weekly(self):
ival_W_to_D_start = Period(freq='D', year=2007, month=1, day=1)
ival_W_to_D_end = Period(freq='D', year=2007, month=1, day=7)
ival_W_to_H_start = Period(freq='H', year=2007, month=1, day=1,
- hour=0)
+ hour=0)
ival_W_to_H_end = Period(freq='H', year=2007, month=1, day=7,
- hour=23)
+ hour=23)
ival_W_to_T_start = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=0)
+ hour=0, minute=0)
ival_W_to_T_end = Period(freq='Min', year=2007, month=1, day=7,
- hour=23, minute=59)
+ hour=23, minute=59)
ival_W_to_S_start = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=0, second=0)
+ hour=0, minute=0, second=0)
ival_W_to_S_end = Period(freq='S', year=2007, month=1, day=7,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59)
assert_equal(ival_W.asfreq('A'), ival_W_to_A)
assert_equal(ival_W_end_of_year.asfreq('A'),
@@ -762,7 +756,6 @@ def test_conv_weekly(self):
assert_equal(ival_W.asfreq('WK'), ival_W)
-
def test_conv_business(self):
# frequency conversion tests: from Business Frequency"
@@ -778,17 +771,17 @@ def test_conv_business(self):
ival_B_to_W = Period(freq='WK', year=2007, month=1, day=7)
ival_B_to_D = Period(freq='D', year=2007, month=1, day=1)
ival_B_to_H_start = Period(freq='H', year=2007, month=1, day=1,
- hour=0)
+ hour=0)
ival_B_to_H_end = Period(freq='H', year=2007, month=1, day=1,
- hour=23)
+ hour=23)
ival_B_to_T_start = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=0)
+ hour=0, minute=0)
ival_B_to_T_end = Period(freq='Min', year=2007, month=1, day=1,
- hour=23, minute=59)
+ hour=23, minute=59)
ival_B_to_S_start = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=0, second=0)
+ hour=0, minute=0, second=0)
ival_B_to_S_end = Period(freq='S', year=2007, month=1, day=1,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59)
assert_equal(ival_B.asfreq('A'), ival_B_to_A)
assert_equal(ival_B_end_of_year.asfreq('A'), ival_B_to_A)
@@ -810,7 +803,6 @@ def test_conv_business(self):
assert_equal(ival_B.asfreq('B'), ival_B)
-
def test_conv_daily(self):
# frequency conversion tests: from Business Frequency"
@@ -842,17 +834,17 @@ def test_conv_daily(self):
ival_D_to_W = Period(freq='WK', year=2007, month=1, day=7)
ival_D_to_H_start = Period(freq='H', year=2007, month=1, day=1,
- hour=0)
+ hour=0)
ival_D_to_H_end = Period(freq='H', year=2007, month=1, day=1,
- hour=23)
+ hour=23)
ival_D_to_T_start = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=0)
+ hour=0, minute=0)
ival_D_to_T_end = Period(freq='Min', year=2007, month=1, day=1,
- hour=23, minute=59)
+ hour=23, minute=59)
ival_D_to_S_start = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=0, second=0)
+ hour=0, minute=0, second=0)
ival_D_to_S_end = Period(freq='S', year=2007, month=1, day=1,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59)
assert_equal(ival_D.asfreq('A'), ival_D_to_A)
@@ -895,15 +887,15 @@ def test_conv_hourly(self):
ival_H_end_of_year = Period(freq='H', year=2007, month=12, day=31,
hour=23)
ival_H_end_of_quarter = Period(freq='H', year=2007, month=3, day=31,
- hour=23)
+ hour=23)
ival_H_end_of_month = Period(freq='H', year=2007, month=1, day=31,
- hour=23)
+ hour=23)
ival_H_end_of_week = Period(freq='H', year=2007, month=1, day=7,
hour=23)
ival_H_end_of_day = Period(freq='H', year=2007, month=1, day=1,
- hour=23)
+ hour=23)
ival_H_end_of_bus = Period(freq='H', year=2007, month=1, day=1,
- hour=23)
+ hour=23)
ival_H_to_A = Period(freq='A', year=2007)
ival_H_to_Q = Period(freq='Q', year=2007, quarter=1)
@@ -913,13 +905,13 @@ def test_conv_hourly(self):
ival_H_to_B = Period(freq='B', year=2007, month=1, day=1)
ival_H_to_T_start = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=0)
+ hour=0, minute=0)
ival_H_to_T_end = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=59)
+ hour=0, minute=59)
ival_H_to_S_start = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=0, second=0)
+ hour=0, minute=0, second=0)
ival_H_to_S_end = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=59, second=59)
+ hour=0, minute=59, second=59)
assert_equal(ival_H.asfreq('A'), ival_H_to_A)
assert_equal(ival_H_end_of_year.asfreq('A'), ival_H_to_A)
@@ -949,15 +941,15 @@ def test_conv_minutely(self):
ival_T_end_of_year = Period(freq='Min', year=2007, month=12, day=31,
hour=23, minute=59)
ival_T_end_of_quarter = Period(freq='Min', year=2007, month=3, day=31,
- hour=23, minute=59)
+ hour=23, minute=59)
ival_T_end_of_month = Period(freq='Min', year=2007, month=1, day=31,
- hour=23, minute=59)
+ hour=23, minute=59)
ival_T_end_of_week = Period(freq='Min', year=2007, month=1, day=7,
hour=23, minute=59)
ival_T_end_of_day = Period(freq='Min', year=2007, month=1, day=1,
- hour=23, minute=59)
+ hour=23, minute=59)
ival_T_end_of_bus = Period(freq='Min', year=2007, month=1, day=1,
- hour=23, minute=59)
+ hour=23, minute=59)
ival_T_end_of_hour = Period(freq='Min', year=2007, month=1, day=1,
hour=0, minute=59)
@@ -970,9 +962,9 @@ def test_conv_minutely(self):
ival_T_to_H = Period(freq='H', year=2007, month=1, day=1, hour=0)
ival_T_to_S_start = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=0, second=0)
+ hour=0, minute=0, second=0)
ival_T_to_S_end = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=0, second=59)
+ hour=0, minute=0, second=59)
assert_equal(ival_T.asfreq('A'), ival_T_to_A)
assert_equal(ival_T_end_of_year.asfreq('A'), ival_T_to_A)
@@ -1002,19 +994,19 @@ def test_conv_secondly(self):
ival_S_end_of_year = Period(freq='S', year=2007, month=12, day=31,
hour=23, minute=59, second=59)
ival_S_end_of_quarter = Period(freq='S', year=2007, month=3, day=31,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59)
ival_S_end_of_month = Period(freq='S', year=2007, month=1, day=31,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59)
ival_S_end_of_week = Period(freq='S', year=2007, month=1, day=7,
hour=23, minute=59, second=59)
ival_S_end_of_day = Period(freq='S', year=2007, month=1, day=1,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59)
ival_S_end_of_bus = Period(freq='S', year=2007, month=1, day=1,
- hour=23, minute=59, second=59)
+ hour=23, minute=59, second=59)
ival_S_end_of_hour = Period(freq='S', year=2007, month=1, day=1,
hour=0, minute=59, second=59)
ival_S_end_of_minute = Period(freq='S', year=2007, month=1, day=1,
- hour=0, minute=0, second=59)
+ hour=0, minute=0, second=59)
ival_S_to_A = Period(freq='A', year=2007)
ival_S_to_Q = Period(freq='Q', year=2007, quarter=1)
@@ -1023,9 +1015,9 @@ def test_conv_secondly(self):
ival_S_to_D = Period(freq='D', year=2007, month=1, day=1)
ival_S_to_B = Period(freq='B', year=2007, month=1, day=1)
ival_S_to_H = Period(freq='H', year=2007, month=1, day=1,
- hour=0)
+ hour=0)
ival_S_to_T = Period(freq='Min', year=2007, month=1, day=1,
- hour=0, minute=0)
+ hour=0, minute=0)
assert_equal(ival_S.asfreq('A'), ival_S_to_A)
assert_equal(ival_S_end_of_year.asfreq('A'), ival_S_to_A)
@@ -1046,6 +1038,7 @@ def test_conv_secondly(self):
assert_equal(ival_S.asfreq('S'), ival_S)
+
class TestPeriodIndex(TestCase):
def __init__(self, *args, **kwds):
TestCase.__init__(self, *args, **kwds)
@@ -1081,8 +1074,9 @@ def test_constructor_field_arrays(self):
expected = period_range('1990Q3', '2009Q2', freq='Q-DEC')
self.assert_(index.equals(expected))
- self.assertRaises(ValueError, PeriodIndex, year=years, quarter=quarters,
- freq='2Q-DEC')
+ self.assertRaises(
+ ValueError, PeriodIndex, year=years, quarter=quarters,
+ freq='2Q-DEC')
index = PeriodIndex(year=years, quarter=quarters)
self.assert_(index.equals(expected))
@@ -1223,7 +1217,8 @@ def test_sub(self):
self.assert_(result.equals(exp))
def test_periods_number_check(self):
- self.assertRaises(ValueError, period_range, '2011-1-1', '2012-1-1', 'B')
+ self.assertRaises(
+ ValueError, period_range, '2011-1-1', '2012-1-1', 'B')
def test_to_timestamp(self):
index = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
@@ -1238,7 +1233,6 @@ def test_to_timestamp(self):
result = series.to_timestamp(how='start')
self.assert_(result.index.equals(exp_index))
-
def _get_with_delta(delta, freq='A-DEC'):
return date_range(to_datetime('1/1/2001') + delta,
to_datetime('12/31/2009') + delta, freq=freq)
@@ -1476,7 +1470,8 @@ def test_constructor(self):
try:
PeriodIndex(start=start)
- raise AssertionError('Must specify periods if missing start or end')
+ raise AssertionError(
+ 'Must specify periods if missing start or end')
except ValueError:
pass
@@ -1598,7 +1593,7 @@ def test_ts_repr(self):
def test_frame_index_to_string(self):
index = PeriodIndex(['2011-1', '2011-2', '2011-3'], freq='M')
- frame = DataFrame(np.random.randn(3,4), index=index)
+ frame = DataFrame(np.random.randn(3, 4), index=index)
# it works!
frame.to_string()
@@ -1902,7 +1897,7 @@ def test_convert_array_of_periods(self):
def test_with_multi_index(self):
# #1705
- index = date_range('1/1/2012',periods=4,freq='12H')
+ index = date_range('1/1/2012', periods=4, freq='12H')
index_as_arrays = [index.to_period(freq='D'), index.hour]
s = Series([0, 1, 2, 3], index_as_arrays)
@@ -1912,7 +1907,7 @@ def test_with_multi_index(self):
self.assert_(isinstance(s.index.values[0][0], Period))
def test_to_datetime_1703(self):
- index = period_range('1/1/2012',periods=4,freq='D')
+ index = period_range('1/1/2012', periods=4, freq='D')
result = index.to_datetime()
self.assertEquals(result[0], Timestamp('1/1/2012'))
@@ -1929,10 +1924,11 @@ def test_append_concat(self):
s2 = s2.to_period()
# drops index
- result = pd.concat([s1,s2])
+ result = pd.concat([s1, s2])
self.assert_(isinstance(result.index, PeriodIndex))
self.assertEquals(result.index[0], s1.index[0])
+
def _permute(obj):
return obj.take(np.random.permutation(len(obj)))
@@ -1987,7 +1983,7 @@ def _check_freq(self, freq, base_date):
self.assert_(np.array_equal(rng.values, exp))
def test_negone_ordinals(self):
- freqs = ['A', 'M', 'Q', 'D','H', 'T', 'S']
+ freqs = ['A', 'M', 'Q', 'D', 'H', 'T', 'S']
period = Period(ordinal=-1, freq='D')
for freq in freqs:
@@ -2006,5 +2002,5 @@ def test_negone_ordinals(self):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tseries/tests/test_plotting.py b/pandas/tseries/tests/test_plotting.py
index 4b6ae0b003a95..27fa00f146cb2 100644
--- a/pandas/tseries/tests/test_plotting.py
+++ b/pandas/tseries/tests/test_plotting.py
@@ -42,14 +42,14 @@ def setUp(self):
self.period_ser = [Series(np.random.randn(len(x)), x) for x in idx]
self.period_df = [DataFrame(np.random.randn(len(x), 3), index=x,
columns=['A', 'B', 'C'])
- for x in idx]
+ for x in idx]
freq = ['S', 'T', 'H', 'D', 'W', 'M', 'Q-DEC', 'A', '1B30Min']
idx = [date_range('12/31/1999', freq=x, periods=100) for x in freq]
self.datetime_ser = [Series(np.random.randn(len(x)), x) for x in idx]
self.datetime_df = [DataFrame(np.random.randn(len(x), 3), index=x,
- columns=['A', 'B', 'C'])
- for x in idx]
+ columns=['A', 'B', 'C'])
+ for x in idx]
@slow
def test_frame_inferred(self):
@@ -103,11 +103,11 @@ def test_both_style_and_color(self):
plt.close('all')
ts = tm.makeTimeSeries()
- ts.plot(style='b-', color='#000099') #works
+ ts.plot(style='b-', color='#000099') # works
plt.close('all')
s = ts.reset_index(drop=True)
- s.plot(style='b-', color='#000099') #non-tsplot
+ s.plot(style='b-', color='#000099') # non-tsplot
@slow
def test_high_freq(self):
@@ -222,7 +222,7 @@ def test_irreg_hf(self):
diffs = Series(ax.get_lines()[0].get_xydata()[:, 0]).diff()
sec = 1. / 24 / 60 / 60
- self.assert_((np.fabs(diffs[1:] - [sec, sec*2, sec]) < 1e-8).all())
+ self.assert_((np.fabs(diffs[1:] - [sec, sec * 2, sec]) < 1e-8).all())
plt.clf()
fig.add_subplot(111)
@@ -236,7 +236,7 @@ def test_irreg_hf(self):
def test_irregular_datetime64_repr_bug(self):
import matplotlib.pyplot as plt
ser = tm.makeTimeSeries()
- ser = ser[[0,1,2,7]]
+ ser = ser[[0, 1, 2, 7]]
fig = plt.gcf()
plt.clf()
@@ -313,11 +313,11 @@ def _test(ax):
ax = ser.plot()
_test(ax)
- df = DataFrame({'a' : ser, 'b' : ser + 1})
+ df = DataFrame({'a': ser, 'b': ser + 1})
ax = df.plot()
_test(ax)
- df = DataFrame({'a' : ser, 'b' : ser + 1})
+ df = DataFrame({'a': ser, 'b': ser + 1})
axes = df.plot(subplots=True)
[_test(ax) for ax in axes]
@@ -388,7 +388,7 @@ def test_finder_monthly(self):
def test_finder_monthly_long(self):
import matplotlib.pyplot as plt
plt.close('all')
- rng = period_range('1988Q1', periods=24*12, freq='M')
+ rng = period_range('1988Q1', periods=24 * 12, freq='M')
ser = Series(np.random.randn(len(rng)), rng)
ax = ser.plot()
xaxis = ax.get_xaxis()
@@ -705,12 +705,12 @@ def test_from_weekly_resampling(self):
@slow
def test_irreg_dtypes(self):
- #date
+ # date
idx = [date(2000, 1, 1), date(2000, 1, 5), date(2000, 1, 20)]
df = DataFrame(np.random.randn(len(idx), 3), Index(idx, dtype=object))
_check_plot_works(df.plot)
- #np.datetime64
+ # np.datetime64
idx = date_range('1/1/2000', periods=10)
idx = idx[[0, 2, 5, 9]].asobject
df = DataFrame(np.random.randn(len(idx), 3), idx)
@@ -724,8 +724,8 @@ def test_time(self):
t = datetime(1, 1, 1, 3, 30, 0)
deltas = np.random.randint(1, 20, 3).cumsum()
ts = np.array([(t + timedelta(minutes=int(x))).time() for x in deltas])
- df = DataFrame({'a' : np.random.randn(len(ts)),
- 'b' : np.random.randn(len(ts))},
+ df = DataFrame({'a': np.random.randn(len(ts)),
+ 'b': np.random.randn(len(ts))},
index=ts)
ax = df.plot()
@@ -763,8 +763,8 @@ def test_time_musec(self):
deltas = np.random.randint(1, 20, 3).cumsum()
ts = np.array([(t + timedelta(microseconds=int(x))).time()
for x in deltas])
- df = DataFrame({'a' : np.random.randn(len(ts)),
- 'b' : np.random.randn(len(ts))},
+ df = DataFrame({'a': np.random.randn(len(ts)),
+ 'b': np.random.randn(len(ts))},
index=ts)
ax = df.plot()
@@ -802,7 +802,7 @@ def test_secondary_legend(self):
plt.clf()
ax = fig.add_subplot(211)
- #ts
+ # ts
df = tm.makeTimeDataFrame()
ax = df.plot(secondary_y=['A', 'B'])
leg = ax.get_legend()
@@ -843,7 +843,7 @@ def test_secondary_legend(self):
# TODO: color cycle problems
self.assert_(len(colors) == 4)
- #non-ts
+ # non-ts
df = tm.makeDataFrame()
plt.clf()
ax = fig.add_subplot(211)
@@ -915,6 +915,8 @@ def test_mpl_nopandas(self):
line2.get_xydata()[:, 0])
PNG_PATH = 'tmp.png'
+
+
def _check_plot_works(f, freq=None, series=None, *args, **kwargs):
import matplotlib.pyplot as plt
@@ -949,5 +951,5 @@ def _check_plot_works(f, freq=None, series=None, *args, **kwargs):
os.remove(PNG_PATH)
if __name__ == '__main__':
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tseries/tests/test_resample.py b/pandas/tseries/tests/test_resample.py
index af28b2e6b3256..cc45dc36d6537 100644
--- a/pandas/tseries/tests/test_resample.py
+++ b/pandas/tseries/tests/test_resample.py
@@ -22,6 +22,7 @@
bday = BDay()
+
def _skip_if_no_pytz():
try:
import pytz
@@ -31,18 +32,19 @@ def _skip_if_no_pytz():
class TestResample(unittest.TestCase):
_multiprocess_can_split_ = True
+
def setUp(self):
- dti = DatetimeIndex(start=datetime(2005,1,1),
- end=datetime(2005,1,10), freq='Min')
+ dti = DatetimeIndex(start=datetime(2005, 1, 1),
+ end=datetime(2005, 1, 10), freq='Min')
self.series = Series(np.random.rand(len(dti)), dti)
def test_custom_grouper(self):
- dti = DatetimeIndex(freq='Min', start=datetime(2005,1,1),
- end=datetime(2005,1,10))
+ dti = DatetimeIndex(freq='Min', start=datetime(2005, 1, 1),
+ end=datetime(2005, 1, 10))
- data = np.array([1]*len(dti))
+ data = np.array([1] * len(dti))
s = Series(data, index=dti)
b = TimeGrouper(Minute(5))
@@ -60,7 +62,6 @@ def test_custom_grouper(self):
for f in funcs:
g._cython_agg_general(f)
-
self.assertEquals(g.ngroups, 2593)
self.assert_(notnull(g.mean()).all())
@@ -105,8 +106,9 @@ def test_resample_basic(self):
def test_resample_basic_from_daily(self):
# from daily
- dti = DatetimeIndex(start=datetime(2005,1,1), end=datetime(2005,1,10),
- freq='D', name='index')
+ dti = DatetimeIndex(
+ start=datetime(2005, 1, 1), end=datetime(2005, 1, 10),
+ freq='D', name='index')
s = Series(np.random.rand(len(dti)), dti)
@@ -114,45 +116,45 @@ def test_resample_basic_from_daily(self):
result = s.resample('w-sun', how='last')
self.assertEquals(len(result), 3)
- self.assert_((result.index.dayofweek == [6,6,6]).all())
+ self.assert_((result.index.dayofweek == [6, 6, 6]).all())
self.assertEquals(result.irow(0), s['1/2/2005'])
self.assertEquals(result.irow(1), s['1/9/2005'])
self.assertEquals(result.irow(2), s.irow(-1))
result = s.resample('W-MON', how='last')
self.assertEquals(len(result), 2)
- self.assert_((result.index.dayofweek == [0,0]).all())
+ self.assert_((result.index.dayofweek == [0, 0]).all())
self.assertEquals(result.irow(0), s['1/3/2005'])
self.assertEquals(result.irow(1), s['1/10/2005'])
result = s.resample('W-TUE', how='last')
self.assertEquals(len(result), 2)
- self.assert_((result.index.dayofweek == [1,1]).all())
+ self.assert_((result.index.dayofweek == [1, 1]).all())
self.assertEquals(result.irow(0), s['1/4/2005'])
self.assertEquals(result.irow(1), s['1/10/2005'])
result = s.resample('W-WED', how='last')
self.assertEquals(len(result), 2)
- self.assert_((result.index.dayofweek == [2,2]).all())
+ self.assert_((result.index.dayofweek == [2, 2]).all())
self.assertEquals(result.irow(0), s['1/5/2005'])
self.assertEquals(result.irow(1), s['1/10/2005'])
result = s.resample('W-THU', how='last')
self.assertEquals(len(result), 2)
- self.assert_((result.index.dayofweek == [3,3]).all())
+ self.assert_((result.index.dayofweek == [3, 3]).all())
self.assertEquals(result.irow(0), s['1/6/2005'])
self.assertEquals(result.irow(1), s['1/10/2005'])
result = s.resample('W-FRI', how='last')
self.assertEquals(len(result), 2)
- self.assert_((result.index.dayofweek == [4,4]).all())
+ self.assert_((result.index.dayofweek == [4, 4]).all())
self.assertEquals(result.irow(0), s['1/7/2005'])
self.assertEquals(result.irow(1), s['1/10/2005'])
# to biz day
result = s.resample('B', how='last')
self.assertEquals(len(result), 7)
- self.assert_((result.index.dayofweek == [4,0,1,2,3,4,0]).all())
+ self.assert_((result.index.dayofweek == [4, 0, 1, 2, 3, 4, 0]).all())
self.assertEquals(result.irow(0), s['1/2/2005'])
self.assertEquals(result.irow(1), s['1/3/2005'])
self.assertEquals(result.irow(5), s['1/9/2005'])
@@ -189,19 +191,22 @@ def test_resample_loffset(self):
index=idx + timedelta(minutes=1))
assert_series_equal(result, expected)
- expected = s.resample('5min', how='mean', closed='right', label='right',
- loffset='1min')
+ expected = s.resample(
+ '5min', how='mean', closed='right', label='right',
+ loffset='1min')
assert_series_equal(result, expected)
- expected = s.resample('5min', how='mean', closed='right', label='right',
- loffset=Minute(1))
+ expected = s.resample(
+ '5min', how='mean', closed='right', label='right',
+ loffset=Minute(1))
assert_series_equal(result, expected)
self.assert_(result.index.freq == Minute(5))
# from daily
- dti = DatetimeIndex(start=datetime(2005,1,1), end=datetime(2005,1,10),
- freq='D')
+ dti = DatetimeIndex(
+ start=datetime(2005, 1, 1), end=datetime(2005, 1, 10),
+ freq='D')
ser = Series(np.random.rand(len(dti)), dti)
# to weekly
@@ -211,8 +216,9 @@ def test_resample_loffset(self):
def test_resample_upsample(self):
# from daily
- dti = DatetimeIndex(start=datetime(2005,1,1), end=datetime(2005,1,10),
- freq='D', name='index')
+ dti = DatetimeIndex(
+ start=datetime(2005, 1, 1), end=datetime(2005, 1, 10),
+ freq='D', name='index')
s = Series(np.random.rand(len(dti)), dti)
@@ -255,8 +261,9 @@ def test_resample_ohlc(self):
self.assertEquals(xs['close'], s[4])
def test_resample_reresample(self):
- dti = DatetimeIndex(start=datetime(2005,1,1), end=datetime(2005,1,10),
- freq='D')
+ dti = DatetimeIndex(
+ start=datetime(2005, 1, 1), end=datetime(2005, 1, 10),
+ freq='D')
s = Series(np.random.rand(len(dti)), dti)
bs = s.resample('B', closed='right', label='right')
result = bs.resample('8H')
@@ -388,7 +395,7 @@ def test_resample_anchored_ticks(self):
rng = date_range('1/1/2000 04:00:00', periods=86400, freq='s')
ts = Series(np.random.randn(len(rng)), index=rng)
- ts[:2] = np.nan # so results are the same
+ ts[:2] = np.nan # so results are the same
freqs = ['t', '5t', '15t', '30t', '4h', '12h']
for freq in freqs:
@@ -408,7 +415,7 @@ def test_resample_base(self):
def test_resample_daily_anchored(self):
rng = date_range('1/1/2000 0:00:00', periods=10000, freq='T')
ts = Series(np.random.randn(len(rng)), index=rng)
- ts[:2] = np.nan # so results are the same
+ ts[:2] = np.nan # so results are the same
result = ts[2:].resample('D', closed='left', label='left')
expected = ts.resample('D', closed='left', label='left')
@@ -417,7 +424,7 @@ def test_resample_daily_anchored(self):
def test_resample_to_period_monthly_buglet(self):
# GH #1259
- rng = date_range('1/1/2000','12/31/2000')
+ rng = date_range('1/1/2000', '12/31/2000')
ts = Series(np.random.randn(len(rng)), index=rng)
result = ts.resample('M', kind='period')
@@ -539,8 +546,8 @@ def test_resample_not_monotonic(self):
assert_series_equal(result, exp)
def test_resample_median_bug_1688(self):
- df = DataFrame([1, 2], index=[datetime(2012,1,1,0,0,0),
- datetime(2012,1,1,0,5,0)])
+ df = DataFrame([1, 2], index=[datetime(2012, 1, 1, 0, 0, 0),
+ datetime(2012, 1, 1, 0, 5, 0)])
result = df.resample("T", how=lambda x: x.mean())
exp = df.asfreq('T')
@@ -574,7 +581,7 @@ def test_resample_unequal_times(self):
# end hour is less than start
end = datetime(2012, 7, 31, 4)
bad_ind = date_range(start, end, freq="30min")
- df = DataFrame({'close':1}, index=bad_ind)
+ df = DataFrame({'close': 1}, index=bad_ind)
# it works!
df.resample('AS', 'sum')
@@ -584,6 +591,7 @@ def _simple_ts(start, end, freq='D'):
rng = date_range(start, end, freq=freq)
return Series(np.random.randn(len(rng)), index=rng)
+
def _simple_pts(start, end, freq='D'):
rng = period_range(start, end, freq=freq)
return TimeSeries(np.random.randn(len(rng)), index=rng)
@@ -687,7 +695,7 @@ def test_upsample_with_limit(self):
def test_annual_upsample(self):
ts = _simple_pts('1/1/1990', '12/31/1995', freq='A-DEC')
- df = DataFrame({'a' : ts})
+ df = DataFrame({'a': ts})
rdf = df.resample('D', fill_method='ffill')
exp = df['a'].resample('D', fill_method='ffill')
assert_series_equal(rdf['a'], exp)
@@ -866,9 +874,9 @@ def test_resample_tz_localized(self):
result = ts_local.resample('D')
# #2245
- idx = date_range('2001-09-20 15:59','2001-09-20 16:00', freq='T',
+ idx = date_range('2001-09-20 15:59', '2001-09-20 16:00', freq='T',
tz='Australia/Sydney')
- s = Series([1,2], index=idx)
+ s = Series([1, 2], index=idx)
result = s.resample('D', closed='right', label='right')
ex_index = date_range('2001-09-21', periods=1, freq='D',
@@ -890,12 +898,12 @@ def test_closed_left_corner(self):
freq='1min', periods=21))
s[0] = np.nan
- result = s.resample('10min', how='mean',closed='left', label='right')
- exp = s[1:].resample('10min', how='mean',closed='left', label='right')
+ result = s.resample('10min', how='mean', closed='left', label='right')
+ exp = s[1:].resample('10min', how='mean', closed='left', label='right')
assert_series_equal(result, exp)
- result = s.resample('10min', how='mean',closed='left', label='left')
- exp = s[1:].resample('10min', how='mean',closed='left', label='left')
+ result = s.resample('10min', how='mean', closed='left', label='left')
+ exp = s[1:].resample('10min', how='mean', closed='left', label='left')
ex_index = date_range(start='1/1/2012 9:30', freq='10min', periods=3)
@@ -1004,7 +1012,7 @@ def test_apply_iteration(self):
# #2300
N = 1000
ind = pd.date_range(start="2000-01-01", freq="D", periods=N)
- df = DataFrame({'open':1, 'close':2}, index=ind)
+ df = DataFrame({'open': 1, 'close': 2}, index=ind)
tg = TimeGrouper('M')
grouper = tg.get_grouper(df)
@@ -1020,7 +1028,7 @@ def test_apply_iteration(self):
def test_panel_aggregation(self):
ind = pd.date_range('1/1/2000', periods=100)
- data = np.random.randn(2,len(ind),4)
+ data = np.random.randn(2, len(ind), 4)
wp = pd.Panel(data, items=['Item1', 'Item2'], major_axis=ind,
minor_axis=['A', 'B', 'C', 'D'])
@@ -1037,5 +1045,5 @@ def f(x):
if __name__ == '__main__':
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 01e9197af3ac8..7ecf33d7e781a 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -43,6 +43,7 @@
class TestTimeSeriesDuplicates(unittest.TestCase):
_multiprocess_can_split_ = True
+
def setUp(self):
dates = [datetime(2000, 1, 2), datetime(2000, 1, 2),
datetime(2000, 1, 2), datetime(2000, 1, 3),
@@ -61,7 +62,7 @@ def test_is_unique_monotonic(self):
def test_index_unique(self):
uniques = self.dups.index.unique()
- self.assert_(uniques.dtype == 'M8[ns]') # sanity
+ self.assert_(uniques.dtype == 'M8[ns]') # sanity
# #2563
self.assertTrue(isinstance(uniques, DatetimeIndex))
@@ -74,7 +75,7 @@ def test_index_unique(self):
def test_index_dupes_contains(self):
d = datetime(2011, 12, 5, 20, 30)
- ix=DatetimeIndex([d,d])
+ ix = DatetimeIndex([d, d])
self.assertTrue(d in ix)
def test_duplicate_dates_indexing(self):
@@ -159,6 +160,7 @@ def test_indexing_over_size_cutoff(self):
finally:
_index._SIZE_CUTOFF = old_cutoff
+
def assert_range_equal(left, right):
assert(left.equals(right))
assert(left.freq == right.freq)
@@ -174,9 +176,10 @@ def _skip_if_no_pytz():
class TestTimeSeries(unittest.TestCase):
_multiprocess_can_split_ = True
+
def test_dti_slicing(self):
dti = DatetimeIndex(start='1/1/2005', end='12/1/2005', freq='M')
- dti2 = dti[[1,3,5]]
+ dti2 = dti[[1, 3, 5]]
v1 = dti2[0]
v2 = dti2[1]
@@ -352,14 +355,14 @@ def test_frame_fillna_limit(self):
tm.assert_frame_equal(result, expected)
def test_frame_setitem_timestamp(self):
- #2155
+ # 2155
columns = DatetimeIndex(start='1/1/2012', end='2/1/2012',
freq=datetools.bday)
index = range(10)
data = DataFrame(columns=columns, index=index)
t = datetime(2012, 11, 1)
ts = Timestamp(t)
- data[ts] = np.nan #works
+ data[ts] = np.nan # works
def test_sparse_series_fillna_limit(self):
index = np.arange(10)
@@ -485,7 +488,7 @@ def test_frame_add_datetime64_col_other_units(self):
dtype = np.dtype('M8[%s]' % unit)
vals = np.arange(n, dtype=np.int64).view(dtype)
- df = DataFrame({'ints' : np.arange(n)}, index=np.arange(n))
+ df = DataFrame({'ints': np.arange(n)}, index=np.arange(n))
df[unit] = vals
ex_vals = to_datetime(vals.astype('O'))
@@ -494,7 +497,7 @@ def test_frame_add_datetime64_col_other_units(self):
self.assert_((df[unit].values == ex_vals).all())
# Test insertion into existing datetime64 column
- df = DataFrame({'ints' : np.arange(n)}, index=np.arange(n))
+ df = DataFrame({'ints': np.arange(n)}, index=np.arange(n))
df['dates'] = np.arange(n, dtype=np.int64).view(ns_dtype)
for unit in units:
@@ -585,7 +588,6 @@ def test_fillna_nat(self):
assert_frame_equal(filled, expected)
assert_frame_equal(filled2, expected)
-
series = Series([iNaT, 0, 1, 2], dtype='M8[ns]')
filled = series.fillna(method='bfill')
@@ -662,13 +664,13 @@ def test_to_datetime_iso8601(self):
exp = Timestamp("2012-01-01 00:00:00")
self.assert_(result[0] == exp)
- result = to_datetime(['20121001']) # bad iso 8601
+ result = to_datetime(['20121001']) # bad iso 8601
exp = Timestamp('2012-10-01')
self.assert_(result[0] == exp)
def test_to_datetime_default(self):
rs = to_datetime('2001')
- xp = datetime(2001,1,1)
+ xp = datetime(2001, 1, 1)
self.assert_(rs, xp)
def test_nat_vector_field_access(self):
@@ -981,7 +983,7 @@ def test_between_time(self):
expected = ts.between_time(stime, etime)
assert_series_equal(result, expected)
- #across midnight
+ # across midnight
rng = date_range('1/1/2000', '1/5/2000', freq='5min')
ts = Series(np.random.randn(len(rng)), index=rng)
stime = time(22, 0)
@@ -1041,7 +1043,7 @@ def test_between_time_frame(self):
expected = ts.between_time(stime, etime)
assert_frame_equal(result, expected)
- #across midnight
+ # across midnight
rng = date_range('1/1/2000', '1/5/2000', freq='5min')
ts = DataFrame(np.random.randn(len(rng), 2), index=rng)
stime = time(22, 0)
@@ -1213,8 +1215,8 @@ def test_astype_object(self):
def test_catch_infinite_loop(self):
offset = datetools.DateOffset(minute=5)
# blow up, don't loop forever
- self.assertRaises(Exception, date_range, datetime(2011,11,11),
- datetime(2011,11,12), freq=offset)
+ self.assertRaises(Exception, date_range, datetime(2011, 11, 11),
+ datetime(2011, 11, 12), freq=offset)
def test_append_concat(self):
rng = date_range('5/8/2012 1:45', periods=10, freq='5T')
@@ -1244,9 +1246,9 @@ def test_append_concat(self):
def test_set_dataframe_column_ns_dtype(self):
x = DataFrame([datetime.now(), datetime.now()])
- #self.assert_(x[0].dtype == object)
+ # self.assert_(x[0].dtype == object)
- #x[0] = to_datetime(x[0])
+ # x[0] = to_datetime(x[0])
self.assert_(x[0].dtype == np.dtype('M8[ns]'))
def test_groupby_count_dateparseerror(self):
@@ -1298,9 +1300,9 @@ def test_series_interpolate_method_values(self):
def test_frame_datetime64_handling_groupby(self):
# it works!
- df = DataFrame([(3,np.datetime64('2012-07-03')),
- (3,np.datetime64('2012-07-04'))],
- columns = ['a', 'date'])
+ df = DataFrame([(3, np.datetime64('2012-07-03')),
+ (3, np.datetime64('2012-07-04'))],
+ columns=['a', 'date'])
result = df.groupby('a').first()
self.assertEqual(result['date'][3], np.datetime64('2012-07-03'))
@@ -1388,6 +1390,7 @@ def test_to_csv_numpy_16_bug(self):
result = buf.getvalue()
self.assert_('2000-01-01' in result)
+
def _simple_ts(start, end, freq='D'):
rng = date_range(start, end, freq=freq)
return Series(np.random.randn(len(rng)), index=rng)
@@ -1395,6 +1398,7 @@ def _simple_ts(start, end, freq='D'):
class TestDatetimeIndex(unittest.TestCase):
_multiprocess_can_split_ = True
+
def test_append_join_nondatetimeindex(self):
rng = date_range('1/1/2000', periods=10)
idx = Index(['a', 'b', 'c', 'd'])
@@ -1420,13 +1424,12 @@ def test_to_period_nofreq(self):
idx.to_period()
def test_000constructor_resolution(self):
- #2252
- t1 = Timestamp((1352934390*1000000000)+1000000+1000+1)
+ # 2252
+ t1 = Timestamp((1352934390 * 1000000000) + 1000000 + 1000 + 1)
idx = DatetimeIndex([t1])
self.assert_(idx.nanosecond[0] == t1.nanosecond)
-
def test_constructor_coverage(self):
rng = date_range('1/1/2000', periods=10.5)
exp = date_range('1/1/2000', periods=10)
@@ -1601,7 +1604,7 @@ def test_map_bug_1677(self):
def test_groupby_function_tuple_1677(self):
df = DataFrame(np.random.rand(100),
index=date_range("1/1/2000", periods=100))
- monthly_group = df.groupby(lambda x: (x.year,x.month))
+ monthly_group = df.groupby(lambda x: (x.year, x.month))
result = monthly_group.mean()
self.assert_(isinstance(result.index[0], tuple))
@@ -1636,11 +1639,13 @@ def test_union(self):
def test_union_with_DatetimeIndex(self):
i1 = Int64Index(np.arange(0, 20, 2))
i2 = DatetimeIndex(start='2012-01-03 00:00:00', periods=10, freq='D')
- i1.union(i2) # Works
- i2.union(i1) # Fails with "AttributeError: can't set attribute"
+ i1.union(i2) # Works
+ i2.union(i1) # Fails with "AttributeError: can't set attribute"
+
class TestLegacySupport(unittest.TestCase):
_multiprocess_can_split_ = True
+
@classmethod
def setUpClass(cls):
if py3compat.PY3:
@@ -1843,7 +1848,7 @@ def test_rule_aliases(self):
self.assert_(rule == datetools.Micro(10))
def test_slice_year(self):
- dti = DatetimeIndex(freq='B', start=datetime(2005,1,1), periods=500)
+ dti = DatetimeIndex(freq='B', start=datetime(2005, 1, 1), periods=500)
s = Series(np.arange(len(dti)), index=dti)
result = s['2005']
@@ -1862,7 +1867,7 @@ def test_slice_year(self):
self.assert_(result == expected)
def test_slice_quarter(self):
- dti = DatetimeIndex(freq='D', start=datetime(2000,6,1), periods=500)
+ dti = DatetimeIndex(freq='D', start=datetime(2000, 6, 1), periods=500)
s = Series(np.arange(len(dti)), index=dti)
self.assertEquals(len(s['2001Q1']), 90)
@@ -1871,7 +1876,7 @@ def test_slice_quarter(self):
self.assertEquals(len(df.ix['1Q01']), 90)
def test_slice_month(self):
- dti = DatetimeIndex(freq='D', start=datetime(2005,1,1), periods=500)
+ dti = DatetimeIndex(freq='D', start=datetime(2005, 1, 1), periods=500)
s = Series(np.arange(len(dti)), index=dti)
self.assertEquals(len(s['2005-11']), 30)
@@ -1881,7 +1886,7 @@ def test_slice_month(self):
assert_series_equal(s['2005-11'], s['11-2005'])
def test_partial_slice(self):
- rng = DatetimeIndex(freq='D', start=datetime(2005,1,1), periods=500)
+ rng = DatetimeIndex(freq='D', start=datetime(2005, 1, 1), periods=500)
s = Series(np.arange(len(rng)), index=rng)
result = s['2005-05':'2006-02']
@@ -1902,7 +1907,7 @@ def test_partial_slice(self):
self.assertRaises(Exception, s.__getitem__, '2004-12-31')
def test_partial_slice_daily(self):
- rng = DatetimeIndex(freq='H', start=datetime(2005,1,31), periods=500)
+ rng = DatetimeIndex(freq='H', start=datetime(2005, 1, 31), periods=500)
s = Series(np.arange(len(rng)), index=rng)
result = s['2005-1-31']
@@ -1911,12 +1916,12 @@ def test_partial_slice_daily(self):
self.assertRaises(Exception, s.__getitem__, '2004-12-31 00')
def test_partial_slice_hourly(self):
- rng = DatetimeIndex(freq='T', start=datetime(2005,1,1,20,0,0),
+ rng = DatetimeIndex(freq='T', start=datetime(2005, 1, 1, 20, 0, 0),
periods=500)
s = Series(np.arange(len(rng)), index=rng)
result = s['2005-1-1']
- assert_series_equal(result, s.ix[:60*4])
+ assert_series_equal(result, s.ix[:60 * 4])
result = s['2005-1-1 20']
assert_series_equal(result, s.ix[:60])
@@ -1925,7 +1930,7 @@ def test_partial_slice_hourly(self):
self.assertRaises(Exception, s.__getitem__, '2004-12-31 00:15')
def test_partial_slice_minutely(self):
- rng = DatetimeIndex(freq='S', start=datetime(2005,1,1,23,59,0),
+ rng = DatetimeIndex(freq='S', start=datetime(2005, 1, 1, 23, 59, 0),
periods=500)
s = Series(np.arange(len(rng)), index=rng)
@@ -1939,7 +1944,7 @@ def test_partial_slice_minutely(self):
self.assertRaises(Exception, s.__getitem__, '2004-12-31 00:00:00')
def test_partial_not_monotonic(self):
- rng = date_range(datetime(2005,1,1), periods=20, freq='M')
+ rng = date_range(datetime(2005, 1, 1), periods=20, freq='M')
ts = Series(np.arange(len(rng)), index=rng)
ts = ts.take(np.random.permutation(20))
@@ -1957,7 +1962,8 @@ def test_date_range_normalize(self):
self.assert_(np.array_equal(rng, values))
- rng = date_range('1/1/2000 08:15', periods=n, normalize=False, freq='B')
+ rng = date_range(
+ '1/1/2000 08:15', periods=n, normalize=False, freq='B')
the_time = time(8, 15)
for val in rng:
self.assert_(val.time() == the_time)
@@ -2029,9 +2035,9 @@ def test_min_max(self):
def test_min_max_series(self):
rng = date_range('1/1/2000', periods=10, freq='4h')
- lvls = ['A','A','A','B','B','B','C','C','C','C']
- df = DataFrame({'TS': rng, 'V' : np.random.randn(len(rng)),
- 'L' : lvls})
+ lvls = ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C', 'C']
+ df = DataFrame({'TS': rng, 'V': np.random.randn(len(rng)),
+ 'L': lvls})
result = df.TS.max()
exp = Timestamp(df.TS.iget(-1))
@@ -2044,8 +2050,8 @@ def test_min_max_series(self):
self.assertEqual(result, exp)
def test_from_M8_structured(self):
- dates = [ (datetime(2012, 9, 9, 0, 0),
- datetime(2012, 9, 8, 15, 10))]
+ dates = [(datetime(2012, 9, 9, 0, 0),
+ datetime(2012, 9, 8, 15, 10))]
arr = np.array(dates,
dtype=[('Date', 'M8[us]'), ('Forecasting', 'M8[us]')])
df = DataFrame(arr)
@@ -2074,10 +2080,10 @@ def test_get_level_values_box(self):
def test_frame_apply_dont_convert_datetime64(self):
from pandas.tseries.offsets import BDay
- df = DataFrame({'x1': [datetime(1996,1,1)]})
+ df = DataFrame({'x1': [datetime(1996, 1, 1)]})
- df = df.applymap(lambda x: x+BDay())
- df = df.applymap(lambda x: x+BDay())
+ df = df.applymap(lambda x: x + BDay())
+ df = df.applymap(lambda x: x + BDay())
self.assertTrue(df.x1.dtype == 'M8[ns]')
@@ -2128,14 +2134,14 @@ class TestDatetime64(unittest.TestCase):
Also test support for datetime64[ns] in Series / DataFrame
"""
-
def setUp(self):
- dti = DatetimeIndex(start=datetime(2005,1,1),
- end=datetime(2005,1,10), freq='Min')
+ dti = DatetimeIndex(start=datetime(2005, 1, 1),
+ end=datetime(2005, 1, 10), freq='Min')
self.series = Series(rand(len(dti)), dti)
def test_datetimeindex_accessors(self):
- dti = DatetimeIndex(freq='Q-JAN', start=datetime(1997,12,31), periods=100)
+ dti = DatetimeIndex(
+ freq='Q-JAN', start=datetime(1997, 12, 31), periods=100)
self.assertEquals(dti.year[0], 1998)
self.assertEquals(dti.month[0], 1)
@@ -2173,31 +2179,31 @@ def test_nanosecond_field(self):
self.assert_(np.array_equal(dti.nanosecond, np.arange(10)))
def test_datetimeindex_diff(self):
- dti1 = DatetimeIndex(freq='Q-JAN', start=datetime(1997,12,31),
+ dti1 = DatetimeIndex(freq='Q-JAN', start=datetime(1997, 12, 31),
periods=100)
- dti2 = DatetimeIndex(freq='Q-JAN', start=datetime(1997,12,31),
+ dti2 = DatetimeIndex(freq='Q-JAN', start=datetime(1997, 12, 31),
periods=98)
- self.assert_( len(dti1.diff(dti2)) == 2)
+ self.assert_(len(dti1.diff(dti2)) == 2)
def test_fancy_getitem(self):
- dti = DatetimeIndex(freq='WOM-1FRI', start=datetime(2005,1,1),
- end=datetime(2010,1,1))
+ dti = DatetimeIndex(freq='WOM-1FRI', start=datetime(2005, 1, 1),
+ end=datetime(2010, 1, 1))
s = Series(np.arange(len(dti)), index=dti)
self.assertEquals(s[48], 48)
self.assertEquals(s['1/2/2009'], 48)
self.assertEquals(s['2009-1-2'], 48)
- self.assertEquals(s[datetime(2009,1,2)], 48)
- self.assertEquals(s[lib.Timestamp(datetime(2009,1,2))], 48)
+ self.assertEquals(s[datetime(2009, 1, 2)], 48)
+ self.assertEquals(s[lib.Timestamp(datetime(2009, 1, 2))], 48)
self.assertRaises(KeyError, s.__getitem__, '2009-1-3')
assert_series_equal(s['3/6/2009':'2009-06-05'],
- s[datetime(2009,3,6):datetime(2009,6,5)])
+ s[datetime(2009, 3, 6):datetime(2009, 6, 5)])
def test_fancy_setitem(self):
- dti = DatetimeIndex(freq='WOM-1FRI', start=datetime(2005,1,1),
- end=datetime(2010,1,1))
+ dti = DatetimeIndex(freq='WOM-1FRI', start=datetime(2005, 1, 1),
+ end=datetime(2010, 1, 1))
s = Series(np.arange(len(dti)), index=dti)
s[48] = -1
@@ -2214,10 +2220,10 @@ def test_datetimeindex_constructor(self):
arr = ['1/1/2005', '1/2/2005', '1/3/2005', '2005-01-04']
idx1 = DatetimeIndex(arr)
- arr = [datetime(2005,1,1), '1/2/2005', '1/3/2005', '2005-01-04']
+ arr = [datetime(2005, 1, 1), '1/2/2005', '1/3/2005', '2005-01-04']
idx2 = DatetimeIndex(arr)
- arr = [lib.Timestamp(datetime(2005,1,1)), '1/2/2005', '1/3/2005',
+ arr = [lib.Timestamp(datetime(2005, 1, 1)), '1/2/2005', '1/3/2005',
'2005-01-04']
idx3 = DatetimeIndex(arr)
@@ -2228,7 +2234,8 @@ def test_datetimeindex_constructor(self):
arr = to_datetime(['1/1/2005', '1/2/2005', '1/3/2005', '2005-01-04'])
idx5 = DatetimeIndex(arr)
- arr = to_datetime(['1/1/2005', '1/2/2005', 'Jan 3, 2005', '2005-01-04'])
+ arr = to_datetime(
+ ['1/1/2005', '1/2/2005', 'Jan 3, 2005', '2005-01-04'])
idx6 = DatetimeIndex(arr)
idx7 = DatetimeIndex(['12/05/2007', '25/01/2008'], dayfirst=True)
@@ -2237,7 +2244,7 @@ def test_datetimeindex_constructor(self):
self.assert_(idx7.equals(idx8))
for other in [idx2, idx3, idx4, idx5, idx6]:
- self.assert_( (idx1.values == other.values).all() )
+ self.assert_((idx1.values == other.values).all())
sdate = datetime(1999, 12, 25)
edate = datetime(2000, 1, 1)
@@ -2276,17 +2283,17 @@ def test_dti_snap(self):
res = dti.snap(freq='W-MON')
exp = date_range('12/31/2001', '1/7/2002', freq='w-mon')
exp = exp.repeat([3, 4])
- self.assert_( (res == exp).all() )
+ self.assert_((res == exp).all())
res = dti.snap(freq='B')
exp = date_range('1/1/2002', '1/7/2002', freq='b')
exp = exp.repeat([1, 1, 1, 2, 2])
- self.assert_( (res == exp).all() )
+ self.assert_((res == exp).all())
def test_dti_reset_index_round_trip(self):
dti = DatetimeIndex(start='1/1/2001', end='6/1/2001', freq='D')
- d1 = DataFrame({'v' : np.random.rand(len(dti))}, index=dti)
+ d1 = DataFrame({'v': np.random.rand(len(dti))}, index=dti)
d2 = d1.reset_index()
self.assert_(d2.dtypes[0] == np.dtype('M8[ns]'))
d3 = d2.set_index('index')
@@ -2333,7 +2340,7 @@ def test_slice_locs_indexerror(self):
times = [datetime(2000, 1, 1) + timedelta(minutes=i * 10)
for i in range(100000)]
s = Series(range(100000), times)
- s.ix[datetime(1900,1,1):datetime(2100,1,1)]
+ s.ix[datetime(1900, 1, 1):datetime(2100, 1, 1)]
class TestSeriesDatetime64(unittest.TestCase):
@@ -2402,13 +2409,13 @@ def test_intercept_astype_object(self):
assert_series_equal(result2, expected)
df = DataFrame({'a': self.series,
- 'b' : np.random.randn(len(self.series))})
+ 'b': np.random.randn(len(self.series))})
result = df.values.squeeze()
self.assert_((result[:, 0] == expected.values).all())
df = DataFrame({'a': self.series,
- 'b' : ['foo'] * len(self.series)})
+ 'b': ['foo'] * len(self.series)})
result = df.values.squeeze()
self.assert_((result[:, 0] == expected.values).all())
@@ -2419,7 +2426,7 @@ def test_union(self):
rng2 = date_range('1/1/1980', '12/1/2001', freq='MS')
s2 = Series(np.random.randn(len(rng2)), rng2)
- df = DataFrame({'s1' : s1, 's2' : s2})
+ df = DataFrame({'s1': s1, 's2': s2})
self.assert_(df.index.values.dtype == np.dtype('M8[ns]'))
def test_intersection(self):
@@ -2432,7 +2439,7 @@ def test_intersection(self):
result = rng.intersection(rng2)
self.assert_(result.equals(rng))
- #empty same freq GH2129
+ # empty same freq GH2129
rng = date_range('6/1/2000', '6/15/2000', freq='T')
result = rng[0:0].intersection(rng)
self.assert_(len(result) == 0)
@@ -2458,6 +2465,7 @@ def test_string_index_series_name_converted(self):
result = df.T['1/3/2000']
self.assertEquals(result.name, df.index[2])
+
class TestTimestamp(unittest.TestCase):
def test_basics_nanos(self):
@@ -2513,7 +2521,7 @@ def test_cant_compare_tz_naive_w_aware(self):
self.assertRaises(Exception, b.__lt__, a)
self.assertRaises(Exception, b.__gt__, a)
- if sys.version_info < (3,3):
+ if sys.version_info < (3, 3):
self.assertRaises(Exception, a.__eq__, b.to_pydatetime())
self.assertRaises(Exception, a.to_pydatetime().__eq__, b)
else:
@@ -2554,10 +2562,10 @@ def test_frequency_misc(self):
self.assertEquals(result, 'H')
def test_hash_equivalent(self):
- d = {datetime(2011, 1, 1) : 5}
+ d = {datetime(2011, 1, 1): 5}
stamp = Timestamp(datetime(2011, 1, 1))
self.assertEquals(d[stamp], 5)
if __name__ == '__main__':
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tseries/tests/test_timezones.py b/pandas/tseries/tests/test_timezones.py
index 58c6b68e3f357..afba12dd02d09 100644
--- a/pandas/tseries/tests/test_timezones.py
+++ b/pandas/tseries/tests/test_timezones.py
@@ -43,11 +43,12 @@ def _skip_if_no_pytz():
except ImportError:
pass
+
class FixedOffset(tzinfo):
"""Fixed offset in minutes east from UTC."""
def __init__(self, offset, name):
- self.__offset = timedelta(minutes = offset)
+ self.__offset = timedelta(minutes=offset)
self.__name = name
def utcoffset(self, dt):
@@ -62,8 +63,10 @@ def dst(self, dt):
fixed_off = FixedOffset(-420, '-07:00')
fixed_off_no_name = FixedOffset(-330, None)
+
class TestTimeZoneSupport(unittest.TestCase):
_multiprocess_can_split_ = True
+
def setUp(self):
_skip_if_no_pytz()
@@ -100,7 +103,7 @@ def test_timestamp_tz_localize(self):
self.assertEquals(result, expected)
def test_timestamp_to_datetime_tzoffset(self):
- #tzoffset
+ # tzoffset
from dateutil.tz import tzoffset
tzinfo = tzoffset(None, 7200)
expected = Timestamp('3/11/2012 04:00', tz=tzinfo)
@@ -143,7 +146,8 @@ def test_tz_localize_dti(self):
dti = DatetimeIndex(start='3/13/2011 1:59', end='3/13/2011 2:00',
freq='L')
- self.assertRaises(pytz.NonExistentTimeError, dti.tz_localize, 'US/Eastern')
+ self.assertRaises(
+ pytz.NonExistentTimeError, dti.tz_localize, 'US/Eastern')
def test_tz_localize_empty_series(self):
# #2248
@@ -167,7 +171,8 @@ def test_create_with_tz(self):
stamp = Timestamp('3/11/2012 05:00', tz='US/Eastern')
self.assertEquals(stamp.hour, 5)
- rng = date_range('3/11/2012 04:00', periods=10, freq='H', tz='US/Eastern')
+ rng = date_range(
+ '3/11/2012 04:00', periods=10, freq='H', tz='US/Eastern')
self.assertEquals(stamp, rng[1])
@@ -188,7 +193,8 @@ def test_create_with_fixed_tz(self):
rng2 = date_range(start, periods=len(rng), tz=off)
self.assert_(rng.equals(rng2))
- rng3 = date_range('3/11/2012 05:00:00+07:00', '6/11/2012 05:00:00+07:00')
+ rng3 = date_range(
+ '3/11/2012 05:00:00+07:00', '6/11/2012 05:00:00+07:00')
self.assert_((rng.values == rng3.values).all())
def test_create_with_fixedoffset_noname(self):
@@ -202,7 +208,8 @@ def test_create_with_fixedoffset_noname(self):
self.assertEqual(off, idx.tz)
def test_date_range_localize(self):
- rng = date_range('3/11/2012 03:00', periods=15, freq='H', tz='US/Eastern')
+ rng = date_range(
+ '3/11/2012 03:00', periods=15, freq='H', tz='US/Eastern')
rng2 = DatetimeIndex(['3/11/2012 03:00', '3/11/2012 04:00'],
tz='US/Eastern')
rng3 = date_range('3/11/2012 03:00', periods=15, freq='H')
@@ -220,7 +227,8 @@ def test_date_range_localize(self):
self.assert_(rng[:2].equals(rng2))
# Right before the DST transition
- rng = date_range('3/11/2012 00:00', periods=2, freq='H', tz='US/Eastern')
+ rng = date_range(
+ '3/11/2012 00:00', periods=2, freq='H', tz='US/Eastern')
rng2 = DatetimeIndex(['3/11/2012 00:00', '3/11/2012 01:00'],
tz='US/Eastern')
self.assert_(rng.equals(rng2))
@@ -461,7 +469,7 @@ def test_to_datetime_tzlocal(self):
from dateutil.parser import parse
from dateutil.tz import tzlocal
dt = parse('2012-06-13T01:39:00Z')
- dt = dt.replace(tzinfo = tzlocal())
+ dt = dt.replace(tzinfo=tzlocal())
arr = np.array([dt], dtype=object)
@@ -481,7 +489,8 @@ def test_frame_no_datetime64_dtype(self):
def test_hongkong_tz_convert(self):
# #1673
- dr = date_range('2012-01-01','2012-01-10',freq = 'D', tz = 'Hongkong')
+ dr = date_range(
+ '2012-01-01', '2012-01-10', freq='D', tz='Hongkong')
# it works!
dr.hour
@@ -502,7 +511,8 @@ def test_shift_localized(self):
self.assert_(result.tz == dr_tz.tz)
def test_tz_aware_asfreq(self):
- dr = date_range('2011-12-01','2012-07-20',freq = 'D', tz = 'US/Eastern')
+ dr = date_range(
+ '2011-12-01', '2012-07-20', freq='D', tz='US/Eastern')
s = Series(np.random.randn(len(dr)), index=dr)
@@ -543,7 +553,7 @@ def test_convert_datetime_list(self):
def test_frame_from_records_utc(self):
rec = {'datum': 1.5,
- 'begin_time' : datetime(2006, 4, 27, tzinfo=pytz.utc)}
+ 'begin_time': datetime(2006, 4, 27, tzinfo=pytz.utc)}
# it works
DataFrame.from_records([rec], index='begin_time')
@@ -575,13 +585,14 @@ def test_getitem_pydatetime_tz(self):
tz='Europe/Berlin')
ts = Series(index=index, data=index.hour)
time_pandas = Timestamp('2012-12-24 17:00', tz='Europe/Berlin')
- time_datetime = datetime(2012,12,24,17,00,
+ time_datetime = datetime(2012, 12, 24, 17, 00,
tzinfo=pytz.timezone('Europe/Berlin'))
self.assertEqual(ts[time_pandas], ts[time_datetime])
class TestTimeZones(unittest.TestCase):
_multiprocess_can_split_ = True
+
def setUp(self):
_skip_if_no_pytz()
@@ -672,13 +683,13 @@ def test_join_aware(self):
self.assertRaises(Exception, ts.__add__, ts_utc)
self.assertRaises(Exception, ts_utc.__add__, ts)
- test1 = DataFrame(np.zeros((6,3)),
+ test1 = DataFrame(np.zeros((6, 3)),
index=date_range("2012-11-15 00:00:00", periods=6,
freq="100L", tz="US/Central"))
- test2 = DataFrame(np.zeros((3,3)),
+ test2 = DataFrame(np.zeros((3, 3)),
index=date_range("2012-11-15 00:00:00", periods=3,
freq="250L", tz="US/Central"),
- columns=range(3,6))
+ columns=range(3, 6))
result = test1.join(test2, how='outer')
ex_index = test1.index.union(test2.index)
@@ -691,7 +702,7 @@ def test_join_aware(self):
freq="H", tz="US/Central")
rng2 = date_range("2012-11-15 12:00:00", periods=6,
- freq="H", tz="US/Eastern")
+ freq="H", tz="US/Eastern")
result = rng.union(rng2)
self.assertTrue(result.tz.zone == 'UTC')
@@ -742,9 +753,9 @@ def test_append_aware_naive(self):
ts2 = Series(np.random.randn(len(rng2)), index=rng2)
ts_result = ts1.append(ts2)
self.assert_(ts_result.index.equals(
- ts1.index.asobject.append(ts2.index.asobject)))
+ ts1.index.asobject.append(ts2.index.asobject)))
- #mixed
+ # mixed
rng1 = date_range('1/1/2011 01:00', periods=1, freq='H')
rng2 = range(100)
@@ -752,7 +763,7 @@ def test_append_aware_naive(self):
ts2 = Series(np.random.randn(len(rng2)), index=rng2)
ts_result = ts1.append(ts2)
self.assert_(ts_result.index.equals(
- ts1.index.asobject.append(ts2.index)))
+ ts1.index.asobject.append(ts2.index)))
def test_equal_join_ensure_utc(self):
rng = date_range('1/1/2011', periods=10, freq='H', tz='US/Eastern')
@@ -860,5 +871,5 @@ def test_normalize_tz(self):
self.assert_(not rng.is_normalized)
if __name__ == '__main__':
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
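The `FixedOffset` helper touched above is the standard recipe for a constant-offset `tzinfo`. A minimal standalone sketch of the same idea (the class and variable names mirror the test module, but this is an illustrative reconstruction, not its exact code):

```python
from datetime import datetime, timedelta, tzinfo


class FixedOffset(tzinfo):
    """Fixed offset in minutes east from UTC."""

    def __init__(self, offset, name):
        self.__offset = timedelta(minutes=offset)
        self.__name = name

    def utcoffset(self, dt):
        return self.__offset

    def tzname(self, dt):
        return self.__name

    def dst(self, dt):
        # a fixed offset never applies a daylight-saving adjustment
        return timedelta(0)


fixed_off = FixedOffset(-420, '-07:00')
stamp = datetime(2012, 3, 11, 4, 0, tzinfo=fixed_off)
```

Here `stamp.utcoffset()` yields `timedelta(minutes=-420)`, which is what the fixed-offset tests above rely on.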
diff --git a/pandas/tseries/tests/test_util.py b/pandas/tseries/tests/test_util.py
index 997400bf73b91..09dad264b7ae0 100644
--- a/pandas/tseries/tests/test_util.py
+++ b/pandas/tseries/tests/test_util.py
@@ -12,6 +12,7 @@
from pandas.tseries.tools import normalize_date
from pandas.tseries.util import pivot_annual, isleapyear
+
class TestPivotAnnual(unittest.TestCase):
"""
New pandas of scikits.timeseries pivot_annual
@@ -38,7 +39,8 @@ def test_daily(self):
tm.assert_series_equal(annual[day].dropna(), leaps)
def test_hourly(self):
- rng_hourly = date_range('1/1/1994', periods=(18* 8760 + 4*24), freq='H')
+ rng_hourly = date_range(
+ '1/1/1994', periods=(18 * 8760 + 4 * 24), freq='H')
data_hourly = np.random.randint(100, 350, rng_hourly.size)
ts_hourly = Series(data_hourly, index=rng_hourly)
@@ -101,5 +103,5 @@ def test_normalize_date():
assert(result == datetime(2012, 9, 7))
if __name__ == '__main__':
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py
index b724e5eb195ab..671769138d21e 100644
--- a/pandas/tseries/tools.py
+++ b/pandas/tseries/tools.py
@@ -16,7 +16,7 @@
# raise exception if dateutil 2.0 install on 2.x platform
if (sys.version_info[0] == 2 and
- dateutil.__version__ == '2.0'): # pragma: no cover
+ dateutil.__version__ == '2.0'): # pragma: no cover
raise Exception('dateutil 2.0 incompatible with Python 2.x, you must '
'install version 1.5 or 2.1+!')
except ImportError: # pragma: no cover
@@ -116,7 +116,7 @@ def _convert_f(arg):
try:
if not arg:
return arg
- default = datetime(1,1,1)
+ default = datetime(1, 1, 1)
return parse(arg, dayfirst=dayfirst, default=default)
except Exception:
if errors == 'raise':
@@ -167,7 +167,7 @@ def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None):
arg = arg.upper()
default = datetime(1, 1, 1).replace(hour=0, minute=0,
- second=0, microsecond=0)
+ second=0, microsecond=0)
# special handling for possibilities eg, 2Q2005, 2Q05, 2005Q1, 05Q1
if len(arg) in [4, 6]:
@@ -240,6 +240,7 @@ def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None):
return parsed, parsed, reso # datetime, resolution
+
def dateutil_parse(timestr, default,
ignoretz=False, tzinfos=None,
**kwargs):
@@ -247,7 +248,7 @@ def dateutil_parse(timestr, default,
res = DEFAULTPARSER._parse(StringIO(timestr), **kwargs)
if res is None:
- raise ValueError, "unknown string format"
+ raise ValueError("unknown string format")
repl = {}
for attr in ["year", "month", "day", "hour",
@@ -261,7 +262,7 @@ def dateutil_parse(timestr, default,
ret = default.replace(**repl)
if res.weekday is not None and not res.day:
- ret = ret+relativedelta.relativedelta(weekday=res.weekday)
+ ret = ret + relativedelta.relativedelta(weekday=res.weekday)
if not ignoretz:
if callable(tzinfos) or tzinfos and res.tzname in tzinfos:
if callable(tzinfos):
@@ -275,8 +276,8 @@ def dateutil_parse(timestr, default,
elif isinstance(tzdata, int):
tzinfo = tz.tzoffset(res.tzname, tzdata)
else:
- raise ValueError, "offset must be tzinfo subclass, " \
- "tz string, or int offset"
+ raise ValueError("offset must be tzinfo subclass, "
+ "tz string, or int offset")
ret = ret.replace(tzinfo=tzinfo)
elif res.tzname and res.tzname in time.tzname:
ret = ret.replace(tzinfo=tz.tzlocal())
@@ -286,6 +287,7 @@ def dateutil_parse(timestr, default,
ret = ret.replace(tzinfo=tz.tzoffset(res.tzname, res.tzoffset))
return ret, reso
+
def _attempt_monthly(val):
pats = ['%Y-%m', '%m-%Y', '%b %Y', '%b-%Y']
for pat in pats:
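Several hunks in this file replace the Python 2-only statement form `raise ValueError, "msg"` with the call form `raise ValueError("msg")`, which parses on both Python 2 and 3. A small illustrative sketch (the function name here is hypothetical, not from the patch):

```python
def parse_or_fail(timestr):
    # call-form raise works on Python 2 and 3; the old statement form
    # `raise ValueError, "unknown string format"` is a SyntaxError on 3.x
    if not timestr:
        raise ValueError("unknown string format")
    return timestr.strip()


try:
    parse_or_fail("")
except ValueError as exc:
    message = str(exc)
```

The `except ... as exc` spelling is likewise the only form accepted by Python 3, matching the compatibility direction of this patch.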
diff --git a/pandas/util/clipboard.py b/pandas/util/clipboard.py
index 4136df072c6b6..bc58af8c0ea3c 100644
--- a/pandas/util/clipboard.py
+++ b/pandas/util/clipboard.py
@@ -49,7 +49,7 @@ def win32_clipboard_get():
import win32clipboard
except ImportError:
message = ("Getting text from the clipboard requires the pywin32 "
- "extensions: http://sourceforge.net/projects/pywin32/")
+ "extensions: http://sourceforge.net/projects/pywin32/")
raise Exception(message)
win32clipboard.OpenClipboard()
text = win32clipboard.GetClipboardData(win32clipboard.CF_TEXT)
@@ -62,7 +62,7 @@ def osx_clipboard_get():
""" Get the clipboard's text on OS X.
"""
p = subprocess.Popen(['pbpaste', '-Prefer', 'ascii'],
- stdout=subprocess.PIPE)
+ stdout=subprocess.PIPE)
text, stderr = p.communicate()
# Text comes in with old Mac \r line endings. Change them to \n.
text = text.replace('\r', '\n')
@@ -80,7 +80,7 @@ def tkinter_clipboard_get():
import Tkinter
except ImportError:
message = ("Getting text from the clipboard on this platform "
- "requires Tkinter.")
+ "requires Tkinter.")
raise Exception(message)
root = Tkinter.Tk()
root.withdraw()
@@ -109,7 +109,7 @@ def osx_clipboard_set(text):
""" Get the clipboard's text on OS X.
"""
p = subprocess.Popen(['pbcopy', '-Prefer', 'ascii'],
- stdin=subprocess.PIPE)
+ stdin=subprocess.PIPE)
p.communicate(input=text)
diff --git a/pandas/util/compat.py b/pandas/util/compat.py
index 24cc28f1193d1..41055f48c2fac 100644
--- a/pandas/util/compat.py
+++ b/pandas/util/compat.py
@@ -34,7 +34,8 @@ class _OrderedDict(dict):
# An inherited dict maps keys to values.
# The inherited dict provides __getitem__, __len__, __contains__, and get.
# The remaining methods are order-aware.
- # Big-O running times for all methods are the same as for regular dictionaries.
+ # Big-O running times for all methods are the same as for regular
+ # dictionaries.
# The internal self.__map dictionary maps keys to links in a doubly linked list.
# The circular doubly linked list starts and ends with a sentinel element.
@@ -60,7 +61,8 @@ def __init__(self, *args, **kwds):
def __setitem__(self, key, value, dict_setitem=dict.__setitem__):
'od.__setitem__(i, y) <==> od[i]=y'
# Setting a new item creates a new link which goes at the end of the linked
- # list, and the inherited dictionary is updated with the new key/value pair.
+ # list, and the inherited dictionary is updated with the new key/value
+ # pair.
if key not in self:
root = self.__root
last = root[0]
@@ -70,7 +72,8 @@ def __setitem__(self, key, value, dict_setitem=dict.__setitem__):
def __delitem__(self, key, dict_delitem=dict.__delitem__):
'od.__delitem__(y) <==> del od[y]'
# Deleting an existing item uses self.__map to find the link which is
- # then removed by updating the links in the predecessor and successor nodes.
+ # then removed by updating the links in the predecessor and successor
+ # nodes.
dict_delitem(self, key)
link_prev, link_next, key = self.__map.pop(key)
link_prev[1] = link_next
@@ -254,7 +257,7 @@ def __eq__(self, other):
'''
if isinstance(other, OrderedDict):
- return len(self)==len(other) and self.items() == other.items()
+ return len(self) == len(other) and self.items() == other.items()
return dict.__eq__(self, other)
def __ne__(self, other):
@@ -284,6 +287,7 @@ def viewitems(self):
except ImportError:
pass
+
class _Counter(dict):
'''Dict subclass for counting hashable objects. Sometimes called a bag
or multiset. Elements are stored as dictionary keys and their counts
@@ -364,7 +368,8 @@ def update(self, iterable=None, **kwds):
for elem, count in iterable.iteritems():
self[elem] = self_get(elem, 0) + count
else:
- dict.update(self, iterable) # fast path when counter is empty
+ dict.update(
+ self, iterable) # fast path when counter is empty
else:
self_get = self.get
for elem in iterable:
@@ -465,8 +470,8 @@ def __and__(self, other):
result[elem] = newcount
return result
-if sys.version_info[:2] < (2,7):
- OrderedDict=_OrderedDict
- Counter=_Counter
+if sys.version_info[:2] < (2, 7):
+ OrderedDict = _OrderedDict
+ Counter = _Counter
else:
from collections import OrderedDict, Counter
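The `compat` module above gates its bundled `OrderedDict`/`Counter` backports on `sys.version_info`, falling back to the stdlib on 2.7+. The same version-gated import pattern in a minimal, self-contained form (the `_OrderedDict` stand-in is a placeholder, not the real backport):

```python
import sys


class _OrderedDict(dict):
    """Placeholder for the bundled backport, used only on old interpreters."""
    pass


if sys.version_info[:2] < (2, 7):
    OrderedDict = _OrderedDict           # pre-2.7: use the backport
else:
    from collections import OrderedDict  # 2.7+: stdlib implementation

d = OrderedDict()
d['a'] = 1
d['b'] = 2
```

On any modern interpreter the stdlib class is selected and insertion order is preserved, so `list(d)` is `['a', 'b']`.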
diff --git a/pandas/util/decorators.py b/pandas/util/decorators.py
index bef3ffc569df1..15ab39d07ec4d 100644
--- a/pandas/util/decorators.py
+++ b/pandas/util/decorators.py
@@ -46,7 +46,8 @@ def some_function(x):
"%s %s wrote the Raven"
"""
def __init__(self, *args, **kwargs):
- assert not (args and kwargs), "Only positional or keyword args are allowed"
+ assert not (
+ args and kwargs), "Only positional or keyword args are allowed"
self.params = args or kwargs
def __call__(self, func):
@@ -173,7 +174,7 @@ def knownfail_decorator(f):
def knownfailer(*args, **kwargs):
if fail_val():
- raise KnownFailureTest, msg
+ raise KnownFailureTest(msg)
else:
return f(*args, **kwargs)
return nose.tools.make_decorator(f)(knownfailer)
diff --git a/pandas/util/misc.py b/pandas/util/misc.py
index 25edfb7453e27..8372ba56d00cd 100644
--- a/pandas/util/misc.py
+++ b/pandas/util/misc.py
@@ -1,4 +1,3 @@
def exclusive(*args):
count = sum([arg is not None for arg in args])
return count == 1
-
diff --git a/pandas/util/py3compat.py b/pandas/util/py3compat.py
index c72133c80046a..dcc877b094dda 100644
--- a/pandas/util/py3compat.py
+++ b/pandas/util/py3compat.py
@@ -38,4 +38,3 @@ def bytes_to_str(b, encoding='ascii'):
from io import BytesIO
except:
from cStringIO import StringIO as BytesIO
-
diff --git a/pandas/util/terminal.py b/pandas/util/terminal.py
index 312f54b521e90..fbdb239db30ae 100644
--- a/pandas/util/terminal.py
+++ b/pandas/util/terminal.py
@@ -27,8 +27,8 @@ def get_terminal_size():
tuple_xy = _get_terminal_size_tput()
# needed for window's python in cygwin's xterm!
if current_os == 'Linux' or \
- current_os == 'Darwin' or \
- current_os.startswith('CYGWIN'):
+ current_os == 'Darwin' or \
+ current_os.startswith('CYGWIN'):
tuple_xy = _get_terminal_size_linux()
if tuple_xy is None:
tuple_xy = (80, 25) # default value
@@ -52,7 +52,7 @@ def _get_terminal_size_windows():
if res:
import struct
(bufx, bufy, curx, cury, wattr, left, top, right, bottom, maxx,
- maxy) = struct.unpack("hhhhHhhhhhh", csbi.raw)
+ maxy) = struct.unpack("hhhhHhhhhhh", csbi.raw)
sizex = right - left + 1
sizey = bottom - top + 1
return sizex, sizey
@@ -88,7 +88,8 @@ def ioctl_GWINSZ(fd):
import termios
import struct
import os
- cr = struct.unpack('hh', fcntl.ioctl(fd, termios.TIOCGWINSZ, '1234'))
+ cr = struct.unpack(
+ 'hh', fcntl.ioctl(fd, termios.TIOCGWINSZ, '1234'))
except:
return None
return cr
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 45022353c1ccd..a9a6bab893ac1 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -114,7 +114,7 @@ def assert_almost_equal(a, b):
if isinstance(a, (bool, float, int)):
if np.isinf(a):
- assert np.isinf(b), err_msg(a,b)
+ assert np.isinf(b), err_msg(a, b)
# case for zero
elif abs(a) < 1e-5:
np.testing.assert_almost_equal(
@@ -202,6 +202,7 @@ def assert_panel_equal(left, right, check_panel_type=False):
for col in right:
assert(col in left)
+
def assert_panel4d_equal(left, right):
assert(left.labels.equals(right.labels))
assert(left.items.equals(right.items))
@@ -215,6 +216,7 @@ def assert_panel4d_equal(left, right):
for col in right:
assert(col in left)
+
def assert_contains_all(iterable, dic):
for k in iterable:
assert(k in dic)
@@ -332,9 +334,11 @@ def makePanel(nper=None):
data = dict((c, makeTimeDataFrame(nper)) for c in cols)
return Panel.fromDict(data)
+
def makePanel4D(nper=None):
- return Panel4D(dict(l1 = makePanel(nper), l2 = makePanel(nper),
- l3 = makePanel(nper)))
+ return Panel4D(dict(l1=makePanel(nper), l2=makePanel(nper),
+ l3=makePanel(nper)))
+
def makeCustomIndex(nentries, nlevels, prefix='#', names=False, ndupe_l=None,
idx_type=None):
@@ -362,15 +366,15 @@ def makeCustomIndex(nentries, nlevels, prefix='#', names=False, ndupe_l=None,
if ndupe_l is None:
ndupe_l = [1] * nentries
assert len(ndupe_l) <= nentries
- assert names is None or names == False or names == True or len(names) \
- == nlevels
+ assert (names is None or names is False
+ or names is True or len(names) == nlevels)
assert idx_type is None or \
- (idx_type in ('i', 'f', 's', 'u', 'dt') and nlevels == 1)
+ (idx_type in ('i', 'f', 's', 'u', 'dt') and nlevels == 1)
- if names == True:
+ if names is True:
# build default names
names = [prefix + str(i) for i in range(nlevels)]
- if names == False:
+ if names is False:
# pass None to index constructor for no name
names = None
@@ -399,7 +403,7 @@ def makeCustomIndex(nentries, nlevels, prefix='#', names=False, ndupe_l=None,
tuples = []
for i in range(nlevels):
- #build a list of lists to create the index from
+ # build a list of lists to create the index from
div_factor = nentries // ndupe_l[i] + 1
cnt = Counter()
for j in range(div_factor):
@@ -409,7 +413,7 @@ def makeCustomIndex(nentries, nlevels, prefix='#', names=False, ndupe_l=None,
result = list(sorted(cnt.elements()))[:nentries]
tuples.append(result)
- tuples=zip(*tuples)
+ tuples = zip(*tuples)
# convert tuples to index
if nentries == 1:
@@ -418,6 +422,7 @@ def makeCustomIndex(nentries, nlevels, prefix='#', names=False, ndupe_l=None,
index = MultiIndex.from_tuples(tuples, names=names)
return index
+
def makeCustomDataframe(nrows, ncols, c_idx_names=True, r_idx_names=True,
c_idx_nlevels=1, r_idx_nlevels=1, data_gen_f=None,
c_ndupe_l=None, r_ndupe_l=None, dtype=None,
@@ -476,9 +481,9 @@ def makeCustomDataframe(nrows, ncols, c_idx_names=True, r_idx_names=True,
assert c_idx_nlevels > 0
assert r_idx_nlevels > 0
assert r_idx_type is None or \
- (r_idx_type in ('i', 'f', 's', 'u', 'dt') and r_idx_nlevels == 1)
+ (r_idx_type in ('i', 'f', 's', 'u', 'dt') and r_idx_nlevels == 1)
assert c_idx_type is None or \
- (c_idx_type in ('i', 'f', 's', 'u', 'dt') and c_idx_nlevels == 1)
+ (c_idx_type in ('i', 'f', 's', 'u', 'dt') and c_idx_nlevels == 1)
columns = makeCustomIndex(ncols, nlevels=c_idx_nlevels, prefix='C',
names=c_idx_names, ndupe_l=c_ndupe_l,
@@ -489,12 +494,13 @@ def makeCustomDataframe(nrows, ncols, c_idx_names=True, r_idx_names=True,
# by default, generate data based on location
if data_gen_f is None:
- data_gen_f = lambda r, c: "R%dC%d" % (r,c)
+ data_gen_f = lambda r, c: "R%dC%d" % (r, c)
data = [[data_gen_f(r, c) for c in range(ncols)] for r in range(nrows)]
return DataFrame(data, index, columns, dtype=dtype)
+
def add_nans(panel):
I, J, N = panel.shape
for i, item in enumerate(panel.items):
@@ -502,11 +508,13 @@ def add_nans(panel):
for j, col in enumerate(dm.columns):
dm[col][:i + j] = np.NaN
+
def add_nans_panel4d(panel4d):
for l, label in enumerate(panel4d.labels):
panel = panel4d[label]
add_nans(panel)
+
class TestSubDict(dict):
def __init__(self, *args, **kwargs):
dict.__init__(self, *args, **kwargs)
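The `makeCustomIndex` hunks above switch `names == True` / `names == False` to identity checks (`names is True`, `names is False`) because `names` may also be a list of labels, and equality against a bool is the wrong test for dispatching on the sentinel values. A hedged sketch of the same dispatch (the helper name is hypothetical):

```python
def build_names(names, nlevels, prefix='#'):
    # identity checks single out the boolean sentinels; `names == True`
    # would conflate them with other truthy values such as non-empty lists
    if names is True:
        # build default names, one per level
        return [prefix + str(i) for i in range(nlevels)]
    if names is False or names is None:
        return None  # no names on the index
    assert len(names) == nlevels
    return list(names)
```

For example, `build_names(True, 2)` produces `['#0', '#1']`, while an explicit list is passed through unchanged.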
diff --git a/scripts/bench_join.py b/scripts/bench_join.py
index d838cf9387269..be24dac810aee 100644
--- a/scripts/bench_join.py
+++ b/scripts/bench_join.py
@@ -9,10 +9,11 @@
pct_overlap = 0.2
a = np.arange(n, dtype=np.int64)
-b = np.arange(n * pct_overlap, n*(1+pct_overlap), dtype=np.int64)
+b = np.arange(n * pct_overlap, n * (1 + pct_overlap), dtype=np.int64)
dr1 = DateRange('1/1/2000', periods=n, offset=datetools.Minute())
-dr2 = DateRange(dr1[int(pct_overlap*n)], periods=n, offset=datetools.Minute(2))
+dr2 = DateRange(
+ dr1[int(pct_overlap * n)], periods=n, offset=datetools.Minute(2))
aobj = a.astype(object)
bobj = b.astype(object)
@@ -29,11 +30,13 @@
a_frame = DataFrame(avf, index=a, columns=range(K))
b_frame = DataFrame(bvf, index=b, columns=range(K, 2 * K))
+
def do_left_join(a, b, av, bv):
out = np.empty((len(a), 2))
lib.left_join_1d(a, b, av, bv, out)
return out
+
def do_outer_join(a, b, av, bv):
result_index, aindexer, bindexer = lib.outer_join_indexer(a, b)
result = np.empty((2, len(result_index)))
@@ -41,6 +44,7 @@ def do_outer_join(a, b, av, bv):
lib.take_1d(bv, bindexer, result[1])
return result_index, result
+
def do_inner_join(a, b, av, bv):
result_index, aindexer, bindexer = lib.inner_join_indexer(a, b)
result = np.empty((2, len(result_index)))
@@ -53,6 +57,7 @@ def do_inner_join(a, b, av, bv):
from pandas.util.testing import set_trace
+
def do_left_join_python(a, b, av, bv):
indexer, mask = lib.ordered_left_join_int64(a, b)
@@ -68,12 +73,14 @@ def do_left_join_python(a, b, av, bv):
np.putmask(bchunk, np.tile(mask, bk), np.nan)
return result
+
def _take_multi(data, indexer, out):
if not data.flags.c_contiguous:
data = data.copy()
for i in xrange(data.shape[0]):
data[i].take(indexer, out=out[i])
+
def do_left_join_multi(a, b, av, bv):
n, ak = av.shape
_, bk = bv.shape
@@ -81,6 +88,7 @@ def do_left_join_multi(a, b, av, bv):
lib.left_join_2d(a, b, av, bv, result)
return result
+
def do_outer_join_multi(a, b, av, bv):
n, ak = av.shape
_, bk = bv.shape
@@ -92,6 +100,7 @@ def do_outer_join_multi(a, b, av, bv):
# lib.take_axis0(bv, lindexer, out=result[ak:].T)
return result_index, result
+
def do_inner_join_multi(a, b, av, bv):
n, ak = av.shape
_, bk = bv.shape
@@ -103,6 +112,7 @@ def do_inner_join_multi(a, b, av, bv):
# lib.take_axis0(bv, lindexer, out=result[ak:].T)
return result_index, result
+
def do_left_join_multi_v2(a, b, av, bv):
indexer, mask = lib.ordered_left_join_int64(a, b)
bv_taken = bv.take(indexer, axis=0)
@@ -113,6 +123,7 @@ def do_left_join_multi_v2(a, b, av, bv):
def do_left_join_series(a, b):
return b.reindex(a.index)
+
def do_left_join_frame(a, b):
a.index._indexMap = None
b.index._indexMap = None
@@ -125,14 +136,16 @@ def do_left_join_frame(a, b):
out = np.empty((10, 120000))
+
def join(a, b, av, bv, how="left"):
- func_dict = {'left' : do_left_join_multi,
- 'outer' : do_outer_join_multi,
- 'inner' : do_inner_join_multi}
+ func_dict = {'left': do_left_join_multi,
+ 'outer': do_outer_join_multi,
+ 'inner': do_inner_join_multi}
f = func_dict[how]
return f(a, b, av, bv)
+
def bench_python(n=100000, pct_overlap=0.20, K=1):
import gc
ns = [2, 3, 4, 5, 6]
@@ -142,7 +155,7 @@ def bench_python(n=100000, pct_overlap=0.20, K=1):
all_results = {}
for logn in ns:
- n = 10**logn
+ n = 10 ** logn
a = np.arange(n, dtype=np.int64)
b = np.arange(n * pct_overlap, n * pct_overlap + n, dtype=np.int64)
@@ -171,6 +184,7 @@ def bench_python(n=100000, pct_overlap=0.20, K=1):
return DataFrame(all_results, index=kinds)
+
def bench_xts(n=100000, pct_overlap=0.20):
from pandas.rpy.common import r
r('a <- 5')
@@ -194,4 +208,3 @@ def bench_xts(n=100000, pct_overlap=0.20):
elapsed = r('as.list(system.time(%s, gcFirst=F))$elapsed' % stmt)[0]
result[kind] = (elapsed / iterations) * 1000
return Series(result)
-
diff --git a/scripts/bench_join_multi.py b/scripts/bench_join_multi.py
index a0babaf59b990..cdac37f289bb8 100644
--- a/scripts/bench_join_multi.py
+++ b/scripts/bench_join_multi.py
@@ -12,19 +12,21 @@
zipped = izip(key1, key2)
+
def _zip(*args):
arr = np.empty(N, dtype=object)
arr[:] = zip(*args)
return arr
+
def _zip2(*args):
return lib.list_to_object_array(zip(*args))
index = MultiIndex.from_arrays([key1, key2])
-to_join = DataFrame({'j1' : np.random.randn(100000)}, index=index)
+to_join = DataFrame({'j1': np.random.randn(100000)}, index=index)
-data = DataFrame({'A' : np.random.randn(500000),
- 'key1' : np.repeat(key1, 5),
- 'key2' : np.repeat(key2, 5)})
+data = DataFrame({'A': np.random.randn(500000),
+ 'key1': np.repeat(key1, 5),
+ 'key2': np.repeat(key2, 5)})
# data.join(to_join, on=['key1', 'key2'])
diff --git a/scripts/bench_refactor.py b/scripts/bench_refactor.py
index 5ae36f7ddd058..3d0c7e40ced7d 100644
--- a/scripts/bench_refactor.py
+++ b/scripts/bench_refactor.py
@@ -11,6 +11,7 @@
N = 1000
K = 500
+
def horribly_unconsolidated():
index = np.arange(N)
@@ -21,16 +22,19 @@ def horribly_unconsolidated():
return df
+
def bench_reindex_index(df, it=100):
new_idx = np.arange(0, N, 2)
for i in xrange(it):
df.reindex(new_idx)
+
def bench_reindex_columns(df, it=100):
new_cols = np.arange(0, K, 2)
for i in xrange(it):
df.reindex(columns=new_cols)
+
def bench_join_index(df, it=10):
left = df.reindex(index=np.arange(0, N, 2),
columns=np.arange(K // 2))
diff --git a/scripts/faster_xs.py b/scripts/faster_xs.py
index a539642b78185..2bb6271124c4f 100644
--- a/scripts/faster_xs.py
+++ b/scripts/faster_xs.py
@@ -13,4 +13,3 @@
blocks = df._data.blocks
items = df.columns
-
diff --git a/scripts/file_sizes.py b/scripts/file_sizes.py
index edbd23c80533d..8720730d2bb10 100644
--- a/scripts/file_sizes.py
+++ b/scripts/file_sizes.py
@@ -17,9 +17,11 @@
loc = '.'
walked = os.walk(loc)
+
def _should_count_file(path):
return path.endswith('.py') or path.endswith('.pyx')
+
def _is_def_line(line):
"""def/cdef/cpdef, but not `cdef class`"""
return (line.endswith(':') and not 'class' in line.split() and
@@ -28,6 +30,7 @@ def _is_def_line(line):
line.startswith('cpdef ') or
' def ' in line or ' cdef ' in line or ' cpdef ' in line))
+
class LengthCounter(object):
"""
should add option for subtracting nested function lengths??
@@ -100,15 +103,18 @@ def _push_count(self, start_pos):
self.counts.append(len(func_lines))
+
def _get_indent_level(line):
level = 0
while line.startswith(' ' * level):
level += 1
return level
+
def _is_triplequote(line):
return line.startswith('"""') or line.startswith("'''")
+
def _get_file_function_lengths(path):
lines = [x.rstrip() for x in open(path).readlines()]
counter = LengthCounter(lines)
@@ -145,6 +151,7 @@ def x():
result = counter.get_counts()
assert(result == expected)
+
def doit():
for directory, _, files in walked:
print directory
@@ -160,8 +167,9 @@ def doit():
names.append(path)
lengths.append(lines)
- result = DataFrame({'dirs' : dirs, 'names' : names,
- 'lengths' : lengths})
+ result = DataFrame({'dirs': dirs, 'names': names,
+ 'lengths': lengths})
+
def doit2():
counts = {}
diff --git a/scripts/gen_release_notes.py b/scripts/gen_release_notes.py
index 4c12a43320923..c2ebbc88ed580 100644
--- a/scripts/gen_release_notes.py
+++ b/scripts/gen_release_notes.py
@@ -15,9 +15,10 @@ def __eq__(self, other):
return self.number == other.number
return False
+
class Issue(object):
- def __init__(self, title, labels, number, milestone, body, state):
+ def __init__(self, title, labels, number, milestone, body, state):
self.title = title
self.labels = set([x['name'] for x in labels])
self.number = number
@@ -30,6 +31,7 @@ def __eq__(self, other):
return self.number == other.number
return False
+
def get_issues():
all_issues = []
page_number = 1
@@ -41,6 +43,7 @@ def get_issues():
all_issues.extend(iss)
return all_issues
+
def _get_page(page_number):
gh_url = ('https://api.github.com/repos/pydata/pandas/issues?'
'milestone=*&state=closed&assignee=*&page=%d') % page_number
@@ -52,11 +55,13 @@ def _get_page(page_number):
for x in jsondata]
return issues
+
def get_milestone(data):
if data is None:
return None
return Milestone(data['title'], data['number'])
+
def collate_label(issues, label):
lines = []
for x in issues:
@@ -65,6 +70,7 @@ def collate_label(issues, label):
return '\n'.join(lines)
+
def release_notes(milestone):
issues = get_issues()
diff --git a/scripts/groupby_sample.py b/scripts/groupby_sample.py
index 63638ede83097..8685b2bbe8ff7 100644
--- a/scripts/groupby_sample.py
+++ b/scripts/groupby_sample.py
@@ -4,42 +4,46 @@
g1 = np.array(list(string.letters))[:-1]
g2 = np.arange(510)
-df_small = DataFrame({'group1' : ["a","b","a","a","b","c","c","c","c",
- "c","a","a","a","b","b","b","b"],
- 'group2' : [1,2,3,4,1,3,5,6,5,4,1,2,3,4,3,2,1],
- 'value' : ["apple","pear","orange","apple",
- "banana","durian","lemon","lime",
- "raspberry","durian","peach","nectarine",
- "banana","lemon","guava","blackberry",
- "grape"]})
+df_small = DataFrame({'group1': ["a", "b", "a", "a", "b", "c", "c", "c", "c",
+ "c", "a", "a", "a", "b", "b", "b", "b"],
+ 'group2': [1, 2, 3, 4, 1, 3, 5, 6, 5, 4, 1, 2, 3, 4, 3, 2, 1],
+ 'value': ["apple", "pear", "orange", "apple",
+ "banana", "durian", "lemon", "lime",
+ "raspberry", "durian", "peach", "nectarine",
+ "banana", "lemon", "guava", "blackberry",
+ "grape"]})
value = df_small['value'].values.repeat(3)
-df = DataFrame({'group1' : g1.repeat(4000 * 5),
- 'group2' : np.tile(g2, 400 * 5),
- 'value' : value.repeat(4000 * 5)})
+df = DataFrame({'group1': g1.repeat(4000 * 5),
+ 'group2': np.tile(g2, 400 * 5),
+ 'value': value.repeat(4000 * 5)})
def random_sample():
- grouped = df.groupby(['group1','group2'])['value']
+ grouped = df.groupby(['group1', 'group2'])['value']
from random import choice
choose = lambda group: choice(group.index)
indices = grouped.apply(choose)
return df.reindex(indices)
+
def random_sample_v2():
- grouped = df.groupby(['group1','group2'])['value']
+ grouped = df.groupby(['group1', 'group2'])['value']
from random import choice
choose = lambda group: choice(group.index)
indices = [choice(v) for k, v in grouped.groups.iteritems()]
return df.reindex(indices)
+
def do_shuffle(arr):
- from random import shuffle
- result = arr.copy().values
- shuffle(result)
- return result
+ from random import shuffle
+ result = arr.copy().values
+ shuffle(result)
+ return result
+
-def shuffle_uri(df,grouped):
- perm = np.r_[tuple([np.random.permutation(idxs) for idxs in grouped.groups.itervalues()])]
+def shuffle_uri(df, grouped):
+ perm = np.r_[tuple([np.random.permutation(
+ idxs) for idxs in grouped.groups.itervalues()])]
df['state_permuted'] = np.asarray(df.ix[perm]['value'])
df2 = df.copy()
diff --git a/scripts/groupby_speed.py b/scripts/groupby_speed.py
index c0fa44957d149..a25b00206733d 100644
--- a/scripts/groupby_speed.py
+++ b/scripts/groupby_speed.py
@@ -9,23 +9,26 @@
gp = rng5.asof
grouped = df.groupby(gp)
+
def get1(dt):
k = gp(dt)
return grouped.get_group(k)
+
def get2(dt):
k = gp(dt)
return df.ix[grouped.groups[k]]
+
def f():
for i, date in enumerate(df.index):
if i % 10000 == 0:
print i
get1(date)
+
def g():
for i, date in enumerate(df.index):
if i % 10000 == 0:
print i
get2(date)
-
diff --git a/scripts/groupby_test.py b/scripts/groupby_test.py
index 6e4177e2fb0f1..76c9cb0cb3bc5 100644
--- a/scripts/groupby_test.py
+++ b/scripts/groupby_test.py
@@ -105,7 +105,8 @@
# f = lambda x: x
-# transformed = df.groupby(lambda x: x.strftime('%m/%y')).transform(lambda x: x)
+# transformed = df.groupby(lambda x: x.strftime('%m/%y')).transform(lambda
+# x: x)
# def ohlc(group):
# return Series([group[0], group.max(), group.min(), group[-1]],
@@ -133,10 +134,11 @@
b = np.tile(np.arange(100), 100)
index = MultiIndex.from_arrays([a, b])
s = Series(np.random.randn(len(index)), index)
-df = DataFrame({'A' : s})
+df = DataFrame({'A': s})
df['B'] = df.index.get_level_values(0)
df['C'] = df.index.get_level_values(1)
+
def f():
for x in df.groupby(['B', 'B']):
pass
diff --git a/scripts/parser_magic.py b/scripts/parser_magic.py
index 4eec900b880dd..c35611350988c 100644
--- a/scripts/parser_magic.py
+++ b/scripts/parser_magic.py
@@ -6,10 +6,12 @@
import inspect
import sys
+
def merge(a, b):
f, args, _ = parse_stmt(inspect.currentframe().f_back)
- return DataFrame({args[0] : a,
- args[1] : b})
+ return DataFrame({args[0]: a,
+ args[1]: b})
+
def parse_stmt(frame):
info = inspect.getframeinfo(frame)
@@ -22,6 +24,7 @@ def parse_stmt(frame):
call = body
return _parse_call(call)
+
def _parse_call(call):
func = _maybe_format_attribute(call.func)
@@ -35,6 +38,7 @@ def _parse_call(call):
return func, str_args, {}
+
def _format_call(call):
func, args, kwds = _parse_call(call)
content = ''
@@ -49,11 +53,13 @@ def _format_call(call):
content += joined_kwds
return '%s(%s)' % (func, content)
+
def _maybe_format_attribute(name):
if isinstance(name, ast.Attribute):
return _format_attribute(name)
return name.id
+
def _format_attribute(attr):
obj = attr.value
if isinstance(attr.value, ast.Attribute):
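The `parser_magic.py` trick above inspects the caller's source so `merge(a, b)` can label its output columns `'a'` and `'b'`. The core of it, parsing a call and recovering argument *names*, can be sketched with `ast` alone (parsing a source string here rather than a live stack frame, to stay self-contained; `arg_names` is a hypothetical helper):

```python
import ast

def arg_names(source):
    # Parse a single call expression and return the names of its
    # plain-variable arguments.
    call = ast.parse(source, mode="eval").body
    assert isinstance(call, ast.Call)
    return [a.id for a in call.args if isinstance(a, ast.Name)]

print(arg_names("merge(prices, volumes)"))  # -> ['prices', 'volumes']
```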
diff --git a/scripts/preepoch_test.py b/scripts/preepoch_test.py
index b65f09c8172ff..59066ba832cd0 100644
--- a/scripts/preepoch_test.py
+++ b/scripts/preepoch_test.py
@@ -1,20 +1,21 @@
import numpy as np
from pandas import *
+
def panda_test():
# generate some data
- data = np.random.rand(50,5)
+ data = np.random.rand(50, 5)
# generate some dates
- dates = DateRange('1/1/1969',periods=50)
+ dates = DateRange('1/1/1969', periods=50)
# generate column headings
- cols = ['A','B','C','D','E']
+ cols = ['A', 'B', 'C', 'D', 'E']
- df = DataFrame(data,index=dates,columns=cols)
+ df = DataFrame(data, index=dates, columns=cols)
# save to HDF5Store
store = HDFStore('bugzilla.h5', mode='w')
- store['df'] = df # This gives: OverflowError: mktime argument out of range
+ store['df'] = df # This gives: OverflowError: mktime argument out of range
store.close()
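The script above reproduces an `OverflowError` from `mktime` because its index starts at 1/1/1969: pre-epoch timestamps are negative, which `mktime`-based code cannot represent on some platforms. Signed offsets relative to the epoch handle them fine, as a small stdlib check shows:

```python
from datetime import datetime, timezone

# 1969 predates the Unix epoch, so its offset in seconds is negative.
d = datetime(1969, 1, 1, tzinfo=timezone.utc)
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
seconds = (d - epoch).total_seconds()
print(seconds)  # -> -31536000.0 (365 days before the epoch)
```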
diff --git a/scripts/pypistats.py b/scripts/pypistats.py
index 6054c356f8c0b..e64be63551fde 100644
--- a/scripts/pypistats.py
+++ b/scripts/pypistats.py
@@ -14,6 +14,7 @@
locale.setlocale(locale.LC_ALL, '')
+
class PyPIDownloadAggregator(object):
def __init__(self, package_name, include_hidden=True):
@@ -30,7 +31,8 @@ def releases(self):
self.include_hidden)
if len(result) == 0:
- # no matching package--search for possibles, and limit to 15 results
+ # no matching package--search for possibles, and limit to 15
+ # results
results = self.proxy.search({
'name': self.package_name,
'description': self.package_name
diff --git a/scripts/roll_median_leak.py b/scripts/roll_median_leak.py
index b7e412390a22f..6441a69f3a8bf 100644
--- a/scripts/roll_median_leak.py
+++ b/scripts/roll_median_leak.py
@@ -20,5 +20,5 @@
for _ in xrange(10000):
print proc.get_memory_info()
- sdf = SparseDataFrame({'A' : lst.to_array()})
+ sdf = SparseDataFrame({'A': lst.to_array()})
chunk = sdf[sdf['A'] == 5]
diff --git a/scripts/runtests.py b/scripts/runtests.py
index 7816ac25db9d2..b995db65ac591 100644
--- a/scripts/runtests.py
+++ b/scripts/runtests.py
@@ -1,3 +1,4 @@
-import os; print os.getpid()
+import os
+print os.getpid()
import nose
nose.main('pandas.core')
diff --git a/scripts/testmed.py b/scripts/testmed.py
index 1184fee82efd3..ed0f76cd2f3fb 100644
--- a/scripts/testmed.py
+++ b/scripts/testmed.py
@@ -3,11 +3,14 @@
from random import random
from math import log, ceil
+
class Node(object):
__slots__ = 'value', 'next', 'width'
+
def __init__(self, value, next, width):
self.value, self.next, self.width = value, next, width
+
class End(object):
'Sentinel object that always compares greater than another object'
def __cmp__(self, other):
@@ -15,13 +18,14 @@ def __cmp__(self, other):
NIL = Node(End(), [], []) # Singleton terminator node
+
class IndexableSkiplist:
'Sorted collection supporting O(lg n) insertion, removal, and lookup by rank.'
def __init__(self, expected_size=100):
self.size = 0
self.maxlevels = int(1 + log(expected_size, 2))
- self.head = Node('HEAD', [NIL]*self.maxlevels, [1]*self.maxlevels)
+ self.head = Node('HEAD', [NIL] * self.maxlevels, [1] * self.maxlevels)
def __len__(self):
return self.size
@@ -48,7 +52,7 @@ def insert(self, value):
# insert a link to the newnode at each level
d = min(self.maxlevels, 1 - int(log(random(), 2.0)))
- newnode = Node(value, [None]*d, [None]*d)
+ newnode = Node(value, [None] * d, [None] * d)
steps = 0
for level in range(d):
prevnode = chain[level]
@@ -92,6 +96,7 @@ def __iter__(self):
from collections import deque
from itertools import islice
+
class RunningMedian:
'Fast running median with O(lg n) updates where n is the window size'
@@ -121,6 +126,7 @@ def __iter__(self):
import time
+
def test():
from numpy.random import randn
@@ -135,12 +141,14 @@ def _test(arr, k):
from numpy.random import randn
from pandas.lib.skiplist import rolling_median
+
def test2():
arr = randn(N)
return rolling_median(arr, K)
+
def runmany(f, arr, arglist):
timings = []
@@ -152,6 +160,7 @@ def runmany(f, arr, arglist):
return timings
+
def _time(f, *args):
_start = time.clock()
result = f(*args)
diff --git a/scripts/touchup_gh_issues.py b/scripts/touchup_gh_issues.py
index 924a236f629f6..96ee220f55a02 100755
--- a/scripts/touchup_gh_issues.py
+++ b/scripts/touchup_gh_issues.py
@@ -14,8 +14,10 @@
pat = "((?:\s*GH\s*)?)#(\d{3,4})([^_]|$)?"
rep_pat = r"\1GH\2_\3"
-anchor_pat =".. _GH{id}: https://github.com/pydata/pandas/issues/{id}"
+anchor_pat = ".. _GH{id}: https://github.com/pydata/pandas/issues/{id}"
section_pat = "^pandas\s[\d\.]+\s*$"
+
+
def main():
issues = OrderedDict()
while True:
@@ -24,19 +26,19 @@ def main():
if not line:
break
- if re.search(section_pat,line):
+ if re.search(section_pat, line):
for id in issues:
print(anchor_pat.format(id=id).rstrip())
if issues:
print("\n")
issues = OrderedDict()
- for m in re.finditer(pat,line):
+ for m in re.finditer(pat, line):
id = m.group(2)
if id not in issues:
issues[id] = True
- print(re.sub(pat, rep_pat,line).rstrip())
+ print(re.sub(pat, rep_pat, line).rstrip())
pass
if __name__ == "__main__":
- main()
+ main()
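The patterns in `touchup_gh_issues.py` above rewrite bare `#1234` issue references into reST `GH1234_` links and emit matching anchor targets. A self-contained run of those exact patterns (the sample changelog line is hypothetical):

```python
import re

# Patterns copied from scripts/touchup_gh_issues.py above.
pat = r"((?:\s*GH\s*)?)#(\d{3,4})([^_]|$)?"
rep_pat = r"\1GH\2_\3"
anchor_pat = ".. _GH{id}: https://github.com/pydata/pandas/issues/{id}"

line = "fixed a segfault (#2751)"
rewritten = re.sub(pat, rep_pat, line)
ids = [m.group(2) for m in re.finditer(pat, line)]
print(rewritten)                       # -> fixed a segfault (GH2751_)
print(anchor_pat.format(id=ids[0]))
```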
diff --git a/setup.py b/setup.py
index 8b9b6665b381f..eeb6cd3e871a8 100755
--- a/setup.py
+++ b/setup.py
@@ -12,18 +12,18 @@
import warnings
try:
- BUILD_CACHE_DIR=None
+ BUILD_CACHE_DIR = None
# uncomment to activate the build cache
- #BUILD_CACHE_DIR="/tmp/.pandas_build_cache/"
+ # BUILD_CACHE_DIR="/tmp/.pandas_build_cache/"
if os.isdir(BUILD_CACHE_DIR):
print("--------------------------------------------------------")
print("BUILD CACHE ACTIVATED. be careful, this is experimental.")
print("--------------------------------------------------------")
else:
- BUILD_CACHE_DIR=None
-except :
+ BUILD_CACHE_DIR = None
+except:
pass
# may need to work around setuptools bug by providing a fake Pyrex
@@ -52,7 +52,7 @@
if sys.version_info[0] >= 3:
min_numpy_ver = 1.6
- if sys.version_info[1] >= 3: # 3.3 needs numpy 1.7+
+ if sys.version_info[1] >= 3: # 3.3 needs numpy 1.7+
min_numpy_ver = "1.7.0b2"
setuptools_kwargs = {'use_2to3': True,
@@ -61,18 +61,18 @@
'pytz',
'numpy >= %s' % min_numpy_ver],
'use_2to3_exclude_fixers': ['lib2to3.fixes.fix_next',
- ],
- }
+ ],
+ }
if not _have_setuptools:
sys.exit("need setuptools/distribute for Py3k"
- "\n$ pip install distribute")
+ "\n$ pip install distribute")
else:
setuptools_kwargs = {
'install_requires': ['python-dateutil',
'pytz',
'numpy >= 1.6.1'],
- 'zip_safe' : False,
+ 'zip_safe': False,
}
if not _have_setuptools:
@@ -89,7 +89,7 @@
import numpy as np
except ImportError:
nonumpy_msg = ("# numpy needed to finish setup. run:\n\n"
- " $ pip install numpy # or easy_install numpy\n")
+ " $ pip install numpy # or easy_install numpy\n")
sys.exit(nonumpy_msg)
if np.__version__ < '1.6.1':
@@ -103,10 +103,10 @@
try:
from Cython.Distutils import build_ext
- #from Cython.Distutils import Extension # to get pyrex debugging symbols
- cython=True
+ # from Cython.Distutils import Extension # to get pyrex debugging symbols
+ cython = True
except ImportError:
- cython=False
+ cython = False
from os.path import splitext, basename, join as pjoin
@@ -215,8 +215,9 @@
stdout=subprocess.PIPE).stdout
except OSError:
# msysgit compatibility
- pipe = subprocess.Popen(["git.cmd", "rev-parse", "--short", "HEAD"],
- stdout=subprocess.PIPE).stdout
+ pipe = subprocess.Popen(
+ ["git.cmd", "rev-parse", "--short", "HEAD"],
+ stdout=subprocess.PIPE).stdout
rev = pipe.read().strip()
# makes distutils blow up on Python 2.7
if sys.version_info[0] >= 3:
@@ -228,13 +229,15 @@
else:
FULLVERSION += QUALIFIER
+
def write_version_py(filename=None):
cnt = """\
version = '%s'
short_version = '%s'
"""
if not filename:
- filename = os.path.join(os.path.dirname(__file__), 'pandas', 'version.py')
+ filename = os.path.join(
+ os.path.dirname(__file__), 'pandas', 'version.py')
a = open(filename, 'w')
try:
@@ -242,10 +245,11 @@ def write_version_py(filename=None):
finally:
a.close()
+
class CleanCommand(Command):
"""Custom distutils command to clean the .so and .pyc files."""
- user_options = [("all", "a", "") ]
+ user_options = [("all", "a", "")]
def initialize_options(self):
self.all = True
@@ -287,6 +291,7 @@ def run(self):
except Exception:
pass
+
class CheckSDist(sdist):
"""Custom sdist that ensures Cython has compiled all pyx files to c."""
@@ -314,20 +319,21 @@ def run(self):
self.run_command('cython')
else:
for pyxfile in self._pyxfiles:
- cfile = pyxfile[:-3]+'c'
- msg = "C-source file '%s' not found."%(cfile)+\
- " Run 'setup.py cython' before sdist."
+ cfile = pyxfile[:-3] + 'c'
+ msg = "C-source file '%s' not found." % (cfile) +\
+ " Run 'setup.py cython' before sdist."
assert os.path.isfile(cfile), msg
sdist.run(self)
+
class CheckingBuildExt(build_ext):
"""Subclass build_ext to get clearer report if Cython is necessary."""
def check_cython_extensions(self, extensions):
for ext in extensions:
- for src in ext.sources:
- if not os.path.exists(src):
- raise Exception("""Cython-generated file '%s' not found.
+ for src in ext.sources:
+ if not os.path.exists(src):
+ raise Exception("""Cython-generated file '%s' not found.
Cython is required to compile pandas from a development branch.
Please install Cython or download a release package of pandas.
""" % src)
@@ -339,39 +345,41 @@ def build_extensions(self):
for ext in self.extensions:
self.build_extension(ext)
+
class CompilationCacheMixin(object):
- def __init__(self,*args,**kwds):
- cache_dir=kwds.pop("cache_dir",BUILD_CACHE_DIR)
- self.cache_dir=cache_dir
+ def __init__(self, *args, **kwds):
+ cache_dir = kwds.pop("cache_dir", BUILD_CACHE_DIR)
+ self.cache_dir = cache_dir
if not os.path.isdir(cache_dir):
- raise Exception("Error: path to Cache directory (%s) is not a dir" % cache_dir);
+ raise Exception("Error: path to Cache directory (%s) is not a dir" % cache_dir)
- def _copy_from_cache(self,hash,target):
- src=os.path.join(self.cache_dir,hash)
+ def _copy_from_cache(self, hash, target):
+ src = os.path.join(self.cache_dir, hash)
if os.path.exists(src):
- # print("Cache HIT: asked to copy file %s in %s" % (src,os.path.abspath(target)))
- s="."
+ # print("Cache HIT: asked to copy file %s in %s" %
+ # (src,os.path.abspath(target)))
+ s = "."
for d in target.split(os.path.sep)[:-1]:
- s=os.path.join(s,d)
+ s = os.path.join(s, d)
if not os.path.exists(s):
os.mkdir(s)
- shutil.copyfile(src,target)
+ shutil.copyfile(src, target)
return True
return False
- def _put_to_cache(self,hash,src):
- target=os.path.join(self.cache_dir,hash)
+ def _put_to_cache(self, hash, src):
+ target = os.path.join(self.cache_dir, hash)
# print( "Cache miss: asked to copy file from %s to %s" % (src,target))
- s="."
+ s = "."
for d in target.split(os.path.sep)[:-1]:
- s=os.path.join(s,d)
+ s = os.path.join(s, d)
if not os.path.exists(s):
os.mkdir(s)
- shutil.copyfile(src,target)
+ shutil.copyfile(src, target)
- def _hash_obj(self,obj):
+ def _hash_obj(self, obj):
"""
you should override this method to provide a sensible
implementation of hashing functions for your intended objects
@@ -390,7 +398,7 @@ def get_ext_fullpath(self, ext_name):
"""
import string
# makes sure the extension name is only using dots
- all_dots = string.maketrans('/'+os.sep, '..')
+ all_dots = string.maketrans('/' + os.sep, '..')
ext_name = ext_name.translate(all_dots)
fullname = self.get_ext_fullname(ext_name)
@@ -402,7 +410,7 @@ def get_ext_fullpath(self, ext_name):
# no further work needed
# returning :
# build_dir/package/path/filename
- filename = os.path.join(*modpath[:-1]+[filename])
+ filename = os.path.join(*modpath[:-1] + [filename])
return os.path.join(self.build_lib, filename)
# the inplace option requires to find the package directory
@@ -415,23 +423,24 @@ def get_ext_fullpath(self, ext_name):
# package_dir/filename
return os.path.join(package_dir, filename)
+
class CompilationCacheExtMixin(CompilationCacheMixin):
- def __init__(self,*args,**kwds):
- CompilationCacheMixin.__init__(self,*args,**kwds)
+ def __init__(self, *args, **kwds):
+ CompilationCacheMixin.__init__(self, *args, **kwds)
- def _hash_file(self,fname):
+ def _hash_file(self, fname):
from hashlib import sha1
try:
- hash=sha1()
+ hash = sha1()
hash.update(self.build_lib.encode('utf-8'))
try:
if sys.version_info[0] >= 3:
import io
- f=io.open(fname,"rb")
+ f = io.open(fname, "rb")
else:
- f=open(fname)
+ f = open(fname)
- first_line=f.readline()
+ first_line = f.readline()
# ignore cython generation timestamp header
if "Generated by Cython" not in first_line.decode('utf-8'):
hash.update(first_line)
@@ -447,21 +456,21 @@ def _hash_file(self,fname):
except IOError:
return None
- def _hash_obj(self,ext):
+ def _hash_obj(self, ext):
from hashlib import sha1
sources = ext.sources
- if sources is None or \
- (not hasattr(sources,'__iter__') ) or \
- isinstance(sources,str) or \
- sys.version[0]==2 and isinstance(sources,unicode): #argh
+ if (sources is None or
+ (not hasattr(sources, '__iter__')) or
+ isinstance(sources, str) or
+ sys.version[0] == 2 and isinstance(sources, unicode)): # argh
return False
sources = list(sources) + ext.depends
- hash=sha1()
+ hash = sha1()
try:
for fname in sources:
- fhash=self._hash_file(fname)
+ fhash = self._hash_file(fname)
if fhash:
hash.update(fhash.encode('utf-8'))
except:
@@ -469,53 +478,54 @@ def _hash_obj(self,ext):
return hash.hexdigest()
-class CachingBuildExt(build_ext,CompilationCacheExtMixin):
- def __init__(self,*args,**kwds):
- CompilationCacheExtMixin.__init__(self,*args,**kwds)
- kwds.pop("cache_dir",None)
- build_ext.__init__(self,*args,**kwds)
- def build_extension(self, ext,*args,**kwds):
+class CachingBuildExt(build_ext, CompilationCacheExtMixin):
+ def __init__(self, *args, **kwds):
+ CompilationCacheExtMixin.__init__(self, *args, **kwds)
+ kwds.pop("cache_dir", None)
+ build_ext.__init__(self, *args, **kwds)
+
+ def build_extension(self, ext, *args, **kwds):
ext_path = self.get_ext_fullpath(ext.name)
- build_path = os.path.join(self.build_lib,os.path.basename(ext_path))
+ build_path = os.path.join(self.build_lib, os.path.basename(ext_path))
- hash=self._hash_obj(ext)
- if hash and self._copy_from_cache(hash,ext_path):
+ hash = self._hash_obj(ext)
+ if hash and self._copy_from_cache(hash, ext_path):
return
- build_ext.build_extension(self,ext,*args,**kwds)
+ build_ext.build_extension(self, ext, *args, **kwds)
- hash=self._hash_obj(ext)
+ hash = self._hash_obj(ext)
if os.path.exists(build_path):
- self._put_to_cache(hash,build_path) # build_ext
+ self._put_to_cache(hash, build_path) # build_ext
if os.path.exists(ext_path):
- self._put_to_cache(hash,ext_path) # develop
-
+ self._put_to_cache(hash, ext_path) # develop
def cython_sources(self, sources, extension):
import re
cplus = self.cython_cplus or getattr(extension, 'cython_cplus', 0) or \
- (extension.language and extension.language.lower() == 'c++')
+ (extension.language and extension.language.lower() == 'c++')
target_ext = '.c'
if cplus:
target_ext = '.cpp'
- for i,s in enumerate(sources):
- if not re.search("\.(pyx|pxi|pxd)$",s):
+ for i, s in enumerate(sources):
+ if not re.search("\.(pyx|pxi|pxd)$", s):
continue
- ext_dir=os.path.dirname(s)
- ext_basename=re.sub("\.[^\.]+$","",os.path.basename(s))
- ext_basename += target_ext
- target= os.path.join(ext_dir,ext_basename)
- hash=self._hash_file(s)
- sources[i]=target
- if hash and self._copy_from_cache(hash,target):
+ ext_dir = os.path.dirname(s)
+ ext_basename = re.sub("\.[^\.]+$", "", os.path.basename(s))
+ ext_basename += target_ext
+ target = os.path.join(ext_dir, ext_basename)
+ hash = self._hash_file(s)
+ sources[i] = target
+ if hash and self._copy_from_cache(hash, target):
continue
- build_ext.cython_sources(self,[s],extension)
- self._put_to_cache(hash,target)
+ build_ext.cython_sources(self, [s], extension)
+ self._put_to_cache(hash, target)
return sources
+
class CythonCommand(build_ext):
"""Custom distutils command subclassed from Cython.Distutils.build_ext
to compile pyx->c, and stop there. All this does is override the
@@ -523,14 +533,18 @@ class CythonCommand(build_ext):
def build_extension(self, ext):
pass
+
class DummyBuildSrc(Command):
""" numpy's build_src command interferes with Cython's build_ext.
"""
user_options = []
+
def initialize_options(self):
self.py_modules_dict = {}
+
def finalize_options(self):
pass
+
def run(self):
pass
@@ -541,18 +555,19 @@ def run(self):
if cython:
suffix = '.pyx'
cmdclass['build_ext'] = build_ext
- if BUILD_CACHE_DIR: # use the cache
+ if BUILD_CACHE_DIR: # use the cache
cmdclass['build_ext'] = CachingBuildExt
cmdclass['cython'] = CythonCommand
else:
suffix = '.c'
cmdclass['build_src'] = DummyBuildSrc
- cmdclass['build_ext'] = build_ext
+ cmdclass['build_ext'] = build_ext
lib_depends = ['reduce', 'inference', 'properties']
+
def srcpath(name=None, suffix='.pyx', subdir='src'):
- return pjoin('pandas', subdir, name+suffix)
+ return pjoin('pandas', subdir, name + suffix)
if suffix == '.pyx':
lib_depends = [srcpath(f, suffix='.pyx') for f in lib_depends]
@@ -563,8 +578,9 @@ def srcpath(name=None, suffix='.pyx', subdir='src'):
common_include = [np.get_include(), 'pandas/src/klib', 'pandas/src']
+
def pxd(name):
- return os.path.abspath(pjoin('pandas', name+'.pxd'))
+ return os.path.abspath(pjoin('pandas', name + '.pxd'))
lib_depends = lib_depends + ['pandas/src/numpy_helper.h',
@@ -681,23 +697,23 @@ def pxd(name):
'pandas.io.tests',
'pandas.stats.tests',
],
- package_data={'pandas.io' : ['tests/*.h5',
- 'tests/*.csv',
- 'tests/*.txt',
- 'tests/*.xls',
- 'tests/*.xlsx',
- 'tests/*.table'],
+ package_data={'pandas.io': ['tests/*.h5',
+ 'tests/*.csv',
+ 'tests/*.txt',
+ 'tests/*.xls',
+ 'tests/*.xlsx',
+ 'tests/*.table'],
'pandas.tools': ['tests/*.csv'],
- 'pandas.tests' : ['data/*.pickle',
- 'data/*.csv'],
- 'pandas.tseries.tests' : ['data/*.pickle',
- 'data/*.csv']
- },
+ 'pandas.tests': ['data/*.pickle',
+ 'data/*.csv'],
+ 'pandas.tseries.tests': ['data/*.pickle',
+ 'data/*.csv']
+ },
ext_modules=extensions,
maintainer_email=EMAIL,
description=DESCRIPTION,
license=LICENSE,
- cmdclass = cmdclass,
+ cmdclass=cmdclass,
url=URL,
download_url=DOWNLOAD_URL,
long_description=LONG_DESCRIPTION,
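The `CompilationCacheExtMixin` touched above hashes an extension's sources so an unchanged build artifact can be copied from cache instead of recompiled. A minimal sketch of that hashing step, taking `(name, bytes)` pairs instead of real files to stay self-contained (`hash_sources` is a hypothetical helper, not the setup.py API):

```python
from hashlib import sha1

def hash_sources(named_sources):
    # Fold every source's name and contents into one digest; any
    # change to any source produces a different key, a cache miss.
    h = sha1()
    for name, data in sorted(named_sources):
        h.update(name.encode("utf-8"))
        h.update(data)
    return h.hexdigest()

v1 = hash_sources([("a.pyx", b"x = 1"), ("b.pxd", b"cdef int y")])
v2 = hash_sources([("a.pyx", b"x = 1"), ("b.pxd", b"cdef int y")])
v3 = hash_sources([("a.pyx", b"x = 2"), ("b.pxd", b"cdef int y")])
print(v1 == v2, v1 == v3)  # -> True False
```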
diff --git a/vb_suite/groupby.py b/vb_suite/groupby.py
index 61bf7aa070e1f..cd14b3b5f383b 100644
--- a/vb_suite/groupby.py
+++ b/vb_suite/groupby.py
@@ -55,9 +55,10 @@ def f():
df = DataFrame(randn(1000, 1000))
"""
-groupby_frame_cython_many_columns = Benchmark('df.groupby(labels).sum()', setup,
- start_date=datetime(2011, 8, 1),
- logy=True)
+groupby_frame_cython_many_columns = Benchmark(
+ 'df.groupby(labels).sum()', setup,
+ start_date=datetime(2011, 8, 1),
+ logy=True)
#----------------------------------------------------------------------
# single key, long, integer key
@@ -183,7 +184,7 @@ def f():
start_date=datetime(2012, 5, 1))
groupby_last = Benchmark('data.groupby(labels).last()', setup,
- start_date=datetime(2012, 5, 1))
+ start_date=datetime(2012, 5, 1))
#----------------------------------------------------------------------
diff --git a/vb_suite/hdfstore_bench.py b/vb_suite/hdfstore_bench.py
index 23303f335af7e..8f66cc04a5ec9 100644
--- a/vb_suite/hdfstore_bench.py
+++ b/vb_suite/hdfstore_bench.py
@@ -1,7 +1,7 @@
from vbench.api import Benchmark
from datetime import datetime
-start_date = datetime(2012,7,1)
+start_date = datetime(2012, 7, 1)
common_setup = """from pandas_vb_common import *
import os
@@ -28,7 +28,7 @@ def remove(f):
store.put('df1',df)
"""
-read_store = Benchmark("store.get('df1')", setup1, cleanup = "store.close()",
+read_store = Benchmark("store.get('df1')", setup1, cleanup="store.close()",
start_date=start_date)
@@ -44,8 +44,9 @@ def remove(f):
store = HDFStore(f)
"""
-write_store = Benchmark("store.put('df2',df)", setup2, cleanup = "store.close()",
- start_date=start_date)
+write_store = Benchmark(
+ "store.put('df2',df)", setup2, cleanup="store.close()",
+ start_date=start_date)
#----------------------------------------------------------------------
# get from a store (mixed)
@@ -63,8 +64,9 @@ def remove(f):
store.put('df3',df)
"""
-read_store_mixed = Benchmark("store.get('df3')", setup3, cleanup = "store.close()",
- start_date=start_date)
+read_store_mixed = Benchmark(
+ "store.get('df3')", setup3, cleanup="store.close()",
+ start_date=start_date)
#----------------------------------------------------------------------
@@ -82,8 +84,9 @@ def remove(f):
store = HDFStore(f)
"""
-write_store_mixed = Benchmark("store.put('df4',df)", setup4, cleanup = "store.close()",
- start_date=start_date)
+write_store_mixed = Benchmark(
+ "store.put('df4',df)", setup4, cleanup="store.close()",
+ start_date=start_date)
#----------------------------------------------------------------------
# get from a table (mixed)
@@ -102,8 +105,9 @@ def remove(f):
store.append('df5',df)
"""
-read_store_table_mixed = Benchmark("store.select('df5')", setup5, cleanup = "store.close()",
- start_date=start_date)
+read_store_table_mixed = Benchmark(
+ "store.select('df5')", setup5, cleanup="store.close()",
+ start_date=start_date)
#----------------------------------------------------------------------
@@ -121,8 +125,9 @@ def remove(f):
store = HDFStore(f)
"""
-write_store_table_mixed = Benchmark("store.append('df6',df)", setup6, cleanup = "store.close()",
- start_date=start_date)
+write_store_table_mixed = Benchmark(
+ "store.append('df6',df)", setup6, cleanup="store.close()",
+ start_date=start_date)
#----------------------------------------------------------------------
# select from a table
@@ -138,8 +143,9 @@ def remove(f):
store.append('df7',df)
"""
-read_store_table = Benchmark("store.select('df7')", setup7, cleanup = "store.close()",
- start_date=start_date)
+read_store_table = Benchmark(
+ "store.select('df7')", setup7, cleanup="store.close()",
+ start_date=start_date)
#----------------------------------------------------------------------
@@ -154,8 +160,9 @@ def remove(f):
store = HDFStore(f)
"""
-write_store_table = Benchmark("store.append('df8',df)", setup8, cleanup = "store.close()",
- start_date=start_date)
+write_store_table = Benchmark(
+ "store.append('df8',df)", setup8, cleanup="store.close()",
+ start_date=start_date)
#----------------------------------------------------------------------
# get from a table (wide)
@@ -168,8 +175,9 @@ def remove(f):
store.append('df9',df)
"""
-read_store_table_wide = Benchmark("store.select('df9')", setup9, cleanup = "store.close()",
- start_date=start_date)
+read_store_table_wide = Benchmark(
+ "store.select('df9')", setup9, cleanup="store.close()",
+ start_date=start_date)
#----------------------------------------------------------------------
@@ -182,8 +190,9 @@ def remove(f):
store = HDFStore(f)
"""
-write_store_table_wide = Benchmark("store.append('df10',df)", setup10, cleanup = "store.close()",
- start_date=start_date)
+write_store_table_wide = Benchmark(
+ "store.append('df10',df)", setup10, cleanup="store.close()",
+ start_date=start_date)
#----------------------------------------------------------------------
# get from a table (wide) (indexed)
@@ -198,8 +207,9 @@ def remove(f):
store.create_table_index('df11')
"""
-query_store_table_wide = Benchmark("store.select('df11', [ ('index', '>', df.index[10000]), ('index', '<', df.index[15000]) ])", setup11, cleanup = "store.close()",
- start_date=start_date)
+query_store_table_wide = Benchmark(
+ "store.select('df11', [ ('index', '>', df.index[10000]), ('index', '<', df.index[15000]) ])", setup11, cleanup="store.close()",
+ start_date=start_date)
#----------------------------------------------------------------------
@@ -217,8 +227,9 @@ def remove(f):
store.create_table_index('df12')
"""
-query_store_table = Benchmark("store.select('df12', [ ('index', '>', df.index[10000]), ('index', '<', df.index[15000]) ])", setup12, cleanup = "store.close()",
- start_date=start_date)
+query_store_table = Benchmark(
+ "store.select('df12', [ ('index', '>', df.index[10000]), ('index', '<', df.index[15000]) ])", setup12, cleanup="store.close()",
+ start_date=start_date)
#----------------------------------------------------------------------
# select from a panel table
@@ -232,8 +243,9 @@ def remove(f):
store.append('p1',p)
"""
-read_store_table_panel = Benchmark("store.select('p1')", setup13, cleanup = "store.close()",
- start_date=start_date)
+read_store_table_panel = Benchmark(
+ "store.select('p1')", setup13, cleanup="store.close()",
+ start_date=start_date)
#----------------------------------------------------------------------
@@ -247,6 +259,6 @@ def remove(f):
store = HDFStore(f)
"""
-write_store_table_panel = Benchmark("store.append('p2',p)", setup14, cleanup = "store.close()",
- start_date=start_date)
-
+write_store_table_panel = Benchmark(
+ "store.append('p2',p)", setup14, cleanup="store.close()",
+ start_date=start_date)
diff --git a/vb_suite/indexing.py b/vb_suite/indexing.py
index c0f9c5a59e182..0c4898089a97f 100644
--- a/vb_suite/indexing.py
+++ b/vb_suite/indexing.py
@@ -42,7 +42,7 @@
"""
statement = "df[col][idx]"
bm_df_getitem = Benchmark(statement, setup,
- name='dataframe_getitem_scalar')
+ name='dataframe_getitem_scalar')
setup = common_setup + """
try:
@@ -59,7 +59,7 @@
"""
statement = "df[col][idx]"
bm_df_getitem2 = Benchmark(statement, setup,
- name='datamatrix_getitem_scalar')
+ name='datamatrix_getitem_scalar')
setup = common_setup + """
try:
@@ -104,9 +104,9 @@
midx = midx.take(np.random.permutation(np.arange(100000)))
"""
sort_level_zero = Benchmark("midx.sortlevel(0)", setup,
- start_date=datetime(2012,1,1))
+ start_date=datetime(2012, 1, 1))
sort_level_one = Benchmark("midx.sortlevel(1)", setup,
- start_date=datetime(2012,1,1))
+ start_date=datetime(2012, 1, 1))
#----------------------------------------------------------------------
# Panel subset selection
diff --git a/vb_suite/join_merge.py b/vb_suite/join_merge.py
index d031e78b05ece..0b158e4173a5c 100644
--- a/vb_suite/join_merge.py
+++ b/vb_suite/join_merge.py
@@ -86,10 +86,8 @@
# DataFrame joins on index
-
#----------------------------------------------------------------------
# Merges
-
setup = common_setup + """
N = 10000
diff --git a/vb_suite/make.py b/vb_suite/make.py
index b17a19030d971..5a8a8215db9a4 100755
--- a/vb_suite/make.py
+++ b/vb_suite/make.py
@@ -25,11 +25,13 @@
SPHINX_BUILD = 'sphinxbuild'
+
def upload():
'push a copy to the site'
os.system('cd build/html; rsync -avz . pandas@pandas.pydata.org'
':/usr/share/nginx/pandas/pandas-docs/vbench/ -essh')
+
def clean():
if os.path.exists('build'):
shutil.rmtree('build')
@@ -37,12 +39,14 @@ def clean():
if os.path.exists('source/generated'):
shutil.rmtree('source/generated')
+
def html():
check_build()
if os.system('sphinx-build -P -b html -d build/doctrees '
'source build/html'):
raise SystemExit("Building HTML failed.")
+
def check_build():
build_dirs = [
'build', 'build/doctrees', 'build/html',
@@ -54,10 +58,12 @@ def check_build():
except OSError:
pass
+
def all():
clean()
html()
+
def auto_update():
msg = ''
try:
@@ -69,6 +75,7 @@ def auto_update():
msg += str(inst) + '\n'
sendmail(msg)
+
def sendmail(err_msg=None):
from_name, to_name = _get_config()
@@ -98,6 +105,7 @@ def sendmail(err_msg=None):
finally:
server.close()
+
def _get_dir(subdir=None):
import getpass
USERNAME = getpass.getuser()
@@ -111,6 +119,7 @@ def _get_dir(subdir=None):
conf_dir = '%s%s' % (HOME, subdir)
return conf_dir
+
def _get_credentials():
tmp_dir = _get_dir()
cred = '%s/credentials' % tmp_dir
@@ -125,6 +134,7 @@ def _get_credentials():
return server, port, login, pwd
+
def _get_config():
tmp_dir = _get_dir()
with open('%s/addresses' % tmp_dir, 'r') as fh:
@@ -132,26 +142,26 @@ def _get_config():
return from_name, to_name
funcd = {
- 'html' : html,
- 'clean' : clean,
- 'upload' : upload,
- 'auto_update' : auto_update,
- 'all' : all,
- }
+ 'html': html,
+ 'clean': clean,
+ 'upload': upload,
+ 'auto_update': auto_update,
+ 'all': all,
+}
small_docs = False
# current_dir = os.getcwd()
# os.chdir(os.path.dirname(os.path.join(current_dir, __file__)))
-if len(sys.argv)>1:
+if len(sys.argv) > 1:
for arg in sys.argv[1:]:
func = funcd.get(arg)
if func is None:
- raise SystemExit('Do not know how to handle %s; valid args are %s'%(
- arg, funcd.keys()))
+ raise SystemExit('Do not know how to handle %s; valid args are %s' % (
+ arg, funcd.keys()))
func()
else:
small_docs = False
all()
-#os.chdir(current_dir)
+# os.chdir(current_dir)
diff --git a/vb_suite/measure_memory_consumption.py b/vb_suite/measure_memory_consumption.py
index cdc2fe0d4b1f1..bb73cf5da4302 100755
--- a/vb_suite/measure_memory_consumption.py
+++ b/vb_suite/measure_memory_consumption.py
@@ -8,6 +8,7 @@
long summary
"""
+
def main():
import shutil
import tempfile
@@ -21,27 +22,28 @@ def main():
from memory_profiler import memory_usage
- warnings.filterwarnings('ignore',category=FutureWarning)
+ warnings.filterwarnings('ignore', category=FutureWarning)
try:
- TMP_DIR = tempfile.mkdtemp()
- runner = BenchmarkRunner(benchmarks, REPO_PATH, REPO_PATH, BUILD, DB_PATH,
- TMP_DIR, PREPARE, always_clean=True,
- # run_option='eod', start_date=START_DATE,
- module_dependencies=dependencies)
+ TMP_DIR = tempfile.mkdtemp()
+ runner = BenchmarkRunner(
+ benchmarks, REPO_PATH, REPO_PATH, BUILD, DB_PATH,
+ TMP_DIR, PREPARE, always_clean=True,
+ # run_option='eod', start_date=START_DATE,
+ module_dependencies=dependencies)
results = {}
for b in runner.benchmarks:
- k=b.name
+ k = b.name
try:
- vs=memory_usage((b.run,))
+ vs = memory_usage((b.run,))
v = max(vs)
- #print(k, v)
- results[k]=v
+ # print(k, v)
+ results[k] = v
except Exception as e:
print("Exception caught in %s\n" % k)
print(str(e))
- s=Series(results)
+ s = Series(results)
s.sort()
print((s))
@@ -49,6 +51,5 @@ def main():
shutil.rmtree(TMP_DIR)
-
if __name__ == "__main__":
- main()
+ main()
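The loop in `measure_memory_consumption.py` above runs each benchmark under `memory_usage`, keeps the peak sample, and catches exceptions so one failing benchmark does not abort the sweep. A stdlib sketch of that structure (`collect_peaks` and the callables are hypothetical stand-ins for the vbench/memory_profiler pieces):

```python
def collect_peaks(benchmarks):
    # benchmarks maps name -> callable returning a list of samples.
    results = {}
    for name, run in benchmarks.items():
        try:
            results[name] = max(run())
        except Exception:
            continue  # a failing benchmark should not kill the sweep
    return results

def boom():
    raise RuntimeError("broken benchmark")

peaks = collect_peaks({"ok": lambda: [1.0, 3.5, 2.0], "bad": boom})
print(peaks)  # -> {'ok': 3.5}
```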
diff --git a/vb_suite/perf_HEAD.py b/vb_suite/perf_HEAD.py
index 4f5f09f0df592..aa647866925b7 100755
--- a/vb_suite/perf_HEAD.py
+++ b/vb_suite/perf_HEAD.py
@@ -10,19 +10,21 @@
import urllib2
import json
-import pandas as pd
+import pandas as pd
WEB_TIMEOUT = 10
+
def get_travis_data():
"""figure out what worker we're running on, and the number of jobs it's running
"""
import os
- jobid=os.environ.get("TRAVIS_JOB_ID")
+ jobid = os.environ.get("TRAVIS_JOB_ID")
if not jobid:
return None, None
- workers=json.loads(urllib2.urlopen("https://api.travis-ci.org/workers/").read())
+ workers = json.loads(
+ urllib2.urlopen("https://api.travis-ci.org/workers/").read())
host = njobs = None
for item in workers:
@@ -31,10 +33,12 @@ def get_travis_data():
if id and str(id) == str(jobid):
break
if host:
- njobs = len([x for x in workers if host in x['host'] and x['payload']])
+ njobs = len(
+ [x for x in workers if host in x['host'] and x['payload']])
return host, njobs
+
def get_utcdatetime():
try:
from datetime import datetime
@@ -42,26 +46,28 @@ def get_utcdatetime():
except:
pass
-def dump_as_gist(data,desc="The Commit",njobs=None):
+
+def dump_as_gist(data, desc="The Commit", njobs=None):
host, njobs2 = get_travis_data()[:2]
- if njobs: # be slightly more reliable
- njobs= max(njobs,njobs2)
+ if njobs: # be slightly more reliable
+ njobs = max(njobs, njobs2)
- content=dict(version="0.1.1",
- timings=data,
- datetime=get_utcdatetime() , # added in 0.1.1
- hostname=host , # added in 0.1.1
- njobs=njobs # added in 0.1.1, a measure of load on the travis box
- )
+ content = dict(version="0.1.1",
+ timings=data,
+ datetime=get_utcdatetime(), # added in 0.1.1
+ hostname=host, # added in 0.1.1
+ njobs=njobs # added in 0.1.1, a measure of load on the travis box
+ )
- payload=dict(description=desc,
- public=True,
- files={'results.json': dict(content=json.dumps(content))})
+ payload = dict(description=desc,
+ public=True,
+ files={'results.json': dict(content=json.dumps(content))})
try:
- r=urllib2.urlopen("https://api.github.com/gists", json.dumps(payload),timeout=WEB_TIMEOUT)
- if 200 <=r.getcode() <300:
- print("\n\n"+ "-"*80)
+ r = urllib2.urlopen("https://api.github.com/gists",
+ json.dumps(payload), timeout=WEB_TIMEOUT)
+ if 200 <= r.getcode() < 300:
+ print("\n\n" + "-" * 80)
gist = json.loads(r.read())
file_raw_url = gist['files'].items()[0][1]['raw_url']
@@ -69,48 +75,49 @@ def dump_as_gist(data,desc="The Commit",njobs=None):
print("[vbench-html-url] %s" % gist['html_url'])
print("[vbench-api-url] %s" % gist['url'])
- print("-"*80 +"\n\n")
+ print("-" * 80 + "\n\n")
else:
print("api.github.com returned status %d" % r.getcode())
except:
print("Error occured while dumping to gist")
+
def main():
import warnings
from suite import benchmarks
- exit_code=0
- warnings.filterwarnings('ignore',category=FutureWarning)
+ exit_code = 0
+ warnings.filterwarnings('ignore', category=FutureWarning)
host, njobs = get_travis_data()[:2]
- results=[]
+ results = []
for b in benchmarks:
try:
- d=b.run()
+ d = b.run()
d.update(dict(name=b.name))
results.append(d)
- msg="{name:<40}: {timing:> 10.4f} [ms]"
+ msg = "{name:<40}: {timing:> 10.4f} [ms]"
print(msg.format(name=results[-1]['name'],
timing=results[-1]['timing']))
except Exception as e:
- exit_code=1
+ exit_code = 1
if (type(e) == KeyboardInterrupt or
- 'KeyboardInterrupt' in str(d)) :
+ 'KeyboardInterrupt' in str(d)):
raise KeyboardInterrupt()
- msg="{name:<40}: ERROR:\n<-------"
+ msg = "{name:<40}: ERROR:\n<-------"
print(msg.format(name=results[-1]['name']))
- if isinstance(d,dict):
+ if isinstance(d, dict):
if d['succeeded']:
print("\nException:\n%s\n" % str(e))
else:
- for k,v in sorted(d.iteritems()):
- print("{k}: {v}".format(k=k,v=v))
+ for k, v in sorted(d.iteritems()):
+ print("{k}: {v}".format(k=k, v=v))
print("------->\n")
- dump_as_gist(results,"testing",njobs=njobs)
+ dump_as_gist(results, "testing", njobs=njobs)
return exit_code
@@ -122,102 +129,111 @@ def main():
#####################################################
# functions for retrieving and processing the results
+
def get_vbench_log(build_url):
- r=urllib2.urlopen(build_url)
+ r = urllib2.urlopen(build_url)
if not (200 <= r.getcode() < 300):
return
- s=json.loads(r.read())
- s=[x for x in s['matrix'] if "VBENCH" in ((x.get('config',{}) or {}).get('env',{}) or {})]
- #s=[x for x in s['matrix']]
+ s = json.loads(r.read())
+ s = [x for x in s['matrix'] if "VBENCH" in ((x.get('config', {})
+ or {}).get('env', {}) or {})]
+ # s=[x for x in s['matrix']]
if not s:
return
- id=s[0]['id'] # should be just one for now
- r2=urllib2.urlopen("https://api.travis-ci.org/jobs/%s" % id)
+ id = s[0]['id'] # should be just one for now
+ r2 = urllib2.urlopen("https://api.travis-ci.org/jobs/%s" % id)
if (not 200 <= r.getcode() < 300):
return
- s2=json.loads(r2.read())
+ s2 = json.loads(r2.read())
return s2.get('log')
+
def get_results_raw_url(build):
"Taks a Travis a build number, retrieves the build log and extracts the gist url"
import re
- log=get_vbench_log("https://api.travis-ci.org/builds/%s" % build)
+ log = get_vbench_log("https://api.travis-ci.org/builds/%s" % build)
if not log:
return
- l=[x.strip() for x in log.split("\n") if re.match(".vbench-gist-raw_url",x)]
+ l = [x.strip(
+ ) for x in log.split("\n") if re.match(".vbench-gist-raw_url", x)]
if l:
- s=l[0]
- m = re.search("(https://[^\s]+)",s)
+ s = l[0]
+ m = re.search("(https://[^\s]+)", s)
if m:
return m.group(0)
+
def convert_json_to_df(results_url):
"""retrieve json results file from url and return df
df contains timings for all successful vbenchmarks
"""
- res=json.loads(urllib2.urlopen(results_url).read())
- timings=res.get("timings")
+ res = json.loads(urllib2.urlopen(results_url).read())
+ timings = res.get("timings")
if not timings:
return
- res=[x for x in timings if x.get('succeeded')]
+ res = [x for x in timings if x.get('succeeded')]
df = pd.DataFrame(res)
df = df.set_index("name")
return df
+
def get_build_results(build):
"Returns a df with the results of the VBENCH job associated with the travis build"
- r_url=get_results_raw_url(build)
+ r_url = get_results_raw_url(build)
if not r_url:
return
return convert_json_to_df(r_url)
-def get_all_results(repo_id=53976): # travis pydata/pandas id
- """Fetches the VBENCH results for all travis builds, and returns a list of result df
-
- unsuccesful individual vbenches are dropped.
- """
- from collections import OrderedDict
- def get_results_from_builds(builds):
- dfs=OrderedDict()
- for build in builds:
- build_id = build['id']
- build_number = build['number']
- print(build_number)
- res = get_build_results(build_id)
- if res is not None:
- dfs[build_number]=res
- return dfs
-
- base_url='https://api.travis-ci.org/builds?url=%2Fbuilds&repository_id={repo_id}'
- url=base_url.format(repo_id=repo_id)
- url_after=url+'&after_number={after}'
- dfs=OrderedDict()
-
- while True:
- r=urllib2.urlopen(url)
- if not (200 <= r.getcode() < 300):
- break
- builds=json.loads(r.read())
- res = get_results_from_builds(builds)
- if not res:
- break
- last_build_number= min(res.keys())
- dfs.update(res)
- url=url_after.format(after=last_build_number)
-
- return dfs
+
+def get_all_results(repo_id=53976): # travis pydata/pandas id
+ """Fetches the VBENCH results for all travis builds, and returns a list of result df
+
+ unsuccesful individual vbenches are dropped.
+ """
+ from collections import OrderedDict
+
+ def get_results_from_builds(builds):
+ dfs = OrderedDict()
+ for build in builds:
+ build_id = build['id']
+ build_number = build['number']
+ print(build_number)
+ res = get_build_results(build_id)
+ if res is not None:
+ dfs[build_number] = res
+ return dfs
+
+ base_url = 'https://api.travis-ci.org/builds?url=%2Fbuilds&repository_id={repo_id}'
+ url = base_url.format(repo_id=repo_id)
+ url_after = url + '&after_number={after}'
+ dfs = OrderedDict()
+
+ while True:
+ r = urllib2.urlopen(url)
+ if not (200 <= r.getcode() < 300):
+ break
+ builds = json.loads(r.read())
+ res = get_results_from_builds(builds)
+ if not res:
+ break
+ last_build_number = min(res.keys())
+ dfs.update(res)
+ url = url_after.format(after=last_build_number)
+
+ return dfs
+
def get_all_results_joined(repo_id=53976):
- def mk_unique(df):
- for dupe in df.index.get_duplicates():
- df=df.ix[df.index != dupe]
- return df
- dfs = get_all_results(repo_id)
- for k in dfs:
- dfs[k]=mk_unique(dfs[k])
- ss=[pd.Series(v.timing,name=k) for k,v in dfs.iteritems()]
- results = pd.concat(reversed(ss),1)
- return results
+ def mk_unique(df):
+ for dupe in df.index.get_duplicates():
+ df = df.ix[df.index != dupe]
+ return df
+ dfs = get_all_results(repo_id)
+ for k in dfs:
+ dfs[k] = mk_unique(dfs[k])
+ ss = [pd.Series(v.timing, name=k) for k, v in dfs.iteritems()]
+ results = pd.concat(reversed(ss), 1)
+ return results
diff --git a/vb_suite/reindex.py b/vb_suite/reindex.py
index 2523462eb4e4b..2f675636ee928 100644
--- a/vb_suite/reindex.py
+++ b/vb_suite/reindex.py
@@ -136,7 +136,7 @@ def backfill():
statement = "df.drop_duplicates(['key1', 'key2'], inplace=True)"
frame_drop_dup_inplace = Benchmark(statement, setup,
- start_date=datetime(2012, 5, 16))
+ start_date=datetime(2012, 5, 16))
lib_fast_zip = Benchmark('lib.fast_zip(col_array_list)', setup,
name='lib_fast_zip',
@@ -154,7 +154,7 @@ def backfill():
statement2 = "df.drop_duplicates(['key1', 'key2'], inplace=True)"
frame_drop_dup_na_inplace = Benchmark(statement2, setup,
- start_date=datetime(2012, 5, 16))
+ start_date=datetime(2012, 5, 16))
setup = common_setup + """
s = Series(np.random.randint(0, 1000, size=10000))
diff --git a/vb_suite/run_suite.py b/vb_suite/run_suite.py
index 0c03d17607f4e..43bf24faae43a 100755
--- a/vb_suite/run_suite.py
+++ b/vb_suite/run_suite.py
@@ -2,6 +2,7 @@
from vbench.api import BenchmarkRunner
from suite import *
+
def run_process():
runner = BenchmarkRunner(benchmarks, REPO_PATH, REPO_URL,
BUILD, DB_PATH, TMP_DIR, PREPARE,
diff --git a/vb_suite/source/conf.py b/vb_suite/source/conf.py
index 35a89ba8e1de6..d83448fd97d09 100644
--- a/vb_suite/source/conf.py
+++ b/vb_suite/source/conf.py
@@ -10,12 +10,13 @@
# All configuration values have a default; values that are commented out
# serve to show the default.
-import sys, os
+import sys
+import os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
-#sys.path.append(os.path.abspath('.'))
+# sys.path.append(os.path.abspath('.'))
sys.path.insert(0, os.path.abspath('../sphinxext'))
sys.path.extend([
@@ -27,7 +28,7 @@
])
-# -- General configuration -----------------------------------------------------
+# -- General configuration -----------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones. sphinxext.
@@ -42,7 +43,7 @@
source_suffix = '.rst'
# The encoding of source files.
-#source_encoding = 'utf-8'
+# source_encoding = 'utf-8'
# The master toctree document.
master_doc = 'index'
@@ -69,43 +70,43 @@
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
-#language = None
+# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
-#today = ''
+# today = ''
# Else, today_fmt is used as the format for a strftime call.
-#today_fmt = '%B %d, %Y'
+# today_fmt = '%B %d, %Y'
# List of documents that shouldn't be included in the build.
-#unused_docs = []
+# unused_docs = []
# List of directories, relative to source directory, that shouldn't be searched
# for source files.
exclude_trees = []
# The reST default role (used for this markup: `text`) to use for all documents.
-#default_role = None
+# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
-#add_function_parentheses = True
+# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
-#add_module_names = True
+# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
-#show_authors = False
+# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
-#modindex_common_prefix = []
+# modindex_common_prefix = []
-# -- Options for HTML output ---------------------------------------------------
+# -- Options for HTML output ---------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
@@ -114,12 +115,12 @@
# The style sheet to use for HTML and HTML Help pages. A file of that name
# must exist either in Sphinx' static/ path, or in one of the custom paths
# given in html_static_path.
-#html_style = 'statsmodels.css'
+# html_style = 'statsmodels.css'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
-#html_theme_options = {}
+# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = ['themes']
@@ -129,16 +130,16 @@
html_title = 'Vbench performance benchmarks for pandas'
# A shorter title for the navigation bar. Default is the same as html_title.
-#html_short_title = None
+# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
-#html_logo = None
+# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
-#html_favicon = None
+# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
@@ -147,75 +148,75 @@
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
-#html_last_updated_fmt = '%b %d, %Y'
+# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
-#html_use_smartypants = True
+# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
-#html_sidebars = {}
+# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
-#html_additional_pages = {}
+# html_additional_pages = {}
# If false, no module index is generated.
html_use_modindex = True
# If false, no index is generated.
-#html_use_index = True
+# html_use_index = True
# If true, the index is split into individual pages for each letter.
-#html_split_index = False
+# html_split_index = False
# If true, links to the reST sources are added to the pages.
-#html_show_sourcelink = True
+# html_show_sourcelink = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
-#html_use_opensearch = ''
+# html_use_opensearch = ''
# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
-#html_file_suffix = ''
+# html_file_suffix = ''
# Output file base name for HTML help builder.
htmlhelp_basename = 'performance'
-# -- Options for LaTeX output --------------------------------------------------
+# -- Options for LaTeX output --------------------------------------------
# The paper size ('letter' or 'a4').
-#latex_paper_size = 'letter'
+# latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
-#latex_font_size = '10pt'
+# latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
- ('index', 'performance.tex',
- u'pandas vbench Performance Benchmarks',
- u'Wes McKinney', 'manual'),
+ ('index', 'performance.tex',
+ u'pandas vbench Performance Benchmarks',
+ u'Wes McKinney', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
-#latex_logo = None
+# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
-#latex_use_parts = False
+# latex_use_parts = False
# Additional stuff for the LaTeX preamble.
-#latex_preamble = ''
+# latex_preamble = ''
# Documents to append as an appendix to all manuals.
-#latex_appendices = []
+# latex_appendices = []
# If false, no module index is generated.
-#latex_use_modindex = True
+# latex_use_modindex = True
# Example configuration for intersphinx: refer to the Python standard library.
diff --git a/vb_suite/sparse.py b/vb_suite/sparse.py
index 24f70d284956b..bfee959ab982f 100644
--- a/vb_suite/sparse.py
+++ b/vb_suite/sparse.py
@@ -26,7 +26,7 @@
stmt = "SparseDataFrame(series)"
bm_sparse1 = Benchmark(stmt, setup, name="sparse_series_to_frame",
- start_date=datetime(2011, 6, 1))
+ start_date=datetime(2011, 6, 1))
setup = common_setup + """
diff --git a/vb_suite/suite.py b/vb_suite/suite.py
index e44eda802dc71..380ca5e5fb3b6 100644
--- a/vb_suite/suite.py
+++ b/vb_suite/suite.py
@@ -57,7 +57,7 @@
DB_PATH = config.get('setup', 'db_path')
TMP_DIR = config.get('setup', 'tmp_dir')
except:
- REPO_PATH = os.path.abspath(os.path.join(os.path.dirname(__file__),"../"))
+ REPO_PATH = os.path.abspath(os.path.join(os.path.dirname(__file__), "../"))
REPO_URL = 'git@github.com:pydata/pandas.git'
DB_PATH = os.path.join(REPO_PATH, 'vb_suite/benchmarks.db')
TMP_DIR = os.path.join(HOME, 'tmp/vb_pandas')
@@ -78,7 +78,8 @@
# HACK!
-#timespan = [datetime(2011, 1, 1), datetime(2012, 1, 1)]
+# timespan = [datetime(2011, 1, 1), datetime(2012, 1, 1)]
+
def generate_rst_files(benchmarks):
import matplotlib as mpl
diff --git a/vb_suite/test.py b/vb_suite/test.py
index b565ea37fde9f..da30c3e1a5f76 100644
--- a/vb_suite/test.py
+++ b/vb_suite/test.py
@@ -22,8 +22,10 @@
bmk = 'e0e651a8e9fbf0270ab68137f8b9df5f'
bmk = '96bda4b9a60e17acf92a243580f2a0c3'
+
def get_results(bmk):
- results = con.execute("select * from results where checksum='%s'" % bmk).fetchall()
+ results = con.execute(
+ "select * from results where checksum='%s'" % bmk).fetchall()
x = Series(dict((t[1], t[3]) for t in results))
x.index = x.index.map(repo.timestamps.get)
x = x.sort_index()
@@ -31,6 +33,7 @@ def get_results(bmk):
x = get_results(bmk)
+
def graph1():
dm_getitem = get_results('459225186023853494bc345fd180f395')
dm_getvalue = get_results('c22ca82e0cfba8dc42595103113c7da3')
@@ -44,6 +47,7 @@ def graph1():
plt.ylabel('ms')
plt.legend(loc='best')
+
def graph2():
bm = get_results('96bda4b9a60e17acf92a243580f2a0c3')
plt.figure()
@@ -61,4 +65,3 @@ def graph2():
plt.xlim([bm.dropna().index[0] - datetools.MonthEnd(),
bm.dropna().index[-1] + datetools.MonthEnd()])
plt.ylabel('ms')
-
diff --git a/vb_suite/test_perf.py b/vb_suite/test_perf.py
index 207a4eb8a4db5..c993e62de450b 100755
--- a/vb_suite/test_perf.py
+++ b/vb_suite/test_perf.py
@@ -33,17 +33,17 @@
import time
DEFAULT_MIN_DURATION = 0.01
-BASELINE_COMMIT = '2149c50' # 0.9.1 + regression fix + vb fixes # TODO: detect upstream/master
+BASELINE_COMMIT = '2149c50' # 0.9.1 + regression fix + vb fixes # TODO: detect upstream/master
parser = argparse.ArgumentParser(description='Use vbench to generate a report comparing performance between two commits.')
parser.add_argument('-a', '--auto',
help='Execute a run using the defaults for the base and target commits.',
action='store_true',
default=False)
-parser.add_argument('-b','--base-commit',
+parser.add_argument('-b', '--base-commit',
help='The commit serving as performance baseline (default: %s).' % BASELINE_COMMIT,
type=str)
-parser.add_argument('-t','--target-commit',
+parser.add_argument('-t', '--target-commit',
help='The commit to compare against the baseline (default: HEAD).',
type=str)
parser.add_argument('-m', '--min-duration',
@@ -55,7 +55,8 @@
dest='log_file',
help='path of file in which to save the report (default: vb_suite.log).')
-def get_results_df(db,rev):
+
+def get_results_df(db, rev):
from pandas import DataFrame
"""Takes a git commit hash and returns a Dataframe of benchmark results
"""
@@ -68,8 +69,10 @@ def get_results_df(db,rev):
results = results.join(bench['name'], on='checksum').set_index("checksum")
return results
+
def prprint(s):
- print("*** %s"%s)
+ print("*** %s" % s)
+
def main():
from pandas import DataFrame
@@ -86,77 +89,85 @@ def main():
args.target_commit = args.target_commit[:7]
if not args.log_file:
- args.log_file = os.path.abspath(os.path.join(REPO_PATH, 'vb_suite.log'))
+ args.log_file = os.path.abspath(
+ os.path.join(REPO_PATH, 'vb_suite.log'))
- TMP_DIR = tempfile.mkdtemp()
+ TMP_DIR = tempfile.mkdtemp()
prprint("TMP_DIR = %s" % TMP_DIR)
prprint("LOG_FILE = %s\n" % args.log_file)
try:
logfile = open(args.log_file, 'w')
- prprint( "Opening DB at '%s'...\n" % DB_PATH)
+ prprint("Opening DB at '%s'...\n" % DB_PATH)
db = BenchmarkDB(DB_PATH)
prprint("Initializing Runner...")
- runner = BenchmarkRunner(benchmarks, REPO_PATH, REPO_PATH, BUILD, DB_PATH,
- TMP_DIR, PREPARE, always_clean=True,
- # run_option='eod', start_date=START_DATE,
- module_dependencies=dependencies)
+ runner = BenchmarkRunner(
+ benchmarks, REPO_PATH, REPO_PATH, BUILD, DB_PATH,
+ TMP_DIR, PREPARE, always_clean=True,
+ # run_option='eod', start_date=START_DATE,
+ module_dependencies=dependencies)
- repo = runner.repo #(steal the parsed git repo used by runner)
+ repo = runner.repo # (steal the parsed git repo used by runner)
# ARGH. reparse the repo, without discarding any commits,
# then overwrite the previous parse results
- #prprint ("Slaughtering kittens..." )
+ # prprint ("Slaughtering kittens..." )
(repo.shas, repo.messages,
repo.timestamps, repo.authors) = _parse_commit_log(REPO_PATH)
h_head = args.target_commit or repo.shas[-1]
h_baseline = args.base_commit
- prprint('Target [%s] : %s\n' % (h_head, repo.messages.get(h_head,"")))
- prprint('Baseline [%s] : %s\n' % (h_baseline,repo.messages.get(h_baseline,"")))
+ prprint('Target [%s] : %s\n' % (h_head, repo.messages.get(h_head, "")))
+ prprint('Baseline [%s] : %s\n' % (h_baseline,
+ repo.messages.get(h_baseline, "")))
- prprint ("removing any previous measurements for the commits." )
+ prprint("removing any previous measurements for the commits.")
db.delete_rev_results(h_baseline)
db.delete_rev_results(h_head)
# TODO: we could skip this, but we need to make sure all
# results are in the DB, which is a little tricky with
# start dates and so on.
- prprint( "Running benchmarks for baseline [%s]" % h_baseline)
+ prprint("Running benchmarks for baseline [%s]" % h_baseline)
runner._run_and_write_results(h_baseline)
- prprint ("Running benchmarks for target [%s]" % h_head)
+ prprint("Running benchmarks for target [%s]" % h_head)
runner._run_and_write_results(h_head)
- prprint( 'Processing results...')
+ prprint('Processing results...')
- head_res = get_results_df(db,h_head)
- baseline_res = get_results_df(db,h_baseline)
- ratio = head_res['timing']/baseline_res['timing']
+ head_res = get_results_df(db, h_head)
+ baseline_res = get_results_df(db, h_baseline)
+ ratio = head_res['timing'] / baseline_res['timing']
totals = DataFrame(dict(t_head=head_res['timing'],
t_baseline=baseline_res['timing'],
ratio=ratio,
- name=baseline_res.name),columns=["t_head","t_baseline","ratio","name"])
- totals = totals.ix[totals.t_head > args.min_duration] # ignore below threshold
- totals = totals.dropna().sort("ratio").set_index('name') # sort in ascending order
+ name=baseline_res.name), columns=["t_head", "t_baseline", "ratio", "name"])
+ totals = totals.ix[totals.t_head > args.min_duration]
+ # ignore below threshold
+ totals = totals.dropna(
+ ).sort("ratio").set_index('name') # sort in ascending order
s = "\n\nResults:\n"
- s += totals.to_string(float_format=lambda x: "{:4.4f}".format(x).rjust(10))
+ s += totals.to_string(
+ float_format=lambda x: "{:4.4f}".format(x).rjust(10))
s += "\n\n"
s += "Columns: test_name | target_duration [ms] | baseline_duration [ms] | ratio\n\n"
s += "- a Ratio of 1.30 means the target commit is 30% slower then the baseline.\n\n"
- s += 'Target [%s] : %s\n' % (h_head, repo.messages.get(h_head,""))
- s += 'Baseline [%s] : %s\n\n' % (h_baseline,repo.messages.get(h_baseline,""))
+ s += 'Target [%s] : %s\n' % (h_head, repo.messages.get(h_head, ""))
+ s += 'Baseline [%s] : %s\n\n' % (
+ h_baseline, repo.messages.get(h_baseline, ""))
logfile.write(s)
logfile.close()
- prprint(s )
- prprint("Results were also written to the logfile at '%s'\n" % args.log_file)
+ prprint(s)
+ prprint("Results were also written to the logfile at '%s'\n" %
+ args.log_file)
finally:
# print("Disposing of TMP_DIR: %s" % TMP_DIR)
@@ -172,7 +183,7 @@ def _parse_commit_log(repo_path):
from pandas import Series
git_cmd = 'git --git-dir=%s/.git --work-tree=%s ' % (repo_path, repo_path)
githist = git_cmd + ('log --graph --pretty=format:'
- '\"::%h::%cd::%s::%an\" > githist.txt')
+ '\"::%h::%cd::%s::%an\" > githist.txt')
os.system(githist)
githist = open('githist.txt').read()
os.remove('githist.txt')
@@ -182,7 +193,7 @@ def _parse_commit_log(repo_path):
messages = []
authors = []
for line in githist.split('\n'):
- if '*' not in line.split("::")[0]: # skip non-commit lines
+ if '*' not in line.split("::")[0]: # skip non-commit lines
continue
_, sha, stamp, message, author = line.split('::', 4)
diff --git a/vb_suite/timeseries.py b/vb_suite/timeseries.py
index 1727db1285cd1..202aa5e4d5e61 100644
--- a/vb_suite/timeseries.py
+++ b/vb_suite/timeseries.py
@@ -42,13 +42,15 @@ def date_range(start=None, end=None, periods=None, freq=None):
"""
-timeseries_1min_5min_ohlc = Benchmark("ts[:10000].resample('5min', how='ohlc')",
- common_setup,
- start_date=datetime(2012, 5, 1))
+timeseries_1min_5min_ohlc = Benchmark(
+ "ts[:10000].resample('5min', how='ohlc')",
+ common_setup,
+ start_date=datetime(2012, 5, 1))
-timeseries_1min_5min_mean = Benchmark("ts[:10000].resample('5min', how='mean')",
- common_setup,
- start_date=datetime(2012, 5, 1))
+timeseries_1min_5min_mean = Benchmark(
+ "ts[:10000].resample('5min', how='mean')",
+ common_setup,
+ start_date=datetime(2012, 5, 1))
#----------------------------------------------------------------------
# Irregular alignment
@@ -92,7 +94,7 @@ def date_range(start=None, end=None, periods=None, freq=None):
dates = date_range('1/1/1990', periods=N * 10, freq='5s')
"""
timeseries_asof_single = Benchmark('ts.asof(dates[0])', setup,
- start_date=datetime(2012, 4, 27))
+ start_date=datetime(2012, 4, 27))
timeseries_asof = Benchmark('ts.asof(dates)', setup,
start_date=datetime(2012, 4, 27))
@@ -187,7 +189,7 @@ def date_range(start=None, end=None, periods=None, freq=None):
"""
dti_append_tz = \
- Benchmark('s1.append(slst)', setup, start_date=datetime(2012, 9 ,1))
+ Benchmark('s1.append(slst)', setup, start_date=datetime(2012, 9, 1))
setup = common_setup + """
rng = date_range('1/1/2000', periods=100000, freq='H')
@@ -195,7 +197,7 @@ def date_range(start=None, end=None, periods=None, freq=None):
"""
dti_reset_index = \
- Benchmark('df.reset_index()', setup, start_date=datetime(2012,9,1))
+ Benchmark('df.reset_index()', setup, start_date=datetime(2012, 9, 1))
setup = common_setup + """
rng = date_range('1/1/2000', periods=100000, freq='H')
@@ -203,7 +205,7 @@ def date_range(start=None, end=None, periods=None, freq=None):
"""
dti_reset_index_tz = \
- Benchmark('df.reset_index()', setup, start_date=datetime(2012,9,1))
+ Benchmark('df.reset_index()', setup, start_date=datetime(2012, 9, 1))
setup = common_setup + """
rng = date_range('1/1/2000', periods=10000, freq='T')
| Used [autopep8](https://github.com/hhatto/autopep8) to make all of pandas' .py files more PEP8 compliant (excluding 230 `E501 line too long` violations (over 80 chars), and 6 changes that had to be reverted to fix test failures - in the second commit). See statistics in my comment.
I iterated over every .py file in the package using [autopep8](https://github.com/hhatto/autopep8), then inspected the output of [pep8](https://github.com/jcrocholl/pep8) to drill down into any compliance fixes that might change the meaning of the code, and considered those individually.
I haven't included a unit test for PEP8 compliance using pep8.
For me this passes all the same tests as the master branch.
_Since this affects such a large portion of the code it may not be reasonable to pull; however, it is certainly a proof of concept._
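The per-file sweep described above can be sketched roughly as follows (a minimal dry-run illustration, not the actual script used; the flag set and the `E501` exclusion mirror the description, and the returned command strings could be handed to `subprocess.check_call` to rewrite files in place):

```python
import os


def collect_py_files(root):
    """Yield the path of every .py file under root (the set autopep8 was run on)."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".py"):
                yield os.path.join(dirpath, name)


def autopep8_commands(root, ignore="E501"):
    """Build the (hypothetical) autopep8 invocation for each file.

    This is a dry run: it only constructs the command strings; swap them
    into subprocess.check_call to actually rewrite the files in place.
    """
    return ["autopep8 --in-place --ignore=%s %s" % (ignore, path)
            for path in collect_py_files(root)]
```

Running `pep8` afterwards on the same file list is what surfaces the residual violations to review by hand.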
| https://api.github.com/repos/pandas-dev/pandas/pulls/2632 | 2013-01-03T22:39:00Z | 2013-01-05T20:31:20Z | 2013-01-05T20:31:20Z | 2014-06-12T14:19:52Z |
BUG: Series op with datetime64 values misbehaves in numpy 1.6 #2629 | diff --git a/pandas/core/series.py b/pandas/core/series.py
index 7dd9a84199ea8..e75471a34746a 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -42,6 +42,7 @@
_np_version = np.version.short_version
_np_version_under1p6 = LooseVersion(_np_version) < '1.6'
+_np_version_under1p7 = LooseVersion(_np_version) < '1.7'
_SHOW_WARNINGS = True
@@ -71,19 +72,30 @@ def na_op(x, y):
def wrapper(self, other):
from pandas.core.frame import DataFrame
+ wrap_results = lambda x: x
+
+ lvalues, rvalues = self, other
+
+ if (com.is_datetime64_dtype(self) and
+ com.is_datetime64_dtype(other)):
+ lvalues = lvalues.view('i8')
+ rvalues = rvalues.view('i8')
+
+ wrap_results = lambda rs: rs.astype('timedelta64[ns]')
+
+ if isinstance(rvalues, Series):
+ lvalues = lvalues.values
+ rvalues = rvalues.values
+
- if isinstance(other, Series):
if self.index.equals(other.index):
name = _maybe_match_name(self, other)
- return Series(na_op(self.values, other.values),
+ return Series(wrap_results(na_op(lvalues, rvalues)),
index=self.index, name=name)
join_idx, lidx, ridx = self.index.join(other.index, how='outer',
return_indexers=True)
- lvalues = self.values
- rvalues = other.values
-
if lidx is not None:
lvalues = com.take_1d(lvalues, lidx)
@@ -98,7 +110,7 @@ def wrapper(self, other):
return NotImplemented
else:
# scalars
- return Series(na_op(self.values, other),
+ return Series(na_op(lvalues.values, rvalues),
index=self.index, name=self.name)
return wrapper
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index f0112e1eeea18..91fdc244b077f 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -1543,6 +1543,13 @@ def test_operators_empty_int_corner(self):
# it works!
_ = s1 * s2
+ def test_operators_datetime64(self):
+ v1 = date_range('2012-1-1', periods=3, freq='D')
+ v2 = date_range('2012-1-2', periods=3, freq='D')
+ rs = Series(v2) - Series(v1)
+ xp = Series(1e9 * 3600 * 24, rs.index).astype('timedelta64[ns]')
+ assert_series_equal(rs, xp)
+
# NumPy limitiation =(
# def test_logical_range_select(self):
| Converts to i8 and then back to timedelta64[ns] when both operands are datetime64.
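The round-trip can be sketched directly in numpy (a simplified illustration of what the wrapper does, not pandas' actual code path; it assumes numpy's string-to-datetime64 casting):

```python
import numpy as np

# View datetime64[ns] values as int64 nanosecond counts, subtract,
# then reinterpret the integer result as timedelta64[ns] -- the same
# conversion the wrapper performs when both operands are datetime64.
v1 = np.array(['2012-01-01', '2012-01-02', '2012-01-03'], dtype='datetime64[ns]')
v2 = np.array(['2012-01-02', '2012-01-03', '2012-01-04'], dtype='datetime64[ns]')

diff = (v2.view('i8') - v1.view('i8')).astype('timedelta64[ns]')
```

Each element of `diff` is one day expressed in nanoseconds, which is what the new test asserts against.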
| https://api.github.com/repos/pandas-dev/pandas/pulls/2630 | 2013-01-03T20:24:17Z | 2013-01-20T02:33:06Z | null | 2014-06-25T19:18:04Z |
BUG: fix partial date parsing occurring on certain execution dates #2618 | diff --git a/RELEASE.rst b/RELEASE.rst
index 3d90267b0928b..c06e827693e88 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -67,8 +67,9 @@ pandas 0.10.1
- Fix platform issues with ``file:///`` in unit test (#2564)
- Fix bug and possible segfault when grouping by hierarchical level that
contains NA values (GH2616_)
- - Ensure that MultiIndex tuples can be constructed with NAs (seen in #2616)
- - Fix int64 overflow issue when unstacking MultiIndex with many levels (#2616)
+ - Ensure that MultiIndex tuples can be constructed with NAs (GH2616_)
+ - Fix int64 overflow issue when unstacking MultiIndex with many levels (GH2616_)
+ - Fix partial date parsing issue occuring only when code is run at EOM (GH2618_)
**API Changes**
@@ -83,6 +84,7 @@ pandas 0.10.1
.. _GH2585: https://github.com/pydata/pandas/issues/2585
.. _GH2604: https://github.com/pydata/pandas/issues/2604
.. _GH2616: https://github.com/pydata/pandas/issues/2616
+.. _GH2618: https://github.com/pydata/pandas/issues/2618
pandas 0.10.0
=============
diff --git a/pandas/src/inference.pyx b/pandas/src/inference.pyx
index 30d809c9fba32..41ac1b3f3480f 100644
--- a/pandas/src/inference.pyx
+++ b/pandas/src/inference.pyx
@@ -513,20 +513,24 @@ def convert_sql_column(x):
return maybe_convert_objects(x, try_float=1)
def try_parse_dates(ndarray[object] values, parser=None,
- dayfirst=False):
+ dayfirst=False,default=None):
cdef:
Py_ssize_t i, n
ndarray[object] result
- from datetime import datetime
+ from datetime import datetime, timedelta
n = len(values)
result = np.empty(n, dtype='O')
if parser is None:
+ if default is None: # GH2618
+ date=datetime.now()
+ default=datetime(date.year,date.month,1)
+
try:
from dateutil.parser import parse
- parse_date = lambda x: parse(x, dayfirst=dayfirst)
+ parse_date = lambda x: parse(x, dayfirst=dayfirst,default=default)
except ImportError: # pragma: no cover
def parse_date(s):
try:
@@ -560,12 +564,12 @@ def try_parse_dates(ndarray[object] values, parser=None,
def try_parse_date_and_time(ndarray[object] dates, ndarray[object] times,
date_parser=None, time_parser=None,
- dayfirst=False):
+ dayfirst=False,default=None):
cdef:
Py_ssize_t i, n
ndarray[object] result
- from datetime import date, time, datetime
+ from datetime import date, time, datetime, timedelta
n = len(dates)
if len(times) != n:
@@ -573,9 +577,13 @@ def try_parse_date_and_time(ndarray[object] dates, ndarray[object] times,
result = np.empty(n, dtype='O')
if date_parser is None:
+ if default is None: # GH2618
+ date=datetime.now()
+ default=datetime(date.year,date.month,1)
+
try:
from dateutil.parser import parse
- parse_date = lambda x: parse(x, dayfirst=dayfirst)
+ parse_date = lambda x: parse(x, dayfirst=dayfirst, default=default)
except ImportError: # pragma: no cover
def parse_date(s):
try:
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 01e9197af3ac8..427477a3eec80 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -1395,6 +1395,38 @@ def _simple_ts(start, end, freq='D'):
class TestDatetimeIndex(unittest.TestCase):
_multiprocess_can_split_ = True
+
+ def test_partial_date_at_eom(self):
+ from datetime import datetime
+ import time
+ def faketime():
+ return datetime(2012,12,31)
+ # GH2618
+ try:
+ _time = time.time
+ time.time = faketime
+ assert datetime.now().day == 31
+ res=lib.try_parse_dates(np.array(["1800 2"],dtype=object))
+ self.assertEqual(type(res[0]),datetime)
+ finally:
+ time.time = _time
+
+ def test_partial_date_and_time_at_eom(self):
+ from datetime import datetime
+ import time
+ def faketime():
+ return datetime(2012,12,31)
+ # GH2618
+ try:
+ _time = time.time
+ time.time = faketime
+ assert datetime.now().day == 31
+ res=lib.try_parse_date_and_time(np.array(["1800 2"],dtype=object),
+ np.array(["00:00:00"],dtype=object))
+ self.assertEqual(type(res[0]),datetime)
+ finally:
+ time.time = _time
+
def test_append_join_nondatetimeindex(self):
rng = date_range('1/1/2000', periods=10)
idx = Index(['a', 'b', 'c', 'd'])
| closes #2618.
when parsing partial dates, the missing day will be filled in as the first of the month, rather than now().day
| https://api.github.com/repos/pandas-dev/pandas/pulls/2620 | 2012-12-31T12:16:57Z | 2013-03-06T20:22:32Z | null | 2022-10-13T00:14:46Z |
BUG: trivial fix in vb_suite/timeseries.py | diff --git a/vb_suite/timeseries.py b/vb_suite/timeseries.py
index 733deb8e7f470..1727db1285cd1 100644
--- a/vb_suite/timeseries.py
+++ b/vb_suite/timeseries.py
@@ -211,4 +211,4 @@ def date_range(start=None, end=None, periods=None, freq=None):
"""
datetimeindex_unique = Benchmark('index.unique()', setup,
- date=datetime(2012, 7, 1))
+ start_date=datetime(2012, 7, 1))
| https://api.github.com/repos/pandas-dev/pandas/pulls/2614 | 2012-12-29T16:49:53Z | 2012-12-29T17:13:14Z | 2012-12-29T17:13:14Z | 2014-07-10T11:41:13Z | |
BUG: fixed old version compatibility warnings | diff --git a/RELEASE.rst b/RELEASE.rst
index ee5bd061937b1..d45576933f941 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -34,9 +34,9 @@ pandas 0.10.1
**Improvements to existing features**
- ``HDFStore``
+
- enables storing of multi-index dataframes (closes GH1277_)
- - support data column indexing and selection, via ``data_columns`` keyword
- in append
+ - support data column indexing and selection, via ``data_columns`` keyword in append
- support write chunking to reduce memory footprint, via ``chunksize``
keyword to append
- support automagic indexing via ``index`` keyword to append
@@ -50,6 +50,8 @@ pandas 0.10.1
to do multiple-table append/selection
- added support for datetime64 in columns
- added method ``unique`` to select the unique values in an indexable or data column
+ - added method ``copy`` to copy an existing store (and possibly upgrade)
+ - show the shape of the data on disk for non-table stores when printing the store
- Add ``logx`` option to DataFrame/Series.plot (GH2327_, #2565)
- Support reading gzipped data from file-like object
- ``pivot_table`` aggfunc can be anything used in GroupBy.aggregate (GH2643_)
@@ -57,11 +59,13 @@ pandas 0.10.1
**Bug fixes**
- ``HDFStore``
+
- correctly handle ``nan`` elements in string columns; serialize via the
``nan_rep`` keyword to append
- raise correctly on non-implemented column types (unicode/date)
- handle correctly ``Term`` passed types (e.g. ``index<1000``, when index
is ``Int64``), (closes GH512_)
+ - handle Timestamp correctly in data_columns (closes GH2637_)
- Fix DataFrame.info bug with UTF8-encoded columns. (GH2576_)
- Fix DatetimeIndex handling of FixedOffset tz (GH2604_)
- More robust detection of being in IPython session for wide DataFrame
@@ -76,18 +80,21 @@ pandas 0.10.1
**API Changes**
- ``HDFStore``
+
+ - refactored HDFStore to deal with non-table stores as objects; will allow future enhancements
- removed keyword ``compression`` from ``put`` (replaced by keyword
``complib`` to be consistent across library)
.. _GH512: https://github.com/pydata/pandas/issues/512
.. _GH1277: https://github.com/pydata/pandas/issues/1277
.. _GH2327: https://github.com/pydata/pandas/issues/2327
-.. _GH2576: https://github.com/pydata/pandas/issues/2576
.. _GH2585: https://github.com/pydata/pandas/issues/2585
.. _GH2604: https://github.com/pydata/pandas/issues/2604
+.. _GH2576: https://github.com/pydata/pandas/issues/2576
.. _GH2616: https://github.com/pydata/pandas/issues/2616
.. _GH2625: https://github.com/pydata/pandas/issues/2625
.. _GH2643: https://github.com/pydata/pandas/issues/2643
+.. _GH2637: https://github.com/pydata/pandas/issues/2637
pandas 0.10.0
=============
diff --git a/doc/source/_static/legacy_0.10.h5 b/doc/source/_static/legacy_0.10.h5
new file mode 100644
index 0000000000000..b1439ef16361a
Binary files /dev/null and b/doc/source/_static/legacy_0.10.h5 differ
diff --git a/doc/source/io.rst b/doc/source/io.rst
index bf9c913909dee..1cbdfaadbcb33 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1106,7 +1106,7 @@ Storing Mixed Types in a Table
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Storing mixed-dtype data is supported. Strings are store as a fixed-width using the maximum size of the appended column. Subsequent appends will truncate strings at this length.
-Passing ``min_itemsize = { `values` : size }`` as a parameter to append will set a larger minimum for the string columns. Storing ``floats, strings, ints, bools, datetime64`` are currently supported. For string columns, passing ``nan_rep = 'my_nan_rep'`` to append will change the default nan representation on disk (which converts to/from `np.nan`), this defaults to `nan`.
+Passing ``min_itemsize = { `values` : size }`` as a parameter to append will set a larger minimum for the string columns. Storing ``floats, strings, ints, bools, datetime64`` are currently supported. For string columns, passing ``nan_rep = 'nan'`` to append will change the default nan representation on disk (which converts to/from `np.nan`), this defaults to `nan`.
.. ipython:: python
@@ -1115,9 +1115,6 @@ Passing ``min_itemsize = { `values` : size }`` as a parameter to append will set
df_mixed['int'] = 1
df_mixed['bool'] = True
df_mixed['datetime64'] = Timestamp('20010102')
-
- # make sure that we have datetime64[ns] types
- df_mixed = df_mixed.convert_objects()
df_mixed.ix[3:5,['A','B','string','datetime64']] = np.nan
store.append('df_mixed', df_mixed, min_itemsize = { 'values' : 50 })
@@ -1128,8 +1125,6 @@ Passing ``min_itemsize = { `values` : size }`` as a parameter to append will set
# we have provided a minimum string column size
store.root.df_mixed.table
-It is ok to store ``np.nan`` in a ``float or string``. Make sure to do a ``convert_objects()`` on the frame before storing a ``np.nan`` in a datetime64 column. Storing a column with a ``np.nan`` in a ``int, bool`` will currently throw an ``Exception`` as these columns will have converted to ``object`` type.
-
Storing Multi-Index DataFrames
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -1268,11 +1263,11 @@ To retrieve the *unique* values of an indexable or data column, use the method `
**Table Object**
-If you want to inspect the table object, retrieve via ``get_table``. You could use this progamatically to say get the number of rows in the table.
+If you want to inspect the stored object, retrieve via ``get_storer``. You could use this programmatically, e.g. to get the number of rows in an object.
.. ipython:: python
- store.get_table('df_dc').nrows
+ store.get_storer('df_dc').nrows
Multiple Table Queries
~~~~~~~~~~~~~~~~~~~~~~
@@ -1348,7 +1343,7 @@ Or on-the-fly compression (this only applies to tables). You can turn off file c
- ``ptrepack --chunkshape=auto --propindexes --complevel=9 --complib=blosc in.h5 out.h5``
-Furthermore ``ptrepack in.h5 out.h5`` will *repack* the file to allow you to reuse previously deleted space (alternatively, one can simply remove the file and write again).
+Furthermore ``ptrepack in.h5 out.h5`` will *repack* the file to allow you to reuse previously deleted space. Alternatively, one can simply remove the file and write again, or use the ``copy`` method.
Notes & Caveats
~~~~~~~~~~~~~~~
@@ -1372,10 +1367,28 @@ Notes & Caveats
Compatibility
~~~~~~~~~~~~~
-0.10 of ``HDFStore`` is backwards compatible for reading tables created in a prior version of pandas,
-however, query terms using the prior (undocumented) methodology are unsupported. ``HDFStore`` will issue a warning if you try to use a prior-version format file. You must read in the entire
-file and write it out using the new format to take advantage of the updates. The group attribute ``pandas_version`` contains the version information.
+0.10.1 of ``HDFStore`` is backwards compatible for reading tables created in a prior version of pandas; however, query terms using the prior (undocumented) methodology are unsupported. ``HDFStore`` will issue a warning if you try to use a prior-version format file. You must read in the entire file and write it out using the new format, using the method ``copy``, to take advantage of the updates. The group attribute ``pandas_version`` contains the version information. ``copy`` takes a number of options, please see the docstring.
+
+
+ .. ipython:: python
+
+ # a legacy store
+ import os
+ legacy_store = HDFStore('legacy_0.10.h5', 'r')
+ legacy_store
+ # copy (and return the new handle)
+ new_store = legacy_store.copy('store_new.h5')
+ new_store
+ new_store.close()
+
+ .. ipython:: python
+ :suppress:
+
+ legacy_store.close()
+ import os
+ os.remove('store_new.h5')
+
Performance
~~~~~~~~~~~
diff --git a/doc/source/v0.10.1.txt b/doc/source/v0.10.1.txt
index b8137fda540cd..2eb40b2823214 100644
--- a/doc/source/v0.10.1.txt
+++ b/doc/source/v0.10.1.txt
@@ -17,6 +17,9 @@ New features
HDFStore
~~~~~~~~
+You may need to upgrade your existing data files. Please visit the **compatibility** section in the main docs.
+
+
.. ipython:: python
:suppress:
:okexcept:
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 1469620ea01f2..80232689fbdbb 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -4,7 +4,6 @@
"""
# pylint: disable-msg=E1101,W0613,W0603
-
from datetime import datetime, date
import time
import re
@@ -37,47 +36,48 @@
# versioning attribute
_version = '0.10.1'
+class IncompatibilityWarning(Warning): pass
+incompatibility_doc = """
+where criteria is being ignored as this version [%s] is too old (or not-defined),
+read the file in and write it out to a new file to upgrade (with the copy_to method)
+"""
-class IncompatibilityWarning(Warning):
- pass
-
-# reading and writing the full object in one go
+# map object types
_TYPE_MAP = {
- Series: 'series',
- SparseSeries: 'sparse_series',
- TimeSeries: 'series',
- DataFrame: 'frame',
- SparseDataFrame: 'sparse_frame',
- Panel: 'wide',
- Panel4D: 'ndim',
- SparsePanel: 'sparse_panel'
+
+ Series : 'series',
+ SparseSeries : 'sparse_series',
+ TimeSeries : 'series',
+ DataFrame : 'frame',
+ SparseDataFrame : 'sparse_frame',
+ Panel : 'wide',
+ Panel4D : 'ndim',
+ SparsePanel : 'sparse_panel'
}
-_NAME_MAP = {
- 'series': 'Series',
- 'time_series': 'TimeSeries',
- 'sparse_series': 'SparseSeries',
- 'frame': 'DataFrame',
- 'sparse_frame': 'SparseDataFrame',
- 'frame_table': 'DataFrame (Table)',
- 'wide': 'Panel',
- 'sparse_panel': 'SparsePanel',
- 'wide_table': 'Panel (Table)',
- 'long': 'LongPanel',
- # legacy h5 files
- 'Series': 'Series',
- 'TimeSeries': 'TimeSeries',
- 'DataFrame': 'DataFrame',
- 'DataMatrix': 'DataMatrix'
+# storer class map
+_STORER_MAP = {
+ 'TimeSeries' : 'LegacySeriesStorer',
+ 'Series' : 'LegacySeriesStorer',
+ 'DataFrame' : 'LegacyFrameStorer',
+ 'DataMatrix' : 'LegacyFrameStorer',
+ 'series' : 'SeriesStorer',
+ 'sparse_series' : 'SparseSeriesStorer',
+ 'frame' : 'FrameStorer',
+ 'sparse_frame' : 'SparseFrameStorer',
+ 'wide' : 'PanelStorer',
+ 'sparse_panel' : 'SparsePanelStorer',
}
-# legacy handlers
-_LEGACY_MAP = {
- 'Series': 'legacy_series',
- 'TimeSeries': 'legacy_series',
- 'DataFrame': 'legacy_frame',
- 'DataMatrix': 'legacy_frame',
- 'WidePanel': 'wide_table',
+# table class map
+_TABLE_MAP = {
+ 'appendable_frame' : 'AppendableFrameTable',
+ 'appendable_multiframe' : 'AppendableMultiFrameTable',
+ 'appendable_panel' : 'AppendablePanelTable',
+ 'appendable_ndim' : 'AppendableNDimTable',
+ 'worm' : 'WORMTable',
+ 'legacy_frame' : 'LegacyFrameTable',
+ 'legacy_panel' : 'LegacyPanelTable',
}
# axes map
@@ -225,28 +225,17 @@ def __len__(self):
def __repr__(self):
output = '%s\nFile path: %s\n' % (type(self), self.path)
- groups = self.groups()
- if len(groups) > 0:
- keys = []
+ if len(self.keys()):
+ keys = []
values = []
- for n in sorted(groups, key=lambda x: x._v_name):
- kind = getattr(n._v_attrs, 'pandas_type', None)
-
- keys.append(str(n._v_pathname))
-
- # a table
- if _is_table_type(n):
- values.append(str(create_table(self, n)))
-
- # a group
- elif kind is None:
- values.append('unknown type')
- # another type of pandas object
- else:
- values.append(_NAME_MAP[kind])
+ for k in self.keys():
+ s = self.get_storer(k)
+ if s is not None:
+ keys.append(str(s.pathname or k))
+ values.append(str(s or 'invalid_HDFStore node'))
- output += adjoin(5, keys, values)
+ output += adjoin(12, keys, values)
else:
output += 'Empty'
@@ -259,6 +248,15 @@ def keys(self):
"""
return [n._v_pathname for n in self.groups()]
+ def items(self):
+ """
+ iterate on key->group
+ """
+ for g in self.groups():
+ yield g._v_pathname, g
+
+ iteritems = items
+
def open(self, mode='a', warn=True):
"""
Open the file in the specified mode
@@ -359,7 +357,7 @@ def select_as_coordinates(self, key, where=None, **kwargs):
-------------------
where : list of Term (or convertable) objects, optional
"""
- return self.get_table(key).read_coordinates(where=where, **kwargs)
+ return self.get_storer(key).read_coordinates(where = where, **kwargs)
def unique(self, key, column, **kwargs):
"""
@@ -376,7 +374,7 @@ def unique(self, key, column, **kwargs):
raises ValueError if the column can not be extracted indivually (it is part of a data block)
"""
- return self.get_table(key).read_column(column=column, **kwargs)
+ return self.get_storer(key).read_column(column = column, **kwargs)
def select_as_multiple(self, keys, where=None, selector=None, columns=None, **kwargs):
""" Retrieve pandas objects from multiple tables
@@ -408,13 +406,15 @@ def select_as_multiple(self, keys, where=None, selector=None, columns=None, **kw
selector = keys[0]
# collect the tables
- tbls = [self.get_table(k) for k in keys]
+ tbls = [ self.get_storer(k) for k in keys ]
# validate rows
nrows = tbls[0].nrows
for t in tbls:
if t.nrows != nrows:
raise Exception("all tables must have exactly the same nrows!")
+ if not t.is_table:
+ raise Exception("object [%s] is not a table, and cannot be used in select_as_multiple" % t.pathname)
# select coordinates from the selector table
c = self.select_as_coordinates(selector, where)
@@ -428,7 +428,7 @@ def select_as_multiple(self, keys, where=None, selector=None, columns=None, **kw
# concat and return
return concat(objs, axis=axis, verify_integrity=True)
- def put(self, key, value, table=False, append=False, **kwargs):
+ def put(self, key, value, table=None, append=False, **kwargs):
"""
Store object in HDFStore
@@ -466,22 +466,31 @@ def remove(self, key, where=None, start=None, stop=None):
number of rows removed (or None if not a Table)
"""
- group = self.get_node(key)
- if group is not None:
+ try:
+ s = self.get_storer(key)
+ except:
- # remove the node
- if where is None:
- group = self.get_node(key)
- group._f_remove(recursive=True)
+ if where is not None:
+ raise Exception("trying to remove a node with a non-None where clause!")
- # delete from the table
- else:
- if not _is_table_type(group):
- raise Exception('can only remove with where on objects written as tables')
- t = create_table(self, group)
- return t.delete(where=where, start=start, stop=stop)
+ # we are actually trying to remove a node (with children)
+ s = self.get_node(key)
+ if s is not None:
+ s._f_remove(recursive=True)
+ return None
- return None
+ if s is None:
+ return None
+
+ # remove the node
+ if where is None:
+ s.group._f_remove(recursive=True)
+
+ # delete from the table
+ else:
+ if not s.is_table:
+ raise Exception('can only remove with where on objects written as tables')
+ return s.delete(where = where, start=start, stop=stop)
def append(self, key, value, columns=None, **kwargs):
"""
@@ -587,17 +596,16 @@ def create_table_index(self, key, **kwargs):
if not _table_supports_index:
raise Exception("PyTables >= 2.3 is required for table indexing")
- group = self.get_node(key)
- if group is None:
- return
+ s = self.get_storer(key)
+ if s is None: return
- if not _is_table_type(group):
+ if not s.is_table:
raise Exception("cannot create table index on a non-table")
- create_table(self, group).create_index(**kwargs)
+ s.create_index(**kwargs)
def groups(self):
- """ return a list of all the groups (that are not themselves a pandas storage object) """
- return [g for g in self.handle.walkGroups() if getattr(g._v_attrs, 'pandas_type', None)]
+ """ return a list of all the top-level nodes (that are not themselves a pandas storage object) """
+ return [ g for g in self.handle.walkGroups() if getattr(g._v_attrs,'pandas_type',None) ]
def get_node(self, key):
""" return the node with the key or None if it does not exist """
@@ -608,432 +616,164 @@ def get_node(self, key):
except:
return None
- def get_table(self, key):
- """ return the table object for a key, raise if not in the file or a non-table """
- group = self.get_node(key)
- if group is None:
- raise KeyError('No object named %s in the file' % key)
- if not _is_table_type(group):
- raise Exception("cannot return a table object for a non-table")
- t = create_table(self, group)
- t.infer_axes()
- return t
-
- ###### private methods ######
-
- def _get_handler(self, op, kind):
- return getattr(self, '_%s_%s' % (op, kind))
-
- def _write_to_group(self, key, value, table=False, append=False,
- complib=None, **kwargs):
+ def get_storer(self, key):
+ """ return the storer object for a key, raise if not in the file """
group = self.get_node(key)
- if group is None:
- paths = key.split('/')
-
- # recursively create the groups
- path = '/'
- for p in paths:
- if not len(p):
- continue
- new_path = path
- if not path.endswith('/'):
- new_path += '/'
- new_path += p
- group = self.get_node(new_path)
- if group is None:
- group = self.handle.createGroup(path, p)
- path = new_path
-
- kind = _TYPE_MAP[type(value)]
- if table or (append and _is_table_type(group)):
- kind = '%s_table' % kind
- handler = self._get_handler(op='write', kind=kind)
- wrapper = lambda value: handler(group, value, append=append,
- complib=complib, **kwargs)
- else:
- if append:
- raise ValueError('Can only append to Tables')
- if complib:
- raise ValueError('Compression only supported on Tables')
-
- handler = self._get_handler(op='write', kind=kind)
- wrapper = lambda value: handler(group, value)
-
- group._v_attrs.pandas_type = kind
- group._v_attrs.pandas_version = _version
- wrapper(value)
-
- def _write_series(self, group, series):
- self._write_index(group, 'index', series.index)
- self._write_array(group, 'values', series.values)
- group._v_attrs.name = series.name
-
- def _write_sparse_series(self, group, series):
- self._write_index(group, 'index', series.index)
- self._write_index(group, 'sp_index', series.sp_index)
- self._write_array(group, 'sp_values', series.sp_values)
- group._v_attrs.name = series.name
- group._v_attrs.fill_value = series.fill_value
- group._v_attrs.kind = series.kind
-
- def _read_sparse_series(self, group, where=None):
- index = self._read_index(group, 'index')
- sp_values = _read_array(group, 'sp_values')
- sp_index = self._read_index(group, 'sp_index')
- name = getattr(group._v_attrs, 'name', None)
- fill_value = getattr(group._v_attrs, 'fill_value', None)
- kind = getattr(group._v_attrs, 'kind', 'block')
- return SparseSeries(sp_values, index=index, sparse_index=sp_index,
- kind=kind, fill_value=fill_value,
- name=name)
-
- def _write_sparse_frame(self, group, sdf):
- for name, ss in sdf.iteritems():
- key = 'sparse_series_%s' % name
- if key not in group._v_children:
- node = self.handle.createGroup(group, key)
- else:
- node = getattr(group, key)
- self._write_sparse_series(node, ss)
- setattr(group._v_attrs, 'default_fill_value',
- sdf.default_fill_value)
- setattr(group._v_attrs, 'default_kind',
- sdf.default_kind)
- self._write_index(group, 'columns', sdf.columns)
-
- def _read_sparse_frame(self, group, where=None):
- columns = self._read_index(group, 'columns')
- sdict = {}
- for c in columns:
- key = 'sparse_series_%s' % c
- node = getattr(group, key)
- sdict[c] = self._read_sparse_series(node)
- default_kind = getattr(group._v_attrs, 'default_kind')
- default_fill_value = getattr(group._v_attrs, 'default_fill_value')
- return SparseDataFrame(sdict, columns=columns,
- default_kind=default_kind,
- default_fill_value=default_fill_value)
-
- def _write_sparse_panel(self, group, swide):
- setattr(group._v_attrs, 'default_fill_value', swide.default_fill_value)
- setattr(group._v_attrs, 'default_kind', swide.default_kind)
- self._write_index(group, 'items', swide.items)
-
- for name, sdf in swide.iteritems():
- key = 'sparse_frame_%s' % name
- if key not in group._v_children:
- node = self.handle.createGroup(group, key)
- else:
- node = getattr(group, key)
- self._write_sparse_frame(node, sdf)
-
- def _read_sparse_panel(self, group, where=None):
- default_fill_value = getattr(group._v_attrs, 'default_fill_value')
- default_kind = getattr(group._v_attrs, 'default_kind')
- items = self._read_index(group, 'items')
-
- sdict = {}
- for name in items:
- key = 'sparse_frame_%s' % name
- node = getattr(group, key)
- sdict[name] = self._read_sparse_frame(node)
- return SparsePanel(sdict, items=items, default_kind=default_kind,
- default_fill_value=default_fill_value)
-
- def _write_frame(self, group, df):
- self._write_block_manager(group, df._data)
-
- def _read_frame(self, group, where=None, **kwargs):
- return DataFrame(self._read_block_manager(group))
-
- def _write_block_manager(self, group, data):
- if not data.is_consolidated():
- data = data.consolidate()
-
- group._v_attrs.ndim = data.ndim
- for i, ax in enumerate(data.axes):
- self._write_index(group, 'axis%d' % i, ax)
-
- # Supporting mixed-type DataFrame objects...nontrivial
- nblocks = len(data.blocks)
- group._v_attrs.nblocks = nblocks
- for i in range(nblocks):
- blk = data.blocks[i]
- # I have no idea why, but writing values before items fixed #2299
- self._write_array(group, 'block%d_values' % i, blk.values)
- self._write_index(group, 'block%d_items' % i, blk.items)
-
- def _read_block_manager(self, group):
- ndim = group._v_attrs.ndim
-
- axes = []
- for i in xrange(ndim):
- ax = self._read_index(group, 'axis%d' % i)
- axes.append(ax)
-
- items = axes[0]
- blocks = []
- for i in range(group._v_attrs.nblocks):
- blk_items = self._read_index(group, 'block%d_items' % i)
- values = _read_array(group, 'block%d_values' % i)
- blk = make_block(values, blk_items, items)
- blocks.append(blk)
-
- return BlockManager(blocks, axes)
-
- def _write_wide(self, group, panel):
- panel._consolidate_inplace()
- self._write_block_manager(group, panel._data)
-
- def _read_wide(self, group, where=None, **kwargs):
- return Panel(self._read_block_manager(group))
-
- def _write_ndim_table(self, group, obj, append=False, axes=None, index=True, **kwargs):
- if axes is None:
- axes = _AXES_MAP[type(obj)]
- t = create_table(self, group, typ='appendable_ndim')
- t.write(axes=axes, obj=obj, append=append, **kwargs)
- if index:
- t.create_index(columns=index)
-
- def _read_ndim_table(self, group, where=None, **kwargs):
- t = create_table(self, group, **kwargs)
- return t.read(where, **kwargs)
-
- def _write_frame_table(self, group, df, append=False, axes=None, index=True, **kwargs):
- if axes is None:
- axes = _AXES_MAP[type(df)]
-
- t = create_table(self, group, typ='appendable_frame' if df.index.nlevels == 1 else 'appendable_multiframe')
- t.write(axes=axes, obj=df, append=append, **kwargs)
- if index:
- t.create_index(columns=index)
-
- _read_frame_table = _read_ndim_table
-
- def _write_wide_table(self, group, panel, append=False, axes=None, index=True, **kwargs):
- if axes is None:
- axes = _AXES_MAP[type(panel)]
- t = create_table(self, group, typ='appendable_panel')
- t.write(axes=axes, obj=panel, append=append, **kwargs)
- if index:
- t.create_index(columns=index)
-
- _read_wide_table = _read_ndim_table
-
- def _write_index(self, group, key, index):
- if isinstance(index, MultiIndex):
- setattr(group._v_attrs, '%s_variety' % key, 'multi')
- self._write_multi_index(group, key, index)
- elif isinstance(index, BlockIndex):
- setattr(group._v_attrs, '%s_variety' % key, 'block')
- self._write_block_index(group, key, index)
- elif isinstance(index, IntIndex):
- setattr(group._v_attrs, '%s_variety' % key, 'sparseint')
- self._write_sparse_intindex(group, key, index)
- else:
- setattr(group._v_attrs, '%s_variety' % key, 'regular')
- converted = _convert_index(index).set_name('index')
- self._write_array(group, key, converted.values)
- node = getattr(group, key)
- node._v_attrs.kind = converted.kind
- node._v_attrs.name = index.name
-
- if isinstance(index, (DatetimeIndex, PeriodIndex)):
- node._v_attrs.index_class = _class_to_alias(type(index))
-
- if hasattr(index, 'freq'):
- node._v_attrs.freq = index.freq
-
- if hasattr(index, 'tz') and index.tz is not None:
- zone = tslib.get_timezone(index.tz)
- if zone is None:
- zone = tslib.tot_seconds(index.tz.utcoffset())
- node._v_attrs.tz = zone
-
- def _read_index(self, group, key):
- variety = getattr(group._v_attrs, '%s_variety' % key)
-
- if variety == 'multi':
- return self._read_multi_index(group, key)
- elif variety == 'block':
- return self._read_block_index(group, key)
- elif variety == 'sparseint':
- return self._read_sparse_intindex(group, key)
- elif variety == 'regular':
- _, index = self._read_index_node(getattr(group, key))
- return index
- else: # pragma: no cover
- raise Exception('unrecognized index variety: %s' % variety)
-
- def _write_block_index(self, group, key, index):
- self._write_array(group, '%s_blocs' % key, index.blocs)
- self._write_array(group, '%s_blengths' % key, index.blengths)
- setattr(group._v_attrs, '%s_length' % key, index.length)
-
- def _read_block_index(self, group, key):
- length = getattr(group._v_attrs, '%s_length' % key)
- blocs = _read_array(group, '%s_blocs' % key)
- blengths = _read_array(group, '%s_blengths' % key)
- return BlockIndex(length, blocs, blengths)
-
- def _write_sparse_intindex(self, group, key, index):
- self._write_array(group, '%s_indices' % key, index.indices)
- setattr(group._v_attrs, '%s_length' % key, index.length)
-
- def _read_sparse_intindex(self, group, key):
- length = getattr(group._v_attrs, '%s_length' % key)
- indices = _read_array(group, '%s_indices' % key)
- return IntIndex(length, indices)
+ if group is None:
+ return None
+ s = self._create_storer(group)
+ s.infer_axes()
+ return s
- def _write_multi_index(self, group, key, index):
- setattr(group._v_attrs, '%s_nlevels' % key, index.nlevels)
+ def copy(self, file, mode = 'w', propindexes = True, keys = None, complib = None, complevel = None, fletcher32 = False, overwrite = True):
+ """ copy the existing store to a new file, upgrading in place
- for i, (lev, lab, name) in enumerate(zip(index.levels,
- index.labels,
- index.names)):
- # write the level
- level_key = '%s_level%d' % (key, i)
- conv_level = _convert_index(lev).set_name(level_key)
- self._write_array(group, level_key, conv_level.values)
- node = getattr(group, level_key)
- node._v_attrs.kind = conv_level.kind
- node._v_attrs.name = name
+ Parameters
+ ----------
+ propindexes: restore indexes in copied file (defaults to True)
+ keys : list of keys to include in the copy (defaults to all)
+ overwrite : overwrite (remove and replace) existing nodes in the new store (default is True)
+ mode, complib, complevel, fletcher32 same as in HDFStore.__init__
- # write the name
- setattr(node._v_attrs, '%s_name%d' % (key, i), name)
+ Returns
+ -------
+ open file handle of the new store
- # write the labels
- label_key = '%s_label%d' % (key, i)
- self._write_array(group, label_key, lab)
+ """
+ new_store = HDFStore(file, mode = mode, complib = complib, complevel = complevel, fletcher32 = fletcher32)
+ if keys is None:
+ keys = self.keys()
+ if not isinstance(keys, (tuple,list)):
+ keys = [ keys ]
+ for k in keys:
+ s = self.get_storer(k)
+ if s is not None:
- def _read_multi_index(self, group, key):
- nlevels = getattr(group._v_attrs, '%s_nlevels' % key)
+ if k in new_store:
+ if overwrite:
+ new_store.remove(k)
- levels = []
- labels = []
- names = []
- for i in range(nlevels):
- level_key = '%s_level%d' % (key, i)
- name, lev = self._read_index_node(getattr(group, level_key))
- levels.append(lev)
- names.append(name)
+ data = self.select(k)
+ if s.is_table:
- label_key = '%s_label%d' % (key, i)
- lab = _read_array(group, label_key)
- labels.append(lab)
+ index = False
+ if propindexes:
+ index = [ a.name for a in s.axes if a.is_indexed ]
+ new_store.append(k,data, index=index, data_columns=getattr(s,'data_columns',None))
+ else:
+ new_store.put(k,data)
- return MultiIndex(levels=levels, labels=labels, names=names)
+ return new_store
- def _read_index_node(self, node):
- data = node[:]
- kind = node._v_attrs.kind
- name = None
+ ###### private methods ######
- if 'name' in node._v_attrs:
- name = node._v_attrs.name
+ def _create_storer(self, group, value = None, table = False, append = False, **kwargs):
+ """ return a suitable Storer class to operate """
- index_class = _alias_to_class(getattr(node._v_attrs,
- 'index_class', ''))
- factory = _get_index_factory(index_class)
+ def error(t):
+ raise Exception("cannot properly create the storer for: [%s] [group->%s,value->%s,table->%s,append->%s,kwargs->%s]" %
+ (t,group,type(value),table,append,kwargs))
+
+ pt = getattr(group._v_attrs,'pandas_type',None)
+ tt = getattr(group._v_attrs,'table_type',None)
- kwargs = {}
- if 'freq' in node._v_attrs:
- kwargs['freq'] = node._v_attrs['freq']
+ # infer the pt from the passed value
+ if pt is None:
+ if value is None:
+ raise Exception("cannot create a storer if the object does not exist and no value is passed")
- if 'tz' in node._v_attrs:
- kwargs['tz'] = node._v_attrs['tz']
+ try:
+ pt = _TYPE_MAP[type(value)]
+ except:
+ error('_TYPE_MAP')
- if kind in ('date', 'datetime'):
- index = factory(_unconvert_index(data, kind), dtype=object,
- **kwargs)
- else:
- index = factory(_unconvert_index(data, kind), **kwargs)
+ # we are actually a table
+ if table or append:
+ pt += '_table'
- index.name = name
+ # a storer node
+ if 'table' not in pt:
+ try:
+ return globals()[_STORER_MAP[pt]](self, group, **kwargs)
+ except:
+ error('_STORER_MAP')
- return name, index
+ # existing node (and must be a table)
+ if tt is None:
- def _write_array(self, group, key, value):
- if key in group:
- self.handle.removeNode(group, key)
+ # if we are a writer, determine the tt
+ if value is not None:
- # Transform needed to interface with pytables row/col notation
- empty_array = any(x == 0 for x in value.shape)
- transposed = False
+ if pt == 'frame_table':
+ tt = 'appendable_frame' if value.index.nlevels == 1 else 'appendable_multiframe'
+ elif pt == 'wide_table':
+ tt = 'appendable_panel'
+ elif pt == 'ndim_table':
+ tt = 'appendable_ndim'
- if not empty_array:
- value = value.T
- transposed = True
+ else:
+
+ # distinguish between a frame/table
+ tt = 'legacy_panel'
+ try:
+ fields = group.table._v_attrs.fields
+ if len(fields) == 1 and fields[0] == 'value':
+ tt = 'legacy_frame'
+ except:
+ pass
- if self.filters is not None:
- atom = None
- try:
- # get the atom for this datatype
- atom = _tables().Atom.from_dtype(value.dtype)
- except ValueError:
- pass
+ try:
+ return globals()[_TABLE_MAP[tt]](self, group, **kwargs)
+ except:
+ error('_TABLE_MAP')
- if atom is not None:
- # create an empty chunked array and fill it from value
- ca = self.handle.createCArray(group, key, atom,
- value.shape,
- filters=self.filters)
- ca[:] = value
- getattr(group, key)._v_attrs.transposed = transposed
- return
+ def _write_to_group(self, key, value, index=True, table=False, append=False, complib=None, **kwargs):
+ group = self.get_node(key)
- if value.dtype.type == np.object_:
- vlarr = self.handle.createVLArray(group, key,
- _tables().ObjectAtom())
- vlarr.append(value)
- elif value.dtype.type == np.datetime64:
- self.handle.createArray(group, key, value.view('i8'))
- getattr(group, key)._v_attrs.value_type = 'datetime64'
- else:
- if empty_array:
- # ugly hack for length 0 axes
- arr = np.empty((1,) * value.ndim)
- self.handle.createArray(group, key, arr)
- getattr(group, key)._v_attrs.value_type = str(value.dtype)
- getattr(group, key)._v_attrs.shape = value.shape
- else:
- self.handle.createArray(group, key, value)
+ # remove the node if we are not appending
+ if group is not None and not append:
+ self.handle.removeNode(group, recursive=True)
+ group = None
- getattr(group, key)._v_attrs.transposed = transposed
+ if group is None:
+ paths = key.split('/')
- def _read_group(self, group, where=None, **kwargs):
- kind = group._v_attrs.pandas_type
- kind = _LEGACY_MAP.get(kind, kind)
- handler = self._get_handler(op='read', kind=kind)
- return handler(group, where=where, **kwargs)
+ # recursively create the groups
+ path = '/'
+ for p in paths:
+ if not len(p):
+ continue
+ new_path = path
+ if not path.endswith('/'):
+ new_path += '/'
+ new_path += p
+ group = self.get_node(new_path)
+ if group is None:
+ group = self.handle.createGroup(path, p)
+ path = new_path
- def _read_series(self, group, where=None, **kwargs):
- index = self._read_index(group, 'index')
- if len(index) > 0:
- values = _read_array(group, 'values')
+ s = self._create_storer(group, value, table=table, append=append, **kwargs)
+ if append:
+ # raise if we are trying to append to a non-table,
+ # or a table that exists (and we are putting)
+ if not s.is_table or (s.is_table and table is None and s.is_exists):
+ raise ValueError('Can only append to Tables')
+ if not s.is_exists:
+ s.set_info()
else:
- values = []
+ s.set_info()
- name = getattr(group._v_attrs, 'name', None)
- return Series(values, index=index, name=name)
-
- def _read_legacy_series(self, group, where=None, **kwargs):
- index = self._read_index_legacy(group, 'index')
- values = _read_array(group, 'values')
- return Series(values, index=index)
+ if not s.is_table and complib:
+ raise ValueError('Compression not supported on non-table')
- def _read_legacy_frame(self, group, where=None, **kwargs):
- index = self._read_index_legacy(group, 'index')
- columns = self._read_index_legacy(group, 'columns')
- values = _read_array(group, 'values')
- return DataFrame(values, index=index, columns=columns)
+ s.write(obj = value, append=append, complib=complib, **kwargs)
+ if s.is_table and index:
+ s.create_index(columns = index)
- def _read_index_legacy(self, group, key):
- node = getattr(group, key)
- data = node[:]
- kind = node._v_attrs.kind
- return _unconvert_index_legacy(data, kind)
+ def _read_group(self, group, **kwargs):
+ s = self._create_storer(group)
+ s.infer_axes()
+ return s.read(**kwargs)
class IndexCol(object):
@@ -1109,6 +849,14 @@ def __eq__(self, other):
def __ne__(self, other):
return not self.__eq__(other)
+ @property
+ def is_indexed(self):
+ """ return whether I am an indexed column """
+ try:
+ return getattr(self.table.cols,self.cname).is_indexed
+ except:
+ return False
+
def copy(self):
new_self = copy.copy(self)
return new_self
@@ -1442,10 +1190,9 @@ def get_atom_data(self, block):
def get_atom_datetime64(self, block):
return _tables().Int64Col()
-
-class Table(object):
- """ represent a table:
- facilitate read/write of various types of tables
+class Storer(object):
+ """ represent an object in my store
+ facilitate read/write of various types of objects
this is an abstract base class
Parameters
@@ -1453,31 +1200,24 @@ class Table(object):
parent : my parent HDFStore
group : the group node where the table resides
-
- Attrs in Table Node
- -------------------
- These are attributes that are store in the main table node, they are necessary
- to recreate these tables when read back in.
-
- index_axes : a list of tuples of the (original indexing axis and index column)
- non_index_axes: a list of tuples of the (original index axis and columns on a non-indexing axis)
- values_axes : a list of the columns which comprise the data of this table
- data_columns : a list of the columns that we are allowing indexing (these become single columns in values_axes)
- nan_rep : the string to use for nan representations for string objects
- levels : the names of levels
-
- """
- table_type = None
- obj_type = None
- ndim = None
- levels = 1
+ """
+ pandas_kind = None
+ obj_type = None
+ ndim = None
+ is_table = False
def __init__(self, parent, group, **kwargs):
- self.parent = parent
- self.group = group
+ self.parent = parent
+ self.group = group
+ self.set_version()
+
+ @property
+ def is_old_version(self):
+ return self.version[0] <= 0 and self.version[1] <= 10 and self.version[2] < 1
- # compute our version
- version = getattr(group._v_attrs, 'pandas_version', None)
+ def set_version(self):
+ """ compute and set our version """
+ version = getattr(self.group._v_attrs,'pandas_version',None)
try:
self.version = tuple([int(x) for x in version.split('.')])
if len(self.version) == 2:
@@ -1485,17 +1225,6 @@ def __init__(self, parent, group, **kwargs):
except:
self.version = (0, 0, 0)
- self.index_axes = []
- self.non_index_axes = []
- self.values_axes = []
- self.data_columns = []
- self.nan_rep = None
- self.selection = None
-
- @property
- def table_type_short(self):
- return self.table_type.split('_')[0]
-
@property
def pandas_type(self):
return getattr(self.group._v_attrs, 'pandas_type', None)
@@ -1503,47 +1232,31 @@ def pandas_type(self):
def __repr__(self):
""" return a pretty representatgion of myself """
self.infer_axes()
- dc = ",dc->[%s]" % ','.join(
- self.data_columns) if len(self.data_columns) else ''
- return "%s (typ->%s,nrows->%s,indexers->[%s]%s)" % (self.pandas_type,
- self.table_type_short,
- self.nrows,
- ','.join([a.name for a in self.index_axes]),
- dc)
+ s = self.shape
+ if s is not None:
+ return "%-12.12s (shape->%s)" % (self.pandas_type,s)
+ return self.pandas_type
+
+ def __str__(self):
+ return self.__repr__()
- __str__ = __repr__
+ def set_info(self):
+ """ set my pandas type & version """
+ self.attrs.pandas_type = self.pandas_kind
+ self.attrs.pandas_version = _version
+ self.set_version()
def copy(self):
new_self = copy.copy(self)
return new_self
- def validate(self, other):
- """ validate against an existing table """
- if other is None:
- return
-
- if other.table_type != self.table_type:
- raise TypeError("incompatible table_type with existing [%s - %s]" %
- (other.table_type, self.table_type))
-
- for c in ['index_axes', 'non_index_axes', 'values_axes']:
- if getattr(self, c, None) != getattr(other, c, None):
- raise Exception("invalid combinate of [%s] on appending data [%s] vs current table [%s]"
- % (c, getattr(self, c, None), getattr(other, c, None)))
-
@property
- def nrows(self):
- return getattr(self.table, 'nrows', None)
-
- @property
- def nrows_expected(self):
- """ based on our axes, compute the expected nrows """
- return np.prod([i.cvalues.shape[0] for i in self.index_axes])
+ def shape(self):
+ return self.nrows
@property
- def table(self):
- """ return the table group """
- return getattr(self.group, 'table', None)
+ def pathname(self):
+ return self.group._v_pathname
@property
def handle(self):
@@ -1573,6 +1286,604 @@ def complib(self):
def attrs(self):
return self.group._v_attrs
+ def set_attrs(self):
+ """ set our object attributes """
+ pass
+
+ def get_attrs(self):
+ """ get our object attributes """
+ pass
+
+ @property
+ def storable(self):
+ """ return my storable """
+ return self.group
+
+ @property
+ def is_exists(self):
+ return False
+
+ @property
+ def nrows(self):
+ return getattr(self.storable,'nrows',None)
+
+ def validate(self, other):
+ """ validate against an existing storable """
+ if other is None: return
+ return True
+
+ def validate_version(self, where = None):
+ """ are we trying to operate on an old version? """
+ return True
+
+ def infer_axes(self):
+ """ infer the axes of my storer
+ return a boolean indicating if we have a valid storer or not """
+
+ s = self.storable
+ if s is None:
+ return False
+ self.get_attrs()
+ return True
+
+ def read(self, **kwargs):
+ raise NotImplementedError("cannot read on an abstract storer: subclasses should implement")
+
+ def write(self, **kwargs):
+ raise NotImplementedError("cannot write on an abstract storer: subclasses should implement")
+
+ def delete(self, where = None, **kwargs):
+ """ support fully deleting the node in its entirety (only) - where specification must be None """
+ if where is None:
+ self.handle.removeNode(self.group, recursive=True)
+ return None
+
+ raise NotImplementedError("cannot delete on an abstract storer")
+
+class GenericStorer(Storer):
+ """ a generified storer version """
+ _index_type_map = { DatetimeIndex: 'datetime',
+ PeriodIndex: 'period'}
+ _reverse_index_map = dict([ (v,k) for k, v in _index_type_map.iteritems() ])
+ attributes = []
+
+ # indexer helpers
+ def _class_to_alias(self, cls):
+ return self._index_type_map.get(cls, '')
+
+ def _alias_to_class(self, alias):
+ if isinstance(alias, type): # pragma: no cover
+ return alias # compat: for a short period of time master stored types
+ return self._reverse_index_map.get(alias, Index)
+
+ def _get_index_factory(self, klass):
+ if klass == DatetimeIndex:
+ def f(values, freq=None, tz=None):
+ return DatetimeIndex._simple_new(values, None, freq=freq,
+ tz=tz)
+ return f
+ return klass
+
+ @property
+ def is_exists(self):
+ return True
+
+ def get_attrs(self):
+ """ retrieve our attributes """
+ for n in self.attributes:
+ setattr(self,n,getattr(self.attrs, n, None))
+
+ def read_array(self, key):
+ """ read an array for the specified node (off of the group) """
+ import tables
+ node = getattr(self.group, key)
+ data = node[:]
+ attrs = node._v_attrs
+
+ transposed = getattr(attrs, 'transposed', False)
+
+ if isinstance(node, tables.VLArray):
+ ret = data[0]
+ else:
+ dtype = getattr(attrs, 'value_type', None)
+ shape = getattr(attrs, 'shape', None)
+
+ if shape is not None:
+ # length 0 axis
+ ret = np.empty(shape, dtype=dtype)
+ else:
+ ret = data
+
+ if dtype == 'datetime64':
+ ret = np.array(ret, dtype='M8[ns]')
+
+ if transposed:
+ return ret.T
+ else:
+ return ret
+
+ def read_index(self, key):
+ variety = getattr(self.attrs, '%s_variety' % key)
+
+ if variety == 'multi':
+ return self.read_multi_index(key)
+ elif variety == 'block':
+ return self.read_block_index(key)
+ elif variety == 'sparseint':
+ return self.read_sparse_intindex(key)
+ elif variety == 'regular':
+ _, index = self.read_index_node(getattr(self.group, key))
+ return index
+ else: # pragma: no cover
+ raise Exception('unrecognized index variety: %s' % variety)
+
+ def write_index(self, key, index):
+ if isinstance(index, MultiIndex):
+ setattr(self.attrs, '%s_variety' % key, 'multi')
+ self.write_multi_index(key, index)
+ elif isinstance(index, BlockIndex):
+ setattr(self.attrs, '%s_variety' % key, 'block')
+ self.write_block_index(key, index)
+ elif isinstance(index, IntIndex):
+ setattr(self.attrs, '%s_variety' % key, 'sparseint')
+ self.write_sparse_intindex(key, index)
+ else:
+ setattr(self.attrs, '%s_variety' % key, 'regular')
+ converted = _convert_index(index).set_name('index')
+ self.write_array(key, converted.values)
+ node = getattr(self.group, key)
+ node._v_attrs.kind = converted.kind
+ node._v_attrs.name = index.name
+
+ if isinstance(index, (DatetimeIndex, PeriodIndex)):
+ node._v_attrs.index_class = self._class_to_alias(type(index))
+
+ if hasattr(index, 'freq'):
+ node._v_attrs.freq = index.freq
+
+ if hasattr(index, 'tz') and index.tz is not None:
+ zone = tslib.get_timezone(index.tz)
+ if zone is None:
+ zone = tslib.tot_seconds(index.tz.utcoffset())
+ node._v_attrs.tz = zone
+
+
+ def write_block_index(self, key, index):
+ self.write_array('%s_blocs' % key, index.blocs)
+ self.write_array('%s_blengths' % key, index.blengths)
+ setattr(self.attrs, '%s_length' % key, index.length)
+
+ def read_block_index(self, key):
+ length = getattr(self.attrs, '%s_length' % key)
+ blocs = self.read_array('%s_blocs' % key)
+ blengths = self.read_array('%s_blengths' % key)
+ return BlockIndex(length, blocs, blengths)
+
+ def write_sparse_intindex(self, key, index):
+ self.write_array('%s_indices' % key, index.indices)
+ setattr(self.attrs, '%s_length' % key, index.length)
+
+ def read_sparse_intindex(self, key):
+ length = getattr(self.attrs, '%s_length' % key)
+ indices = self.read_array('%s_indices' % key)
+ return IntIndex(length, indices)
+
+ def write_multi_index(self, key, index):
+ setattr(self.attrs, '%s_nlevels' % key, index.nlevels)
+
+ for i, (lev, lab, name) in enumerate(zip(index.levels,
+ index.labels,
+ index.names)):
+ # write the level
+ level_key = '%s_level%d' % (key, i)
+ conv_level = _convert_index(lev).set_name(level_key)
+ self.write_array(level_key, conv_level.values)
+ node = getattr(self.group, level_key)
+ node._v_attrs.kind = conv_level.kind
+ node._v_attrs.name = name
+
+ # write the name
+ setattr(node._v_attrs, '%s_name%d' % (key, i), name)
+
+ # write the labels
+ label_key = '%s_label%d' % (key, i)
+ self.write_array(label_key, lab)
+
+ def read_multi_index(self, key):
+ nlevels = getattr(self.attrs, '%s_nlevels' % key)
+
+ levels = []
+ labels = []
+ names = []
+ for i in range(nlevels):
+ level_key = '%s_level%d' % (key, i)
+ name, lev = self.read_index_node(getattr(self.group, level_key))
+ levels.append(lev)
+ names.append(name)
+
+ label_key = '%s_label%d' % (key, i)
+ lab = self.read_array(label_key)
+ labels.append(lab)
+
+ return MultiIndex(levels=levels, labels=labels, names=names)
+
+ def read_index_node(self, node):
+ data = node[:]
+ kind = node._v_attrs.kind
+ name = None
+
+ if 'name' in node._v_attrs:
+ name = node._v_attrs.name
+
+ index_class = self._alias_to_class(getattr(node._v_attrs,
+ 'index_class', ''))
+ factory = self._get_index_factory(index_class)
+
+ kwargs = {}
+ if 'freq' in node._v_attrs:
+ kwargs['freq'] = node._v_attrs['freq']
+
+ if 'tz' in node._v_attrs:
+ kwargs['tz'] = node._v_attrs['tz']
+
+ if kind in ('date', 'datetime'):
+ index = factory(_unconvert_index(data, kind), dtype=object,
+ **kwargs)
+ else:
+ index = factory(_unconvert_index(data, kind), **kwargs)
+
+ index.name = name
+
+ return name, index
+
+ def write_array(self, key, value):
+ if key in self.group:
+ self.handle.removeNode(self.group, key)
+
+ # Transform needed to interface with pytables row/col notation
+ empty_array = any(x == 0 for x in value.shape)
+ transposed = False
+
+ if not empty_array:
+ value = value.T
+ transposed = True
+
+ if self.filters is not None:
+ atom = None
+ try:
+ # get the atom for this datatype
+ atom = _tables().Atom.from_dtype(value.dtype)
+ except ValueError:
+ pass
+
+ if atom is not None:
+ # create an empty chunked array and fill it from value
+ ca = self.handle.createCArray(self.group, key, atom,
+ value.shape,
+ filters=self.filters)
+ ca[:] = value
+ getattr(self.group, key)._v_attrs.transposed = transposed
+ return
+
+ if value.dtype.type == np.object_:
+ vlarr = self.handle.createVLArray(self.group, key,
+ _tables().ObjectAtom())
+ vlarr.append(value)
+ elif value.dtype.type == np.datetime64:
+ self.handle.createArray(self.group, key, value.view('i8'))
+ getattr(self.group, key)._v_attrs.value_type = 'datetime64'
+ else:
+ if empty_array:
+ # ugly hack for length 0 axes
+ arr = np.empty((1,) * value.ndim)
+ self.handle.createArray(self.group, key, arr)
+ getattr(self.group, key)._v_attrs.value_type = str(value.dtype)
+ getattr(self.group, key)._v_attrs.shape = value.shape
+ else:
+ self.handle.createArray(self.group, key, value)
+
+ getattr(self.group, key)._v_attrs.transposed = transposed
+
+class LegacyStorer(GenericStorer):
+
+ def read_index_legacy(self, key):
+ node = getattr(self.group,key)
+ data = node[:]
+ kind = node._v_attrs.kind
+ return _unconvert_index_legacy(data, kind)
+
+class LegacySeriesStorer(LegacyStorer):
+
+ def read(self, **kwargs):
+ index = self.read_index_legacy('index')
+ values = self.read_array('values')
+ return Series(values, index=index)
+
+class LegacyFrameStorer(LegacyStorer):
+
+ def read(self, **kwargs):
+ index = self.read_index_legacy('index')
+ columns = self.read_index_legacy('columns')
+ values = self.read_array('values')
+ return DataFrame(values, index=index, columns=columns)
+
+class SeriesStorer(GenericStorer):
+ pandas_kind = 'series'
+ attributes = ['name']
+
+ @property
+ def shape(self):
+ try:
+ return "[%s]" % len(getattr(self.group,'values',None))
+ except:
+ return None
+
+ def read(self, **kwargs):
+ index = self.read_index('index')
+ if len(index) > 0:
+ values = self.read_array('values')
+ else:
+ values = []
+
+ return Series(values, index=index, name=self.name)
+
+ def write(self, obj, **kwargs):
+ self.write_index('index', obj.index)
+ self.write_array('values', obj.values)
+ self.attrs.name = obj.name
+
+class SparseSeriesStorer(GenericStorer):
+ pandas_kind = 'sparse_series'
+ attributes = ['name','fill_value','kind']
+
+ def read(self, **kwargs):
+ index = self.read_index('index')
+ sp_values = self.read_array('sp_values')
+ sp_index = self.read_index('sp_index')
+ return SparseSeries(sp_values, index=index, sparse_index=sp_index,
+ kind=self.kind or 'block', fill_value=self.fill_value,
+ name=self.name)
+
+ def write(self, obj, **kwargs):
+ self.write_index('index', obj.index)
+ self.write_index('sp_index', obj.sp_index)
+ self.write_array('sp_values', obj.sp_values)
+ self.attrs.name = obj.name
+ self.attrs.fill_value = obj.fill_value
+ self.attrs.kind = obj.kind
+
+class SparseFrameStorer(GenericStorer):
+ pandas_kind = 'sparse_frame'
+ attributes = ['default_kind','default_fill_value']
+
+ def read(self, **kwargs):
+ columns = self.read_index('columns')
+ sdict = {}
+ for c in columns:
+ key = 'sparse_series_%s' % c
+ s = SparseSeriesStorer(self.parent, getattr(self.group,key))
+ s.infer_axes()
+ sdict[c] = s.read()
+ return SparseDataFrame(sdict, columns=columns,
+ default_kind=self.default_kind,
+ default_fill_value=self.default_fill_value)
+
+ def write(self, obj, **kwargs):
+ """ write it as a collection of individual sparse series """
+ for name, ss in obj.iteritems():
+ key = 'sparse_series_%s' % name
+ if key not in self.group._v_children:
+ node = self.handle.createGroup(self.group, key)
+ else:
+ node = getattr(self.group, key)
+ s = SparseSeriesStorer(self.parent, node)
+ s.write(ss)
+ self.attrs.default_fill_value = obj.default_fill_value
+ self.attrs.default_kind = obj.default_kind
+ self.write_index('columns', obj.columns)
+
+class SparsePanelStorer(GenericStorer):
+ pandas_kind = 'sparse_panel'
+ attributes = ['default_kind','default_fill_value']
+
+ def read(self, **kwargs):
+ items = self.read_index('items')
+
+ sdict = {}
+ for name in items:
+ key = 'sparse_frame_%s' % name
+ node = getattr(self.group, key)
+ s = SparseFrameStorer(self.parent, node)
+ s.infer_axes()
+ sdict[name] = s.read()
+ return SparsePanel(sdict, items=items, default_kind=self.default_kind,
+ default_fill_value=self.default_fill_value)
+
+ def write(self, obj, **kwargs):
+ self.attrs.default_fill_value = obj.default_fill_value
+ self.attrs.default_kind = obj.default_kind
+ self.write_index('items', obj.items)
+
+ for name, sdf in obj.iteritems():
+ key = 'sparse_frame_%s' % name
+ if key not in self.group._v_children:
+ node = self.handle.createGroup(self.group, key)
+ else:
+ node = getattr(self.group, key)
+ s = SparseFrameStorer(self.parent, node)
+ s.write(sdf)
+
+class BlockManagerStorer(GenericStorer):
+ attributes = ['ndim','nblocks']
+ is_shape_reversed = False
+
+ @property
+ def shape(self):
+ try:
+ ndim = self.ndim
+
+ # items
+ items = 0
+ for i in range(self.nblocks):
+ node = getattr(self.group, 'block%d_items' % i)
+ shape = getattr(node,'shape',None)
+ if shape is not None:
+ items += shape[0]
+
+ # data shape
+ node = getattr(self.group, 'block0_values')
+ shape = getattr(node,'shape',None)
+ if shape is not None:
+ shape = list(shape[0:(ndim-1)])
+ else:
+ shape = []
+
+ shape.append(items)
+
+ # hacky - this works for frames, but is reversed for panels
+ if self.is_shape_reversed:
+ shape = shape[::-1]
+
+ return "[%s]" % ','.join([ str(x) for x in shape ])
+ except:
+ return None
+
+ def read(self, **kwargs):
+ axes = []
+ for i in xrange(self.ndim):
+ ax = self.read_index('axis%d' % i)
+ axes.append(ax)
+
+ items = axes[0]
+ blocks = []
+ for i in range(self.nblocks):
+ blk_items = self.read_index('block%d_items' % i)
+ values = self.read_array('block%d_values' % i)
+ blk = make_block(values, blk_items, items)
+ blocks.append(blk)
+
+ return self.obj_type(BlockManager(blocks, axes))
+
+ def write(self, obj, **kwargs):
+ data = obj._data
+ if not data.is_consolidated():
+ data = data.consolidate()
+
+ self.attrs.ndim = data.ndim
+ for i, ax in enumerate(data.axes):
+ self.write_index('axis%d' % i, ax)
+
+ # Supporting mixed-type DataFrame objects...nontrivial
+ self.attrs.nblocks = nblocks = len(data.blocks)
+ for i in range(nblocks):
+ blk = data.blocks[i]
+ # I have no idea why, but writing values before items fixed #2299
+ self.write_array('block%d_values' % i, blk.values)
+ self.write_index('block%d_items' % i, blk.items)
+
+class FrameStorer(BlockManagerStorer):
+ pandas_kind = 'frame'
+ obj_type = DataFrame
+
+class PanelStorer(BlockManagerStorer):
+ pandas_kind = 'wide'
+ obj_type = Panel
+ is_shape_reversed = True
+
+ def write(self, obj, **kwargs):
+ obj._consolidate_inplace()
+ return super(PanelStorer, self).write(obj, **kwargs)
+
+class Table(Storer):
+ """ represent a table:
+ facilitate read/write of various types of tables
+
+ Attrs in Table Node
+ -------------------
+ These are attributes that are stored in the main table node; they are necessary
+ to recreate these tables when read back in.
+
+ index_axes : a list of tuples of the (original indexing axis and index column)
+ non_index_axes: a list of tuples of the (original index axis and columns on a non-indexing axis)
+ values_axes : a list of the columns which comprise the data of this table
+ data_columns : a list of the columns that we are allowing indexing (these become single columns in values_axes)
+ nan_rep : the string to use for nan representations for string objects
+ levels : the names of levels
+
+ """
+ pandas_kind = 'wide_table'
+ table_type = None
+ levels = 1
+ is_table = True
+
+ def __init__(self, *args, **kwargs):
+ super(Table, self).__init__(*args, **kwargs)
+ self.index_axes = []
+ self.non_index_axes = []
+ self.values_axes = []
+ self.data_columns = []
+ self.nan_rep = None
+ self.selection = None
+
+ @property
+ def table_type_short(self):
+ return self.table_type.split('_')[0]
+
+ def __repr__(self):
+ """ return a pretty representation of myself """
+ self.infer_axes()
+ dc = ",dc->[%s]" % ','.join(self.data_columns) if len(self.data_columns) else ''
+
+ ver = ''
+ if self.is_old_version:
+ ver = "[%s]" % '.'.join([ str(x) for x in self.version ])
+
+ return "%-12.12s%s (typ->%s,nrows->%s,ncols->%s,indexers->[%s]%s)" % (self.pandas_type,
+ ver,
+ self.table_type_short,
+ self.nrows,
+ self.ncols,
+ ','.join([ a.name for a in self.index_axes ]),
+ dc)
+
+ def __getitem__(self, c):
+ """ return the axis for c """
+ for a in self.axes:
+ if c == a.name:
+ return a
+ return None
+
+ def validate(self, other):
+ """ validate against an existing table """
+ if other is None: return
+
+ if other.table_type != self.table_type:
+ raise TypeError("incompatible table_type with existing [%s - %s]" %
+ (other.table_type, self.table_type))
+
+ for c in ['index_axes','non_index_axes','values_axes']:
+ if getattr(self,c,None) != getattr(other,c,None):
+ raise Exception("invalid combination of [%s] on appending data [%s] vs current table [%s]" % (c,getattr(self,c,None),getattr(other,c,None)))
+
+ @property
+ def nrows_expected(self):
+ """ based on our axes, compute the expected nrows """
+ return np.prod([ i.cvalues.shape[0] for i in self.index_axes ])
+
+ @property
+ def is_exists(self):
+ """ has this table been created """
+ return 'table' in self.group
+
+ @property
+ def storable(self):
+ return getattr(self.group,'table',None)
+
+ @property
+ def table(self):
+ """ return the table group (this is my storable) """
+ return self.storable
+
@property
def description(self):
return self.table.description
@@ -1581,6 +1892,11 @@ def description(self):
def axes(self):
return itertools.chain(self.index_axes, self.values_axes)
+ @property
+ def ncols(self):
+ """ the number of total columns in the values axes """
+ return sum([ len(a.values) for a in self.values_axes ])
+
@property
def is_transposed(self):
return False
@@ -1617,12 +1933,22 @@ def set_attrs(self):
self.attrs.nan_rep = self.nan_rep
self.attrs.levels = self.levels
- def validate_version(self, where=None):
+ def get_attrs(self):
+ """ retrieve our attributes """
+ self.non_index_axes = getattr(self.attrs,'non_index_axes',None) or []
+ self.data_columns = getattr(self.attrs,'data_columns',None) or []
+ self.nan_rep = getattr(self.attrs,'nan_rep',None)
+ self.levels = getattr(self.attrs,'levels',None) or []
+ t = self.table
+ self.index_axes = [ a.infer(t) for a in self.indexables if a.is_an_indexable ]
+ self.values_axes = [ a.infer(t) for a in self.indexables if not a.is_an_indexable ]
+
+ def validate_version(self, where = None):
""" are we trying to operate on an old version? """
if where is not None:
if self.version[0] <= 0 and self.version[1] <= 10 and self.version[2] < 1:
- warnings.warn("where criteria is being ignored as we this version is too old (or not-defined) [%s]"
- % '.'.join([str(x) for x in self.version]), IncompatibilityWarning)
+ ws = incompatibility_doc % '.'.join([ str(x) for x in self.version ])
+ warnings.warn(ws, IncompatibilityWarning)
@property
def indexables(self):
@@ -1730,27 +2056,6 @@ def read_axes(self, where, **kwargs):
return True
- def infer_axes(self):
- """ infer the axes from the indexables:
- return a boolean indicating if we have a valid table or not """
-
- table = self.table
- if table is None:
- return False
-
- self.non_index_axes = getattr(
- self.attrs, 'non_index_axes', None) or []
- self.data_columns = getattr(
- self.attrs, 'data_columns', None) or []
- self.nan_rep = getattr(self.attrs, 'nan_rep', None)
- self.levels = getattr(
- self.attrs, 'levels', None) or []
- self.index_axes = [a.infer(
- self.table) for a in self.indexables if a.is_an_indexable]
- self.values_axes = [a.infer(
- self.table) for a in self.indexables if not a.is_an_indexable]
- return True
-
def get_object(self, obj):
""" return the data for this obj """
return obj
@@ -1770,13 +2075,18 @@ def create_axes(self, axes, obj, validate=True, nan_rep=None, data_columns=None,
"""
+ # set the default axes if needed
+ if axes is None:
+ axes = _AXES_MAP[type(obj)]
+
# map axes to numbers
axes = [obj._get_axis_number(a) for a in axes]
# do we have an existing table (if so, use its axes & data_columns)
if self.infer_axes():
existing_table = self.copy()
- axes = [a.axis for a in existing_table.index_axes]
+ existing_table.infer_axes()
+ axes = [ a.axis for a in existing_table.index_axes]
data_columns = existing_table.data_columns
nan_rep = existing_table.nan_rep
else:
@@ -1940,10 +2250,6 @@ def create_description(self, complib=None, complevel=None, fletcher32=False, exp
return d
- def read(self, **kwargs):
- raise NotImplementedError(
- "cannot read on an abstract table: subclasses should implement")
-
def read_coordinates(self, where=None, **kwargs):
""" select coordinates (row numbers) from a table; return the coordinates object """
@@ -1981,18 +2287,6 @@ def read_column(self, column, **kwargs):
raise KeyError("column [%s] not found in the table" % column)
- def write(self, **kwargs):
- raise NotImplementedError("cannot write on an abstract table")
-
- def delete(self, where=None, **kwargs):
- """ support fully deleting the node in its entirety (only) - where specification must be None """
- if where is None:
- self.handle.removeNode(self.group, recursive=True)
- return None
-
- raise NotImplementedError("cannot delete on an abstract table")
-
-
class WORMTable(Table):
""" a write-once read-many table: this format DOES NOT ALLOW appending to a
table. writing is a one-time operation the data are stored in a format
@@ -2113,6 +2407,7 @@ def read(self, where=None, columns=None, **kwargs):
class LegacyFrameTable(LegacyTable):
""" support the legacy frame table """
+ pandas_kind = 'frame_table'
table_type = 'legacy_frame'
obj_type = Panel
@@ -2131,20 +2426,18 @@ class AppendableTable(LegacyTable):
_indexables = None
table_type = 'appendable'
- def write(self, axes, obj, append=False, complib=None,
+ def write(self, obj, axes=None, append=False, complib=None,
complevel=None, fletcher32=None, min_itemsize=None, chunksize=50000,
expectedrows=None, **kwargs):
- # create the table if it doesn't exist (or get it if it does)
- if not append:
- if 'table' in self.group:
- self.handle.removeNode(self.group, 'table')
+ if not append and self.is_exists:
+ self.handle.removeNode(self.group, 'table')
# create the axes
self.create_axes(axes=axes, obj=obj, validate=append,
min_itemsize=min_itemsize, **kwargs)
- if 'table' not in self.group:
+ if not self.is_exists:
# create the table
options = self.create_description(complib=complib,
@@ -2280,6 +2573,7 @@ def delete(self, where=None, **kwargs):
class AppendableFrameTable(AppendableTable):
""" suppor the new appendable table formats """
+ pandas_kind = 'frame_table'
table_type = 'appendable_frame'
ndim = 2
obj_type = DataFrame
@@ -2380,50 +2674,6 @@ class AppendableNDimTable(AppendablePanelTable):
ndim = 4
obj_type = Panel4D
-# table maps
-_TABLE_MAP = {
- 'appendable_frame': AppendableFrameTable,
- 'appendable_multiframe': AppendableMultiFrameTable,
- 'appendable_panel': AppendablePanelTable,
- 'appendable_ndim': AppendableNDimTable,
- 'worm': WORMTable,
- 'legacy_frame': LegacyFrameTable,
- 'legacy_panel': LegacyPanelTable,
- 'default': AppendablePanelTable,
-}
-
-
-def create_table(parent, group, typ=None, **kwargs):
- """ return a suitable Table class to operate """
-
- pt = getattr(group._v_attrs, 'pandas_type', None)
- tt = getattr(group._v_attrs, 'table_type', None) or typ
-
- # a new node
- if pt is None:
-
- return (_TABLE_MAP.get(typ) or _TABLE_MAP.get('default'))(parent, group, **kwargs)
-
- # existing node (legacy)
- if tt is None:
-
- # distiguish between a frame/table
- tt = 'legacy_panel'
- try:
- fields = group.table._v_attrs.fields
- if len(fields) == 1 and fields[0] == 'value':
- tt = 'legacy_frame'
- except:
- pass
-
- return _TABLE_MAP.get(tt)(parent, group, **kwargs)
-
-
-def _itemsize_string_array(arr):
- """ return the maximum size of elements in a strnig array """
- return max([str_len(arr[v].ravel()).max() for v in range(arr.shape[0])])
-
-
def _convert_index(index):
if isinstance(index, DatetimeIndex):
converted = index.asi8
@@ -2472,36 +2722,6 @@ def _convert_index(index):
atom = _tables().ObjectAtom()
return IndexCol(np.asarray(values, dtype='O'), 'object', atom)
-
-def _read_array(group, key):
- import tables
- node = getattr(group, key)
- data = node[:]
- attrs = node._v_attrs
-
- transposed = getattr(attrs, 'transposed', False)
-
- if isinstance(node, tables.VLArray):
- ret = data[0]
- else:
- dtype = getattr(attrs, 'value_type', None)
- shape = getattr(attrs, 'shape', None)
-
- if shape is not None:
- # length 0 axis
- ret = np.empty(shape, dtype=dtype)
- else:
- ret = data
-
- if dtype == 'datetime64':
- ret = np.array(ret, dtype='M8[ns]')
-
- if transposed:
- return ret.T
- else:
- return ret
-
-
def _unconvert_index(data, kind):
if kind == 'datetime64':
index = DatetimeIndex(data)
@@ -2518,7 +2738,6 @@ def _unconvert_index(data, kind):
raise ValueError('unrecognized index type %s' % kind)
return index
-
def _unconvert_index_legacy(data, kind, legacy=False):
if kind == 'datetime':
index = lib.time64_to_datetime(data)
@@ -2528,7 +2747,6 @@ def _unconvert_index_legacy(data, kind, legacy=False):
raise ValueError('unrecognized index type %s' % kind)
return index
-
def _maybe_convert(values, val_kind):
if _need_convert(val_kind):
conv = _get_converter(val_kind)
@@ -2536,7 +2754,6 @@ def _maybe_convert(values, val_kind):
values = conv(values)
return values
-
def _get_converter(kind):
if kind == 'datetime64':
return lambda x: np.array(x, dtype='M8[ns]')
@@ -2545,38 +2762,11 @@ def _get_converter(kind):
else: # pragma: no cover
raise ValueError('invalid kind %s' % kind)
-
def _need_convert(kind):
if kind in ('datetime', 'datetime64'):
return True
return False
-
-def _is_table_type(group):
- try:
- return 'table' in group._v_attrs.pandas_type
- except AttributeError:
- # new node, e.g.
- return False
-
-_index_type_map = {DatetimeIndex: 'datetime',
- PeriodIndex: 'period'}
-
-_reverse_index_map = {}
-for k, v in _index_type_map.iteritems():
- _reverse_index_map[v] = k
-
-
-def _class_to_alias(cls):
- return _index_type_map.get(cls, '')
-
-
-def _alias_to_class(alias):
- if isinstance(alias, type): # pragma: no cover
- return alias # compat: for a short period of time master stored types
- return _reverse_index_map.get(alias, Index)
-
-
class Term(object):
""" create a term object that holds a field, op, and value
@@ -2744,7 +2934,7 @@ def eval(self):
def convert_value(self, v):
""" convert the expression that is in the term to something that is accepted by pytables """
- if self.kind == 'datetime64':
+ if self.kind == 'datetime64' or self.kind == 'datetime' :
return [lib.Timestamp(v).value, None]
elif isinstance(v, datetime) or hasattr(v, 'timetuple') or self.kind == 'date':
return [time.mktime(v.timetuple()), None]
@@ -2850,10 +3040,3 @@ def select_coords(self):
return self.table.table.getWhereList(self.condition, start=self.start, stop=self.stop, sort=True)
-def _get_index_factory(klass):
- if klass == DatetimeIndex:
- def f(values, freq=None, tz=None):
- return DatetimeIndex._simple_new(values, None, freq=freq,
- tz=tz)
- return f
- return klass
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index f2490505b89c6..cfa0f2e2bfe05 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -37,7 +37,10 @@ def setUp(self):
def tearDown(self):
self.store.close()
- os.remove(self.path)
+ try:
+ os.remove(self.path)
+ except:
+ pass
def test_factory_fun(self):
try:
@@ -73,6 +76,26 @@ def test_repr(self):
self.store['d'] = tm.makePanel()
self.store['foo/bar'] = tm.makePanel()
self.store.append('e', tm.makePanel())
+
+ df = tm.makeDataFrame()
+ df['obj1'] = 'foo'
+ df['obj2'] = 'bar'
+ df['bool1'] = df['A'] > 0
+ df['bool2'] = df['B'] > 0
+ df['bool3'] = True
+ df['int1'] = 1
+ df['int2'] = 2
+ df['timestamp1'] = Timestamp('20010102')
+ df['timestamp2'] = Timestamp('20010103')
+ df['datetime1'] = datetime.datetime(2001,1,2,0,0)
+ df['datetime2'] = datetime.datetime(2001,1,3,0,0)
+ df.ix[3:6,['obj1']] = np.nan
+ df = df.consolidate().convert_objects()
+ self.store['df'] = df
+
+ # make a random group in hdf space
+ self.store.handle.createGroup(self.store.handle.root,'bah')
+
repr(self.store)
str(self.store)
@@ -172,11 +195,11 @@ def test_put(self):
# node does not currently exist, test _is_table_type returns False in
# this case
- self.assertRaises(
- ValueError, self.store.put, 'f', df[10:], append=True)
+ #self.store.remove('f')
+ #self.assertRaises(ValueError, self.store.put, 'f', df[10:], append=True)
- # OK
- self.store.put('c', df[10:], append=True)
+ # can't put to a table (use append instead)
+ self.assertRaises(ValueError, self.store.put, 'c', df[10:], append=True)
# overwrite table
self.store.put('c', df[:10], table=True, append=False)
@@ -398,9 +421,8 @@ def test_append_with_strings(self):
wp2 = wp.rename_axis(
dict([(x, "%s_extra" % x) for x in wp.minor_axis]), axis=2)
- def check_col(key, name, size):
- self.assert_(getattr(self.store.get_table(
- key).table.description, name).itemsize == size)
+ def check_col(key,name,size):
+ self.assert_(getattr(self.store.get_storer(key).table.description,name).itemsize == size)
self.store.append('s1', wp, min_itemsize=20)
self.store.append('s1', wp2)
@@ -499,9 +521,8 @@ def test_append_with_data_columns(self):
tm.assert_frame_equal(result, expected)
# using min_itemsize and a data column
- def check_col(key, name, size):
- self.assert_(getattr(self.store.get_table(
- key).table.description, name).itemsize == size)
+ def check_col(key,name,size):
+ self.assert_(getattr(self.store.get_storer(key).table.description,name).itemsize == size)
self.store.remove('df')
self.store.append('df', df_new, data_columns=['string'],
@@ -575,8 +596,8 @@ def check_col(key, name, size):
def test_create_table_index(self):
- def col(t, column):
- return getattr(self.store.get_table(t).table.cols, column)
+ def col(t,column):
+ return getattr(self.store.get_storer(t).table.cols,column)
# index=False
wp = tm.makePanel()
@@ -591,7 +612,7 @@ def col(t, column):
assert(col('p5i', 'minor_axis').is_indexed is True)
# default optlevels
- self.store.get_table('p5').create_index()
+ self.store.get_storer('p5').create_index()
assert(col('p5', 'major_axis').index.optlevel == 6)
assert(col('p5', 'minor_axis').index.kind == 'medium')
@@ -626,6 +647,7 @@ def col(t, column):
assert(col('f2', 'string2').is_indexed is False)
# try to index a non-table
+ self.store.remove('f2')
self.store.put('f2', df)
self.assertRaises(Exception, self.store.create_table_index, 'f2')
@@ -761,6 +783,20 @@ def test_append_hierarchical(self):
def test_append_misc(self):
+ # unsupported data types for non-tables
+ p4d = tm.makePanel4D()
+ self.assertRaises(Exception, self.store.put,'p4d',p4d)
+
+ # unsupported data type for table
+ s = tm.makeStringSeries()
+ self.assertRaises(Exception, self.store.append,'s',s)
+
+ # unsupported data types
+ self.assertRaises(Exception, self.store.put,'abc',None)
+ self.assertRaises(Exception, self.store.put,'abc','123')
+ self.assertRaises(Exception, self.store.put,'abc',123)
+ self.assertRaises(Exception, self.store.put,'abc',np.arange(5))
+
df = tm.makeDataFrame()
self.store.append('df', df, chunksize=1)
result = self.store.select('df')
@@ -1421,6 +1457,14 @@ def test_select(self):
expected = df[df.A > 0].reindex(columns=['C', 'D'])
tm.assert_frame_equal(expected, result)
+ # with a Timestamp data column (GH #2637)
+ df = DataFrame(dict(ts=bdate_range('2012-01-01', periods=300), A=np.random.randn(300)))
+ self.store.remove('df')
+ self.store.append('df', df, data_columns=['ts', 'A'])
+ result = self.store.select('df', [Term('ts', '>=', Timestamp('2012-02-01'))])
+ expected = df[df.ts >= Timestamp('2012-02-01')]
+ tm.assert_frame_equal(expected, result)
+
def test_panel_select(self):
wp = tm.makePanel()
self.store.put('wp', wp, table=True)
@@ -1735,6 +1779,66 @@ def test_legacy_0_10_read(self):
store.select(k)
store.close()
+ def test_copy(self):
+ pth = curpath()
+ def do_copy(f = None, new_f = None, keys = None, propindexes = True, **kwargs):
+ try:
+ import os
+
+ if f is None:
+ f = os.path.join(pth, 'legacy_0.10.h5')
+
+ store = HDFStore(f, 'r')
+
+ if new_f is None:
+ import tempfile
+ new_f = tempfile.mkstemp()[1]
+
+ tstore = store.copy(new_f, keys = keys, propindexes = propindexes, **kwargs)
+
+ # check keys
+ if keys is None:
+ keys = store.keys()
+ self.assert_(set(keys) == set(tstore.keys()))
+
+ # check indices & nrows
+ for k in tstore.keys():
+ if tstore.is_table(k):
+ new_t = tstore.get_storer(k)
+ orig_t = store.get_storer(k)
+
+ self.assert_(orig_t.nrows == new_t.nrows)
+ for a in orig_t.axes:
+ if a.is_indexed:
+ self.assert_(new_t[a.name].is_indexed == True)
+
+ except:
+ pass
+ finally:
+ store.close()
+ tstore.close()
+ import os
+ try:
+ os.remove(new_f)
+ except:
+ pass
+
+ do_copy()
+ do_copy(keys = ['df'])
+ do_copy(propindexes = False)
+
+ # new table
+ df = tm.makeDataFrame()
+ try:
+ st = HDFStore(self.scratchpath)
+ st.append('df', df, data_columns = ['A'])
+ st.close()
+ do_copy(f = self.scratchpath)
+ do_copy(f = self.scratchpath, propindexes = False)
+ finally:
+ import os
+ os.remove(self.scratchpath)
+
def test_legacy_table_write(self):
raise nose.SkipTest
# legacy table types
| - added copy method to allow file upgrades to new version
- refactor to add Storer class to represent all non-tables
- add shape field to display for non-tables
- close GH #2637
| https://api.github.com/repos/pandas-dev/pandas/pulls/2607 | 2012-12-28T16:51:51Z | 2013-01-06T04:33:36Z | 2013-01-06T04:33:35Z | 2014-06-24T17:11:46Z |
CLN: was accidently writing to the legacy_table.h5 file | diff --git a/pandas/io/tests/legacy_table.h5 b/pandas/io/tests/legacy_table.h5
index 5f4089efc15c3..1c90382d9125c 100644
Binary files a/pandas/io/tests/legacy_table.h5 and b/pandas/io/tests/legacy_table.h5 differ
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 6f11ebdaaa7b3..86b183c7bfc76 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -1654,11 +1654,11 @@ def test_legacy_table_read(self):
# force the frame
store.select('df2', typ = 'legacy_frame')
- self.assertRaises(Exception, store.select, 'wp1', Term('minor_axis','=','B'))
-
# old version warning
import warnings
warnings.filterwarnings('ignore', category=IncompatibilityWarning)
+ self.assertRaises(Exception, store.select, 'wp1', Term('minor_axis','=','B'))
+
df2 = store.select('df2')
store.select('df2', Term('index', '>', df2.index[2]))
warnings.filterwarnings('always', category=IncompatibilityWarning)
@@ -1674,16 +1674,17 @@ def test_legacy_0_10_read(self):
store.close()
def test_legacy_table_write(self):
+ raise nose.SkipTest
# legacy table types
pth = curpath()
df = tm.makeDataFrame()
wp = tm.makePanel()
store = HDFStore(os.path.join(pth, 'legacy_table.h5'), 'a')
-
+
self.assertRaises(Exception, store.append, 'df1', df)
self.assertRaises(Exception, store.append, 'wp1', wp)
-
+
store.close()
def test_store_datetime_fractional_secs(self):
| https://api.github.com/repos/pandas-dev/pandas/pulls/2606 | 2012-12-28T15:29:19Z | 2012-12-28T15:33:52Z | null | 2012-12-28T15:33:52Z | |
BUG: provide for automatic conversion of object -> datetime64 | diff --git a/pandas/core/common.py b/pandas/core/common.py
index 15f6cb6412c78..c7a0dd6c6e179 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -628,6 +628,28 @@ def _consensus_name_attr(objs):
#----------------------------------------------------------------------
# Lots of little utilities
+def _possibly_cast_to_datetime(value, dtype):
+ """ try to cast the array/value to a datetimelike dtype, converting float nan to iNaT """
+
+ if dtype == 'M8[ns]':
+ if np.isscalar(value):
+ if value == tslib.iNaT or isnull(value):
+ value = tslib.iNaT
+ else:
+ value = np.array(value)
+
+ # have a scalar array-like (e.g. NaT)
+ if value.ndim == 0:
+ value = tslib.iNaT
+
+ # we have an array of datetime & nulls
+ elif np.prod(value.shape):
+ try:
+ value = tslib.array_to_datetime(value)
+ except:
+ pass
+
+ return value
def _infer_dtype(value):
if isinstance(value, (float, np.floating)):
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index b3fef8943baf3..49b667b7f0bb4 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4218,7 +4218,13 @@ def applymap(self, func):
-------
applied : DataFrame
"""
- return self.apply(lambda x: lib.map_infer(x, func))
+
+ # if we have a dtype == 'M8[ns]', provide boxed values
+ def infer(x):
+ if x.dtype == 'M8[ns]':
+ x = lib.map_infer(x, lib.Timestamp)
+ return lib.map_infer(x, func)
+ return self.apply(infer)
#----------------------------------------------------------------------
# Merging / joining methods
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index f8c977a3b9015..53eb18c12f172 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -111,6 +111,7 @@ def _setitem_with_indexer(self, indexer, value):
data = self.obj[item]
values = data.values
if np.prod(values.shape):
+ value = com._possibly_cast_to_datetime(value,getattr(data,'dtype',None))
values[plane_indexer] = value
except ValueError:
for item, v in zip(item_labels[het_idx], value):
@@ -118,6 +119,7 @@ def _setitem_with_indexer(self, indexer, value):
values = data.values
if np.prod(values.shape):
values[plane_indexer] = v
+
else:
if isinstance(indexer, tuple):
indexer = _maybe_convert_ix(*indexer)
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 639141e4edba6..57844656bf113 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -446,6 +446,7 @@ def get_values(self, dtype):
def make_block(values, items, ref_items):
dtype = values.dtype
vtype = dtype.type
+ klass = None
if issubclass(vtype, np.floating):
klass = FloatBlock
@@ -459,7 +460,21 @@ def make_block(values, items, ref_items):
klass = IntBlock
elif dtype == np.bool_:
klass = BoolBlock
- else:
+
+ # try to infer a datetimeblock
+ if klass is None and np.prod(values.shape):
+ flat = values.flatten()
+ inferred_type = lib.infer_dtype(flat)
+ if inferred_type == 'datetime':
+
+ # we have an object array that has been inferred as datetime, so convert it
+ try:
+ values = tslib.array_to_datetime(flat).reshape(values.shape)
+ klass = DatetimeBlock
+ except: # it's already object, so leave it
+ pass
+
+ if klass is None:
klass = ObjectBlock
return klass(values, items, ref_items, ndim=values.ndim)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 6cf511d32bfb3..7ffdc1051ee63 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2983,12 +2983,13 @@ def _sanitize_array(data, index, dtype=None, copy=False,
def _try_cast(arr):
try:
- subarr = np.array(data, dtype=dtype, copy=copy)
+ arr = com._possibly_cast_to_datetime(arr, dtype)
+ subarr = np.array(arr, dtype=dtype, copy=copy)
except (ValueError, TypeError):
if dtype is not None and raise_cast_failure:
raise
else: # pragma: no cover
- subarr = np.array(data, dtype=object, copy=copy)
+ subarr = np.array(arr, dtype=object, copy=copy)
return subarr
# GH #846
@@ -3047,6 +3048,8 @@ def _try_cast(arr):
value, dtype = _dtype_from_scalar(value)
subarr = np.empty(len(index), dtype=dtype)
else:
+ # need to possibly convert the value here
+ value = com._possibly_cast_to_datetime(value, dtype)
subarr = np.empty(len(index), dtype=dtype)
subarr.fill(value)
else:
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index cf485f70ffbc8..462812296c9da 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -21,7 +21,7 @@
import pandas.core.format as fmt
import pandas.core.datetools as datetools
from pandas.core.api import (DataFrame, Index, Series, notnull, isnull,
- MultiIndex, DatetimeIndex)
+ MultiIndex, DatetimeIndex, Timestamp)
from pandas.io.parsers import read_csv
from pandas.util.testing import (assert_almost_equal,
@@ -1073,6 +1073,36 @@ def test_setitem_single_column_mixed(self):
expected = [nan, 'qux', nan, 'qux', nan]
assert_almost_equal(df['str'].values, expected)
+ def test_setitem_single_column_mixed_datetime(self):
+ df = DataFrame(randn(5, 3), index=['a', 'b', 'c', 'd', 'e'],
+ columns=['foo', 'bar', 'baz'])
+
+ df['timestamp'] = Timestamp('20010102')
+
+ # check our dtypes
+ result = df.get_dtype_counts()
+ expected = Series({ 'float64' : 3, 'datetime64[ns]' : 1})
+ assert_series_equal(result, expected)
+
+ # set an allowable datetime64 type
+ from pandas import tslib
+ df.ix['b','timestamp'] = tslib.iNaT
+ self.assert_(com.isnull(df.ix['b','timestamp']))
+
+ # allow this syntax
+ df.ix['c','timestamp'] = nan
+ self.assert_(com.isnull(df.ix['c','timestamp']))
+
+ # allow this syntax
+ df.ix['d',:] = nan
+ self.assert_(com.isnull(df.ix['c',:]).all() == False)
+
+ # try to set with a list like item
+ self.assertRaises(Exception, df.ix.__setitem__, ('d','timestamp'), [nan])
+
+ # prior to 0.10.1 this failed
+ #self.assertRaises(TypeError, df.ix.__setitem__, ('c','timestamp'), nan)
+
def test_setitem_frame(self):
piece = self.frame.ix[:2, ['A', 'B']]
self.frame.ix[-2:, ['A', 'B']] = piece.values
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 1b0065c18923f..111b1e69bb823 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -351,6 +351,26 @@ def test_constructor_dtype_nocast(self):
s2[1] = 5
self.assertEquals(s[1], 5)
+ def test_constructor_dtype_datetime64(self):
+ import pandas.tslib as tslib
+
+ s = Series(tslib.iNaT,dtype='M8[ns]',index=range(5))
+ self.assert_(isnull(s).all() == True)
+
+ s = Series(tslib.NaT,dtype='M8[ns]',index=range(5))
+ self.assert_(isnull(s).all() == True)
+
+ s = Series(nan,dtype='M8[ns]',index=range(5))
+ self.assert_(isnull(s).all() == True)
+
+ s = Series([ datetime(2001,1,2,0,0), tslib.iNaT ],dtype='M8[ns]')
+ self.assert_(isnull(s[1]) == True)
+ self.assert_(s.dtype == 'M8[ns]')
+
+ s = Series([ datetime(2001,1,2,0,0), nan ],dtype='M8[ns]')
+ self.assert_(isnull(s[1]) == True)
+ self.assert_(s.dtype == 'M8[ns]')
+
def test_constructor_dict(self):
d = {'a' : 0., 'b' : 1., 'c' : 2.}
result = Series(d, index=['b', 'c', 'd', 'a'])
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 855bbd02489bb..60b2e989ea683 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -1235,9 +1235,9 @@ def test_append_concat(self):
def test_set_dataframe_column_ns_dtype(self):
x = DataFrame([datetime.now(), datetime.now()])
- self.assert_(x[0].dtype == object)
+ #self.assert_(x[0].dtype == object)
- x[0] = to_datetime(x[0])
+ #x[0] = to_datetime(x[0])
self.assert_(x[0].dtype == np.dtype('M8[ns]'))
def test_groupby_count_dateparseerror(self):
@@ -2066,10 +2066,11 @@ def test_get_level_values_box(self):
def test_frame_apply_dont_convert_datetime64(self):
from pandas.tseries.offsets import BDay
df = DataFrame({'x1': [datetime(1996,1,1)]})
+
df = df.applymap(lambda x: x+BDay())
df = df.applymap(lambda x: x+BDay())
- self.assertTrue(df.x1.dtype == object)
+ self.assertTrue(df.x1.dtype == 'M8[ns]')
class TestLegacyCompat(unittest.TestCase):
| - construct Series that have M8[ns] dtype even in the presence of NaN (rather than object);
see below.
- allow np.nan to be used in place of NaT in Series construction (requires an explicit dtype to be passed),
e.g. `Series(np.nan,index=range(5),dtype='M8[ns]')`
- bug fix in applymap (tseries/tests/test_timeseries/test_frame_apply_dont_convert_datetime64): the explicit
conversion of the series to M8[ns] now has the function operating on datetime64 values, and ufuncs don't work on those for some reason,
so an explicit conversion to Timestamp values is required to make this work (same issue as in GH #2591)
```
In [1]: import numpy as np
In [2]: import pandas as pd
```
This currently works
```
In [12]: df = pd.DataFrame(np.random.randn(8,2),pd.date_range('20010102',periods=8),columns=['A','B'])
In [13]: df['timestamp'] = pd.Timestamp('20010103')
In [14]: x = df.convert_objects()
In [15]: x
Out[15]:
A B timestamp
2001-01-02 -0.228005 -1.376432 2001-01-03 00:00:00
2001-01-03 -0.555251 0.277356 2001-01-03 00:00:00
2001-01-04 -0.295289 0.534981 2001-01-03 00:00:00
2001-01-05 -0.332815 -1.436975 2001-01-03 00:00:00
2001-01-06 -1.848818 0.405773 2001-01-03 00:00:00
2001-01-07 -0.734903 -0.174140 2001-01-03 00:00:00
2001-01-08 -0.388864 -0.765329 2001-01-03 00:00:00
2001-01-09 -0.341015 0.170630 2001-01-03 00:00:00
In [16]: x.get_dtype_counts()
Out[16]:
datetime64[ns] 1
float64 2
In [21]: x.ix[3,'timestamp'] = iNaT
In [22]: x
Out[22]:
A B timestamp
2001-01-02 -0.228005 -1.376432 2001-01-03 00:00:00
2001-01-03 -0.555251 0.277356 2001-01-03 00:00:00
2001-01-04 -0.295289 0.534981 2001-01-03 00:00:00
2001-01-05 -0.332815 -1.436975 NaT
2001-01-06 -1.848818 0.405773 2001-01-03 00:00:00
2001-01-07 -0.734903 -0.174140 2001-01-03 00:00:00
2001-01-08 -0.388864 -0.765329 2001-01-03 00:00:00
2001-01-09 -0.341015 0.170630 2001-01-03 00:00:00
```
This leaves you in limbo, as you basically have a datetime64 column, but with a float nan in it (so it has to stay object)
```
In [3]: df = pd.DataFrame(np.random.randn(8,2),pd.date_range('20010102',periods=8),columns=['A','B'])
In [4]: df
Out[4]:
A B
2001-01-02 0.939393 -2.524448
2001-01-03 -1.059561 0.104651
2001-01-04 -0.842478 1.033888
2001-01-05 -1.009903 -0.334782
2001-01-06 -0.452043 -0.382408
2001-01-07 -0.058516 1.162884
2001-01-08 0.660251 0.688290
2001-01-09 0.069637 0.366915
In [5]: df['timestamp'] = pd.Timestamp('20010103')
In [6]: df
Out[6]:
A B timestamp
2001-01-02 0.939393 -2.524448 2001-01-03 00:00:00
2001-01-03 -1.059561 0.104651 2001-01-03 00:00:00
2001-01-04 -0.842478 1.033888 2001-01-03 00:00:00
2001-01-05 -1.009903 -0.334782 2001-01-03 00:00:00
2001-01-06 -0.452043 -0.382408 2001-01-03 00:00:00
2001-01-07 -0.058516 1.162884 2001-01-03 00:00:00
2001-01-08 0.660251 0.688290 2001-01-03 00:00:00
2001-01-09 0.069637 0.366915 2001-01-03 00:00:00
In [7]: df.get_dtype_counts()
Out[7]:
float64 2
object 1
In [9]: df.ix[3,'timestamp'] = np.nan
In [10]: df
Out[10]:
A B timestamp
2001-01-02 0.939393 -2.524448 2001-01-03 00:00:00
2001-01-03 -1.059561 0.104651 2001-01-03 00:00:00
2001-01-04 -0.842478 1.033888 2001-01-03 00:00:00
2001-01-05 -1.009903 -0.334782 NaN
2001-01-06 -0.452043 -0.382408 2001-01-03 00:00:00
2001-01-07 -0.058516 1.162884 2001-01-03 00:00:00
2001-01-08 0.660251 0.688290 2001-01-03 00:00:00
2001-01-09 0.069637 0.366915 2001-01-03 00:00:00
In [11]: df.convert_objects().get_dtype_counts()
Out[11]:
float64 2
object 1
```
This PR enables this syntax:
```
In [3]: df = pd.DataFrame(np.random.randn(8,2),pd.date_range('20010102',periods=8),columns=['A','B'])
In [4]: df['timestamp'] = pd.Timestamp('20010103')
# datetime64[ns] out of the box
In [5]: df.get_dtype_counts()
Out[5]:
datetime64[ns] 1
float64 2
# use the traditional nan, which is mapped to iNaT internally
In [6]: df.ix[2:4,'timestamp'] = np.nan
In [7]: df
Out[7]:
A B timestamp
2001-01-02 0.234354 -3.167626 2001-01-03 00:00:00
2001-01-03 0.141300 -0.548670 2001-01-03 00:00:00
2001-01-04 0.738884 -0.104497 NaT
2001-01-05 0.873046 1.193593 NaT
2001-01-06 1.174226 1.697421 2001-01-03 00:00:00
2001-01-07 -0.069534 -0.145611 2001-01-03 00:00:00
2001-01-08 -0.951426 -1.158884 2001-01-03 00:00:00
2001-01-09 -0.322041 -0.800181 2001-01-03 00:00:00
```
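The coercion this PR enables can be sketched in a couple of lines (a minimal example, assuming a pandas build with this datetime64[ns] coercion; the `'M8[ns]'` dtype spelling and `isnull` check are taken from the tests above):

```python
from datetime import datetime

import numpy as np
import pandas as pd

# mixed datetimes and NaN: requesting an M8[ns] dtype maps the NaN to NaT
s = pd.Series([datetime(2001, 1, 2), np.nan], dtype='M8[ns]')
print(s.dtype)          # datetime64[ns]
print(pd.isnull(s[1]))  # True
```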
| https://api.github.com/repos/pandas-dev/pandas/pulls/2595 | 2012-12-25T04:48:59Z | 2012-12-28T16:40:22Z | null | 2014-06-19T17:13:25Z |
ENH: IO World Bank WDI | diff --git a/.travis.yml b/.travis.yml
index ab36df340e1c5..e7ace279f1063 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -3,7 +3,7 @@ language: python
python:
- 2.6
- 2.7
- - 3.1 # travis will soon EOL this
+# - 3.1 # travis EOL
- 3.2
- 3.3
@@ -45,8 +45,10 @@ before_install:
install:
- echo "Waldo2"
- ci/install.sh
- - ci/print_versions.py # not including stats
script:
- echo "Waldo3"
- ci/script.sh
+
+after_script:
+ - ci/print_versions.py
diff --git a/ci/before_install.sh b/ci/before_install.sh
index 7b7919a41ba4e..9561c713d0f2e 100755
--- a/ci/before_install.sh
+++ b/ci/before_install.sh
@@ -9,20 +9,20 @@ fi
sudo apt-get update $APT_ARGS # run apt-get update for all versions
-# hack for broken 3.3 env
-if [ x"$VIRTUAL_ENV" == x"" ]; then
- VIRTUAL_ENV=~/virtualenv/python$TRAVIS_PYTHON_VERSION_with_system_site_packages;
-fi
+# # hack for broken 3.3 env
+# if [ x"$VIRTUAL_ENV" == x"" ]; then
+# VIRTUAL_ENV=~/virtualenv/python$TRAVIS_PYTHON_VERSION_with_system_site_packages;
+# fi
-# we only recreate the virtualenv for 3.x
-# since the "Detach bug" only affects python3
-# and travis has numpy preinstalled on 2.x which is quicker
-_VENV=$VIRTUAL_ENV # save it
-if [ ${TRAVIS_PYTHON_VERSION:0:1} == "3" ] ; then
- deactivate # pop out of any venv
- sudo pip install virtualenv==1.8.4 --upgrade
- sudo apt-get install $APT_ARGS python3.3 python3.3-dev
- sudo rm -Rf $_VENV
- virtualenv -p python$TRAVIS_PYTHON_VERSION $_VENV --system-site-packages;
- source $_VENV/bin/activate
-fi
+# # we only recreate the virtualenv for 3.x
+# # since the "Detach bug" only affects python3
+# # and travis has numpy preinstalled on 2.x which is quicker
+# _VENV=$VIRTUAL_ENV # save it
+# if [ ${TRAVIS_PYTHON_VERSION:0:1} == "3" ] ; then
+# deactivate # pop out of any venv
+# sudo pip install virtualenv==1.8.4 --upgrade
+# sudo apt-get install $APT_ARGS python3.3 python3.3-dev
+# sudo rm -Rf $_VENV
+# virtualenv -p python$TRAVIS_PYTHON_VERSION $_VENV --system-site-packages;
+# source $_VENV/bin/activate
+# fi
diff --git a/ci/install.sh b/ci/install.sh
index ce1b5f667b03b..6874da69880b2 100755
--- a/ci/install.sh
+++ b/ci/install.sh
@@ -21,7 +21,7 @@ fi
if [ ${TRAVIS_PYTHON_VERSION:0:1} == "2" ] || \
[ ${TRAVIS_PYTHON_VERSION} == "3.1" ] || \
[ ${TRAVIS_PYTHON_VERSION} == "3.2" ]; then
- pip $PIP_ARGS install numpy; #https://github.com/y-p/numpy/archive/1.6.2_with_travis_fix.tar.gz;
+ pip $PIP_ARGS install numpy;
else
pip $PIP_ARGS install https://github.com/numpy/numpy/archive/v1.7.0b2.tar.gz;
fi
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 30bb4eb9ac096..b3fef8943baf3 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3240,7 +3240,7 @@ def fillna(self, value=None, method=None, axis=0, inplace=False,
Parameters
----------
- method : {'backfill', 'bfill', 'pad', 'ffill', None}, default 'pad'
+ method : {'backfill', 'bfill', 'pad', 'ffill', None}, default None
Method to use for filling holes in reindexed Series
pad / ffill: propagate last valid observation forward to next valid
backfill / bfill: use NEXT valid observation to fill gap
diff --git a/pandas/io/tests/test_wb.py b/pandas/io/tests/test_wb.py
new file mode 100644
index 0000000000000..5bcdd22c989fe
--- /dev/null
+++ b/pandas/io/tests/test_wb.py
@@ -0,0 +1,30 @@
+import pandas
+from pandas.util.testing import network
+from pandas.util.testing import assert_frame_equal
+from numpy.testing.decorators import slow
+from pandas.io.wb import (search, download)
+
+@slow
+@network
+def test_wdi_search():
+ expected = {u'id': {2634: u'GDPPCKD',
+ 4649: u'NY.GDP.PCAP.KD',
+ 4651: u'NY.GDP.PCAP.KN',
+ 4653: u'NY.GDP.PCAP.PP.KD'},
+ u'name': {2634: u'GDP per Capita, constant US$, millions',
+ 4649: u'GDP per capita (constant 2000 US$)',
+ 4651: u'GDP per capita (constant LCU)',
+ 4653: u'GDP per capita, PPP (constant 2005 international $)'}}
+ result = search('gdp.*capita.*constant').ix[:,:2]
+ assert_frame_equal(result, pandas.DataFrame(expected))
+
+@slow
+@network
+def test_wdi_download():
+ expected = {'GDPPCKN': {(u'United States', u'2003'): u'40800.0735367688', (u'Canada', u'2004'): u'37857.1261134552', (u'United States', u'2005'): u'42714.8594790102', (u'Canada', u'2003'): u'37081.4575704003', (u'United States', u'2004'): u'41826.1728310667', (u'Mexico', u'2003'): u'72720.0691255285', (u'Mexico', u'2004'): u'74751.6003347038', (u'Mexico', u'2005'): u'76200.2154469437', (u'Canada', u'2005'): u'38617.4563629611'}, 'GDPPCKD': {(u'United States', u'2003'): u'40800.0735367688', (u'Canada', u'2004'): u'34397.055116118', (u'United States', u'2005'): u'42714.8594790102', (u'Canada', u'2003'): u'33692.2812368928', (u'United States', u'2004'): u'41826.1728310667', (u'Mexico', u'2003'): u'7608.43848670658', (u'Mexico', u'2004'): u'7820.99026814334', (u'Mexico', u'2005'): u'7972.55364129367', (u'Canada', u'2005'): u'35087.8925933298'}}
+ expected = pandas.DataFrame(expected)
+ result = download(country=['CA','MX','US', 'junk'], indicator=['GDPPCKD',
+ 'GDPPCKN', 'junk'], start=2003, end=2005)
+ expected.index = result.index
+ assert_frame_equal(result, pandas.DataFrame(expected))
+
diff --git a/pandas/io/wb.py b/pandas/io/wb.py
new file mode 100644
index 0000000000000..1270d9b0dd28f
--- /dev/null
+++ b/pandas/io/wb.py
@@ -0,0 +1,183 @@
+import urllib2
+import warnings
+import json
+import pandas
+import numpy as np
+
+def download(country=['MX','CA','US'], indicator=['GDPPCKD','GDPPCKN'],
+ start=2003, end=2005):
+ """
+ Download data series from the World Bank's World Development Indicators
+
+ Parameters
+ ----------
+
+ indicator: string or list of strings
+ taken from the ``id`` field in ``WDIsearch()``
+ country: string or list of strings.
+ ``all`` downloads data for all countries
+ ISO-2 character codes select individual countries (e.g.``US``,``CA``)
+ start: int
+ First year of the data series
+ end: int
+ Last year of the data series (inclusive)
+
+ Returns
+ -------
+
+ ``pandas`` DataFrame with columns: country, iso2c, year, indicator value.
+ """
+
+ # Are ISO-2 country codes valid?
+ valid_countries = ["AG", "AL", "AM", "AO", "AR", "AT", "AU", "AZ", "BB",
+ "BD", "BE", "BF", "BG", "BH", "BI", "BJ", "BO", "BR", "BS", "BW",
+ "BY", "BZ", "CA", "CD", "CF", "CG", "CH", "CI", "CL", "CM", "CN",
+ "CO", "CR", "CV", "CY", "CZ", "DE", "DK", "DM", "DO", "DZ", "EC",
+ "EE", "EG", "ER", "ES", "ET", "FI", "FJ", "FR", "GA", "GB", "GE",
+ "GH", "GM", "GN", "GQ", "GR", "GT", "GW", "GY", "HK", "HN", "HR",
+ "HT", "HU", "ID", "IE", "IL", "IN", "IR", "IS", "IT", "JM", "JO",
+ "JP", "KE", "KG", "KH", "KM", "KR", "KW", "KZ", "LA", "LB", "LC",
+ "LK", "LS", "LT", "LU", "LV", "MA", "MD", "MG", "MK", "ML", "MN",
+ "MR", "MU", "MW", "MX", "MY", "MZ", "NA", "NE", "NG", "NI", "NL",
+ "NO", "NP", "NZ", "OM", "PA", "PE", "PG", "PH", "PK", "PL", "PT",
+ "PY", "RO", "RU", "RW", "SA", "SB", "SC", "SD", "SE", "SG", "SI",
+ "SK", "SL", "SN", "SR", "SV", "SY", "SZ", "TD", "TG", "TH", "TN",
+ "TR", "TT", "TW", "TZ", "UA", "UG", "US", "UY", "UZ", "VC", "VE",
+ "VN", "VU", "YE", "ZA", "ZM", "ZW", "all"]
+ if type(country) == str:
+ country = [country]
+ bad_countries = np.setdiff1d(country, valid_countries)
+ country = np.intersect1d(country, valid_countries)
+ country = ';'.join(country)
+ # Work with a list of indicators
+ if type(indicator) == str:
+ indicator = [indicator]
+ # Download
+ data = []
+ bad_indicators = []
+ for ind in indicator:
+ try:
+ tmp = _get_data(ind, country, start, end)
+ tmp.columns = ['country', 'iso2c', 'year', ind]
+ data.append(tmp)
+ except:
+ bad_indicators.append(ind)
+ # Warn
+ if len(bad_indicators) > 0:
+ print 'Failed to obtain indicator(s): ' + '; '.join(bad_indicators)
+ print 'The data may still be available for download at http://data.worldbank.org'
+ if len(bad_countries) > 0:
+ print 'Invalid ISO-2 codes: ' + ' '.join(bad_countries)
+ # Merge WDI series
+ if len(data) > 0:
+ out = reduce(lambda x,y: x.merge(y, how='outer'), data)
+ # Clean
+ out = out.drop('iso2c', axis=1)
+ out = out.set_index(['country', 'year'])
+ return out
+
+
+def _get_data(indicator = "NY.GNS.ICTR.GN.ZS", country = 'US',
+ start = 2002, end = 2005):
+ # Build URL for api call
+ url = "http://api.worldbank.org/countries/" + country + "/indicators/" + \
+ indicator + "?date=" + str(start) + ":" + str(end) + "&per_page=25000" + \
+ "&format=json"
+ # Download
+ response = urllib2.urlopen(url)
+ data = response.read()
+ # Parse JSON file
+ data = json.loads(data)[1]
+ country = map(lambda x: x['country']['value'], data)
+ iso2c = map(lambda x: x['country']['id'], data)
+ year = map(lambda x: x['date'], data)
+ value = map(lambda x: x['value'], data)
+ # Prepare output
+ out = pandas.DataFrame([country, iso2c, year, value]).T
+ return out
+
+
+def get_countries():
+ '''Query information about countries
+ '''
+ url = 'http://api.worldbank.org/countries/all?format=json'
+ response = urllib2.urlopen(url)
+ data = response.read()
+ data = json.loads(data)[1]
+ data = pandas.DataFrame(data)
+ data.adminregion = map(lambda x: x['value'], data.adminregion)
+ data.incomeLevel = map(lambda x: x['value'], data.incomeLevel)
+ data.lendingType = map(lambda x: x['value'], data.lendingType)
+ data.region = map(lambda x: x['value'], data.region)
+ data = data.rename(columns={'id':'iso3c', 'iso2Code':'iso2c'})
+ return data
+
+
+def get_indicators():
+ '''Download information about all World Bank data series
+ '''
+ url = 'http://api.worldbank.org/indicators?per_page=50000&format=json'
+ response = urllib2.urlopen(url)
+ data = response.read()
+ data = json.loads(data)[1]
+ data = pandas.DataFrame(data)
+ # Clean fields
+ data.source = map(lambda x: x['value'], data.source)
+ fun = lambda x: x.encode('ascii', 'ignore')
+ data.sourceOrganization = data.sourceOrganization.apply(fun)
+ # Clean topic field
+ def get_value(x):
+ try:
+ return x['value']
+ except:
+ return ''
+ fun = lambda x: map(lambda y: get_value(y), x)
+ data.topics = data.topics.apply(fun)
+ data.topics = data.topics.apply(lambda x: ' ; '.join(x))
+ # Clean output
+ data = data.sort(columns='id')
+ data.index = pandas.Index(range(data.shape[0]))
+ return data
+
+
+_cached_series = None
+def search(string='gdp.*capi', field='name', case=False):
+ """
+ Search available data series from the world bank
+
+ Parameters
+ ----------
+
+ string: string
+ regular expression
+ field: string
+ id, name, source, sourceNote, sourceOrganization, topics
+ See notes below
+ case: bool
+ case sensitive search?
+
+ Notes
+ -----
+
+ The first time this function is run it will download and cache the full
+ list of available series. Depending on the speed of your network
+ connection, this can take time. Subsequent searches will use the cached
+ copy, so they should be much faster.
+
+ id : Data series indicator (for use with the ``indicator`` argument of
+ ``WDI()``) e.g. NY.GNS.ICTR.GN.ZS"
+ name: Short description of the data series
+ source: Data collection project
+ sourceOrganization: Data collection organization
+ note:
+ sourceNote:
+ topics:
+ """
+ # Create cached list of series if it does not exist
+ global _cached_series
+ if type(_cached_series) is not pandas.core.frame.DataFrame:
+ _cached_series = get_indicators()
+ data = _cached_series[field]
+ idx = data.str.contains(string, case=case)
+ out = _cached_series.ix[idx].dropna()
+ return out
| The World Bank's World Development Indicators (WDI) is a collection of nearly 8000 data series in country-year format. These data cover a wide range of topics (e.g. demography, economics), and they are freely available through the World Bank API.
http://data.worldbank.org/
This PR introduces a very simple io module to search and download data from the WDI. Usage example: http://nbviewer.ipython.org/4365089/
I don't know the best way to test features such as this one, so I may need a bit of help there.
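The download step is a single REST call; a minimal sketch of the URL that `_get_data()` above assembles (the `build_wdi_url` helper is hypothetical, the endpoint and parameters are copied from the diff):

```python
def build_wdi_url(indicator="NY.GNS.ICTR.GN.ZS", country="US",
                  start=2002, end=2005):
    # country may already be a ';'-joined list of ISO-2 codes, exactly
    # as WDI() prepares it before delegating to _get_data().
    return ("http://api.worldbank.org/countries/%s/indicators/%s"
            "?date=%d:%d&per_page=25000&format=json"
            % (country, indicator, start, end))

print(build_wdi_url(country="US;CA"))
```

The JSON response is then unpacked field by field into a DataFrame, as `_get_data()` shows.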
| https://api.github.com/repos/pandas-dev/pandas/pulls/2592 | 2012-12-23T18:58:15Z | 2012-12-28T16:01:42Z | null | 2014-06-14T16:53:15Z |
Update force_unicode documentation | diff --git a/pandas/core/format.py b/pandas/core/format.py
index e185049b821dc..862dea1ba5ed4 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -55,7 +55,8 @@
index_names : bool, optional
Prints the names of the indexes, default True
force_unicode : bool, default False
- Always return a unicode result
+ Always return a unicode result. Deprecated in v0.10.0 as string
+ formatting is now rendered to unicode by default.
Returns
-------
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index b3fef8943baf3..e178b2c192263 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1517,7 +1517,7 @@ def to_string(self, buf=None, columns=None, col_space=None, colSpace=None,
def to_html(self, buf=None, columns=None, col_space=None, colSpace=None,
header=True, index=True, na_rep='NaN', formatters=None,
float_format=None, sparsify=None, index_names=True,
- justify=None, force_unicode=False, bold_rows=True,
+ justify=None, force_unicode=None, bold_rows=True,
classes=None):
"""
to_html-specific options
@@ -1527,8 +1527,12 @@ def to_html(self, buf=None, columns=None, col_space=None, colSpace=None,
Render a DataFrame to an html table.
"""
+ import warnings
+ if force_unicode is not None: # pragma: no cover
+ warnings.warn("force_unicode is deprecated, it will have no "
+ "effect", FutureWarning)
+
if colSpace is not None: # pragma: no cover
- import warnings
warnings.warn("colSpace is deprecated, use col_space",
FutureWarning)
col_space = colSpace
@@ -1551,7 +1555,7 @@ def to_html(self, buf=None, columns=None, col_space=None, colSpace=None,
def to_latex(self, buf=None, columns=None, col_space=None, colSpace=None,
header=True, index=True, na_rep='NaN', formatters=None,
float_format=None, sparsify=None, index_names=True,
- bold_rows=True):
+ bold_rows=True, force_unicode=None):
"""
to_latex-specific options
bold_rows : boolean, default True
@@ -1560,6 +1564,17 @@ def to_latex(self, buf=None, columns=None, col_space=None, colSpace=None,
Render a DataFrame to a tabular environment table.
You can splice this into a LaTeX document.
"""
+
+ import warnings
+ if force_unicode is not None: # pragma: no cover
+ warnings.warn("force_unicode is deprecated, it will have no "
+ "effect", FutureWarning)
+
+ if colSpace is not None: # pragma: no cover
+ warnings.warn("colSpace is deprecated, use col_space",
+ FutureWarning)
+ col_space = colSpace
+
formatter = fmt.DataFrameFormatter(self, buf=buf, columns=columns,
col_space=col_space, na_rep=na_rep,
header=header, index=index,
| Should resolve #2053.
For consistency, I've re-added the `force_unicode` parameter to `to_latex()` and have it now raise a deprecation warning. I've also updated the common docstring for the `to_*` methods to have a brief explanation as to why the parameter is deprecated.
There were a couple other missing deprecation warnings (`force_unicode` in `to_html()` and `colSpace` in `to_latex()`) that should now have consistent behavior with other methods.
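The sentinel-default pattern used here is worth spelling out: the keyword stays in the signature so old call sites keep working, it defaults to `None`, and only an explicitly passed value triggers the warning. A standalone sketch (the `render` function is illustrative, not pandas code):

```python
import warnings

def render(force_unicode=None):
    # force_unicode no longer does anything; warn only when a caller
    # explicitly passes it, so default usage stays silent.
    if force_unicode is not None:
        warnings.warn("force_unicode is deprecated, it will have no effect",
                      FutureWarning)
    return "rendered"

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    render()                    # silent
    render(force_unicode=True)  # warns
print([w.category.__name__ for w in caught])  # → ['FutureWarning']
```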
| https://api.github.com/repos/pandas-dev/pandas/pulls/2590 | 2012-12-23T13:10:29Z | 2012-12-28T13:49:54Z | null | 2012-12-28T13:49:54Z |
Add some default parameters to TextParser in _parse_options_data() | diff --git a/pandas/io/data.py b/pandas/io/data.py
index e4c1ae9a9817a..ef73494814f2e 100644
--- a/pandas/io/data.py
+++ b/pandas/io/data.py
@@ -223,7 +223,8 @@ def _parse_options_data(table):
rows = table.findAll('tr')
header = _unpack(rows[0], kind='th')
data = [_unpack(r) for r in rows[1:]]
- return TextParser(data, names=header).get_chunk()
+ # Use ',' as a thousands separator as we're pulling from the US site.
+ return TextParser(data, names=header, na_values=['N/A'], thousands=',').get_chunk()
class Options(object):
| The Options class can return some awkward dtypes:
``` python
In [1]: from pandas.io.data import *
In [2]: aapl = Options('aapl')
In [3]: a = aapl.get_put_data(1,2013)
In [4]: a.dtypes
Out[4]:
Strike object
Symbol object
Last float64
Chg float64
Bid object
Ask float64
Vol object
Open Int object
```
There are two causes for Strike/Bid/Vol/Open Int having `object` dtypes: not parsing the NaN value and not passing a thousands separator.
``` python
In [5]: a.Bid[:5]
Out[5]:
0 0.01
1 N/A
2 N/A
3 N/A
4 N/A
Name: Bid
In [6]: a['Open Int'][:5]
Out[6]:
0 8,744
1 2,381
2 1,117
3 4,586
4 1,535
Name: Open Int
```
I've checked with a couple of non-US proxies and it seems that finance.yahoo.com (the hardcoded url) does not redirect to the local version (yahoo.com does redirect). Since we're getting the US site, we can pass `','` as the thousands separator and `'N/A'` as the NaN value.
These changes yield the following dtypes:
``` python
In [4]: a.dtypes
Out[4]:
Strike float64
Symbol object
Last float64
Chg float64
Bid float64
Ask float64
Vol int64
Open Int int64
In [5]: a[:5]
Out[5]:
Strike Symbol Last Chg Bid Ask Vol Open Int
0 135 AAPL130119P00135000 0.02 0.01 0.01 0.03 5 8744
1 140 AAPL130119P00140000 0.02 0.00 NaN 0.04 11 2381
2 145 AAPL130119P00145000 0.01 0.03 NaN 0.11 3 1117
3 150 AAPL130119P00150000 0.01 0.00 NaN 0.11 20 4586
4 155 AAPL130119P00155000 0.01 0.00 NaN 0.11 4 1535
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/2589 | 2012-12-23T09:33:22Z | 2012-12-28T15:09:42Z | null | 2012-12-28T15:09:42Z |
CLN: Remove a useless parameter in Series.iteritem method. | diff --git a/pandas/core/series.py b/pandas/core/series.py
index 6cf511d32bfb3..c7f4d6e6f43b8 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1054,7 +1054,7 @@ def __iter__(self):
else:
return iter(self.values)
- def iteritems(self, index=True):
+ def iteritems(self):
"""
Lazily iterate over (index, value) tuples
"""
| Hi,
I was reading some docstring of some `Series` & `DataFrame` methods from IPython. I wondered what the parameter `index` of the method `Series.iteritems` -- and thus the `iterkv` method -- did. I think it's not used. Tests passed.
Cheers,
Damien G.
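For reference, `iteritems` lazily yields `(index, value)` pairs and never consults any per-call option, which is why the parameter is dead weight; conceptually it is just a zip of the index and the values:

```python
index = ["a", "b", "c"]
values = [10, 20, 30]

# Series.iteritems() is conceptually izip(iter(self.index), iter(self));
# nothing in that pairing depends on an extra flag.
pairs = zip(index, values)

print(next(pairs))  # → ('a', 10)
print(list(pairs))  # → [('b', 20), ('c', 30)]
```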
| https://api.github.com/repos/pandas-dev/pandas/pulls/2584 | 2012-12-22T18:04:37Z | 2012-12-28T15:12:07Z | null | 2012-12-28T15:12:07Z |
Update doc/source/groupby.rst | diff --git a/doc/source/groupby.rst b/doc/source/groupby.rst
index 4a37222b6dd4c..141d3f9f6b4da 100644
--- a/doc/source/groupby.rst
+++ b/doc/source/groupby.rst
@@ -16,7 +16,7 @@
Group By: split-apply-combine
*****************************
-By "group by" we are refer to a process involving one or more of the following
+By "group by" we are referring to a process involving one or more of the following
steps
- **Splitting** the data into groups based on some criteria
| Fixed a small typo in the docs
| https://api.github.com/repos/pandas-dev/pandas/pulls/2581 | 2012-12-22T05:05:07Z | 2012-12-25T16:04:11Z | 2012-12-25T16:04:11Z | 2012-12-25T16:04:15Z |
Update pandas/core/format.py | diff --git a/pandas/core/format.py b/pandas/core/format.py
index e185049b821dc..10b2b0fc5f8cf 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -1442,7 +1442,7 @@ def set_eng_float_format(precision=None, accuracy=3, use_eng_prefix=False):
def _put_lines(buf, lines):
if any(isinstance(x, unicode) for x in lines):
- lines = [unicode(x) for x in lines]
+ lines = [x.decode('utf-8', errors='ignore').encode('utf-8') for x in lines]
buf.write('\n'.join(lines))
def _binify(cols, width):
| Fixes #2576.
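The crash in #2576 happens when `'\n'.join` mixes byte strings and unicode and the implicit ASCII decode fails; decoding every byte string up front sidesteps it. A Python 3 rendering of the same idea (the original code path deals with Python 2 `str`/`unicode`):

```python
lines = [b"plain ascii", "caf\u00e9".encode("utf-8"), b"bad \xff byte"]

# Decode defensively: errors='ignore' drops undecodable bytes instead
# of raising, so the join below can never fail on odd input.
decoded = [x.decode("utf-8", errors="ignore") for x in lines]
print("\n".join(decoded))
```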
| https://api.github.com/repos/pandas-dev/pandas/pulls/2577 | 2012-12-21T16:16:49Z | 2012-12-28T15:49:57Z | null | 2014-08-11T01:06:12Z |
remove unused kh_del, pack kh_hash flag bits | diff --git a/pandas/hashtable.pyx b/pandas/hashtable.pyx
index f20462405ca21..164fc8c94924e 100644
--- a/pandas/hashtable.pyx
+++ b/pandas/hashtable.pyx
@@ -225,8 +225,6 @@ cdef class StringHashTable(HashTable):
if k == self.table.n_buckets:
k = kh_put_str(self.table, buf, &ret)
# print 'putting %s, %s' % (val, count)
- if not ret:
- kh_del_str(self.table, k)
count += 1
uniques.append(val)
@@ -254,8 +252,6 @@ cdef class StringHashTable(HashTable):
else:
k = kh_put_str(self.table, buf, &ret)
# print 'putting %s, %s' % (val, count)
- if not ret:
- kh_del_str(self.table, k)
self.table.vals[k] = count
reverse[count] = val
@@ -356,8 +352,6 @@ cdef class Int32HashTable(HashTable):
labels[i] = idx
else:
k = kh_put_int32(self.table, val, &ret)
- if not ret:
- kh_del_int32(self.table, k)
self.table.vals[k] = count
reverse[count] = val
labels[i] = count
diff --git a/pandas/src/klib/khash.h b/pandas/src/klib/khash.h
index 89a086632c1ce..4350ff06f37f0 100644
--- a/pandas/src/klib/khash.h
+++ b/pandas/src/klib/khash.h
@@ -145,13 +145,14 @@ typedef double khfloat64_t;
typedef khint32_t khint_t;
typedef khint_t khiter_t;
-#define __ac_isempty(flag, i) ((flag[i>>4]>>((i&0xfU)<<1))&2)
-#define __ac_isdel(flag, i) ((flag[i>>4]>>((i&0xfU)<<1))&1)
-#define __ac_iseither(flag, i) ((flag[i>>4]>>((i&0xfU)<<1))&3)
-#define __ac_set_isdel_false(flag, i) (flag[i>>4]&=~(1ul<<((i&0xfU)<<1)))
-#define __ac_set_isempty_false(flag, i) (flag[i>>4]&=~(2ul<<((i&0xfU)<<1)))
-#define __ac_set_isboth_false(flag, i) (flag[i>>4]&=~(3ul<<((i&0xfU)<<1)))
-#define __ac_set_isdel_true(flag, i) (flag[i>>4]|=1ul<<((i&0xfU)<<1))
+#define __ac_isempty(flag, i) ((flag[i>>5]>>(i&0x1fU))&1)
+#define __ac_isdel(flag, i) (0)
+#define __ac_iseither(flag, i) __ac_isempty(flag, i)
+#define __ac_set_isdel_false(flag, i) (0)
+#define __ac_set_isempty_false(flag, i) (flag[i>>5]&=~(1ul<<(i&0x1fU)))
+#define __ac_set_isempty_true(flag, i) (flag[i>>5]|=(1ul<<(i&0x1fU)))
+#define __ac_set_isboth_false(flag, i) __ac_set_isempty_false(flag, i)
+#define __ac_set_isdel_true(flag, i) (0)
#ifdef KHASH_LINEAR
#define __ac_inc(k, m) 1
@@ -159,7 +160,7 @@ typedef khint_t khiter_t;
#define __ac_inc(k, m) (((k)>>3 ^ (k)<<3) | 1) & (m)
#endif
-#define __ac_fsize(m) ((m) < 16? 1 : (m)>>4)
+#define __ac_fsize(m) ((m) < 32? 1 : (m)>>5)
#ifndef kroundup32
#define kroundup32(x) (--(x), (x)|=(x)>>1, (x)|=(x)>>2, (x)|=(x)>>4, (x)|=(x)>>8, (x)|=(x)>>16, ++(x))
@@ -231,7 +232,7 @@ static const double __ac_HASH_UPPER = 0.77;
if (h->size >= (khint_t)(new_n_buckets * __ac_HASH_UPPER + 0.5)) j = 0; /* requested size is too small */ \
else { /* hash table size to be changed (shrink or expand); rehash */ \
new_flags = (khint32_t*)malloc(__ac_fsize(new_n_buckets) * sizeof(khint32_t)); \
- memset(new_flags, 0xaa, __ac_fsize(new_n_buckets) * sizeof(khint32_t)); \
+ memset(new_flags, 0xff, __ac_fsize(new_n_buckets) * sizeof(khint32_t)); \
if (h->n_buckets < new_n_buckets) { /* expand */ \
h->keys = (khkey_t*)realloc(h->keys, new_n_buckets * sizeof(khkey_t)); \
if (kh_is_map) h->vals = (khval_t*)realloc(h->vals, new_n_buckets * sizeof(khval_t)); \
@@ -246,7 +247,7 @@ static const double __ac_HASH_UPPER = 0.77;
khint_t new_mask; \
new_mask = new_n_buckets - 1; \
if (kh_is_map) val = h->vals[j]; \
- __ac_set_isdel_true(h->flags, j); \
+ __ac_set_isempty_true(h->flags, j); \
while (1) { /* kick-out process; sort of like in Cuckoo hashing */ \
khint_t inc, k, i; \
k = __hash_func(key); \
@@ -257,7 +258,7 @@ static const double __ac_HASH_UPPER = 0.77;
if (i < h->n_buckets && __ac_iseither(h->flags, i) == 0) { /* kick out the existing element */ \
{ khkey_t tmp = h->keys[i]; h->keys[i] = key; key = tmp; } \
if (kh_is_map) { khval_t tmp = h->vals[i]; h->vals[i] = val; val = tmp; } \
- __ac_set_isdel_true(h->flags, i); /* mark it as deleted in the old hash table */ \
+ __ac_set_isempty_true(h->flags, i); /* mark it as deleted in the old hash table */ \
} else { /* write the element and jump out of the loop */ \
h->keys[i] = key; \
if (kh_is_map) h->vals[i] = val; \
| The khash library includes the ability to delete elements, but this is not used anywhere in pandas. In khash, there are two flag bits for every entry: 'isempty' and 'isdel'. This change removes support for kh_del, allowing the library to use half as many flag bits.
All tests pass. I ran test_perf ~90 times, and got the results below (baseline = parent commit, times and ratios ± stdev). There do seem to be a few (at the bottom) whose times increase slightly, which surprises me.
```
name head baseline ratio
series_value_counts_int64 1.8630 ± 0.0067 2.0982 ± 0.0131 0.8880 ± 0.0063
groupby_frame_singlekey_integer 1.6717 ± 0.0510 1.8728 ± 0.0508 0.8932 ± 0.0355
index_int64_intersection 24.6203 ± 0.3258 27.0208 ± 0.2654 0.9112 ± 0.0149
groupby_last 2.5098 ± 0.0265 2.7299 ± 0.0252 0.9195 ± 0.0129
stat_ops_level_series_sum 1.5520 ± 0.0257 1.6864 ± 0.0237 0.9205 ± 0.0200
groupby_first 2.5838 ± 0.0281 2.8024 ± 0.0269 0.9221 ± 0.0134
sparse_series_to_frame 120.5008 ± 0.7108 129.8587 ± 1.1512 0.9280 ± 0.0098
datetimeindex_add_offset 0.1599 ± 0.0009 0.1715 ± 0.0013 0.9325 ± 0.0088
stat_ops_level_series_sum_multiple 5.2357 ± 0.1633 5.5971 ± 0.1624 0.9362 ± 0.0390
stat_ops_level_frame_sum_multiple 6.1508 ± 0.0653 6.5037 ± 0.0922 0.9459 ± 0.0166
read_csv_comment2 20.9364 ± 0.7056 22.0784 ± 0.6586 0.9491 ± 0.0421
groupby_multi_size 24.8205 ± 0.2378 26.1333 ± 0.2586 0.9499 ± 0.0131
stat_ops_level_frame_sum 2.3653 ± 0.1711 2.5029 ± 0.1835 0.9500 ± 0.0966
timeseries_add_irregular 17.1444 ± 0.1979 17.9006 ± 0.2983 0.9580 ± 0.0188
groupby_frame_median 5.0956 ± 0.0642 5.3151 ± 0.0801 0.9589 ± 0.0184
match_strings 0.3116 ± 0.0042 0.3250 ± 0.0052 0.9590 ± 0.0200
timeseries_infer_freq 7.0249 ± 0.1438 7.3208 ± 0.0869 0.9597 ± 0.0232
dataframe_reindex 0.2581 ± 0.0017 0.2680 ± 0.0019 0.9633 ± 0.0092
reshape_pivot_time_series 158.4428 ± 1.6615 164.4806 ± 2.1991 0.9635 ± 0.0164
index_int64_union 67.5624 ± 0.5516 69.9954 ± 0.5946 0.9653 ± 0.0113
reindex_multiindex 0.8870 ± 0.0103 0.9166 ± 0.0103 0.9678 ± 0.0155
groupby_simple_compress_timing 30.2154 ± 0.2435 31.1854 ± 0.2564 0.9690 ± 0.0111
groupby_indices 6.2456 ± 0.0674 6.4342 ± 0.0803 0.9708 ± 0.0158
panel_from_dict_all_different_indexes 64.0143 ± 0.3919 65.8792 ± 0.4151 0.9717 ± 0.0085
join_dataframe_integer_key 1.3216 ± 0.0212 1.3559 ± 0.0206 0.9750 ± 0.0215
panel_from_dict_two_different_indexes 43.3036 ± 0.3092 44.2583 ± 0.3232 0.9785 ± 0.0100
join_dataframe_integer_2key 3.8902 ± 0.0218 3.9749 ± 0.0241 0.9787 ± 0.0081
reshape_unstack_simple 2.3005 ± 0.0093 2.3416 ± 0.0115 0.9825 ± 0.0062
groupby_multi_series_op 12.2459 ± 0.0667 12.4435 ± 0.0809 0.9842 ± 0.0083
groupby_multi_cython 13.6919 ± 0.0685 13.8917 ± 0.0937 0.9857 ± 0.0082
concat_series_axis1 50.1771 ± 0.7017 50.8870 ± 0.8207 0.9863 ± 0.0210
unstack_sparse_keyspace 0.9630 ± 0.0071 0.9743 ± 0.0086 0.9884 ± 0.0113
frame_fancy_lookup_all 21.7081 ± 0.2252 21.9861 ± 0.8485 0.9885 ± 0.0303
groupby_frame_apply_overhead 15.0398 ± 0.2534 15.2089 ± 0.3118 0.9893 ± 0.0260
frame_insert_500_columns 74.6911 ± 0.5674 75.3304 ± 0.7608 0.9916 ± 0.0124
sort_level_one 3.2917 ± 0.0885 3.3215 ± 0.1132 0.9921 ± 0.0415
timeseries_timestamp_tzinfo_cons 0.0133 ± 0.0002 0.0134 ± 0.0003 0.9922 ± 0.0253
groupby_multi_python 51.7335 ± 0.9452 52.1447 ± 1.1834 0.9926 ± 0.0286
stats_rank_average 31.3220 ± 1.3618 31.5993 ± 1.2585 0.9928 ± 0.0591
read_csv_standard 9.4535 ± 0.1348 9.5228 ± 0.1152 0.9929 ± 0.0185
frame_reindex_both_axes 0.2233 ± 0.0014 0.2248 ± 0.0018 0.9933 ± 0.0098
timeseries_large_lookup_value 0.0179 ± 0.0003 0.0180 ± 0.0004 0.9941 ± 0.0239
frame_reindex_both_axes_ix 0.2794 ± 0.0017 0.2809 ± 0.0020 0.9947 ± 0.0092
frame_iteritems 1.8838 ± 0.0227 1.8937 ± 0.0279 0.9950 ± 0.0188
series_align_irregular_string 60.9044 ± 0.7522 61.2000 ± 0.6323 0.9953 ± 0.0160
groupbym_frame_apply 60.5675 ± 0.9349 60.8692 ± 0.9463 0.9953 ± 0.0217
stats_rank2d_axis1_average 13.5227 ± 0.8773 13.6383 ± 0.8856 0.9957 ± 0.0909
frame_to_csv 277.9836 ± 3.7095 279.2427 ± 5.0754 0.9958 ± 0.0222
read_csv_thou_vb 26.4393 ± 0.2321 26.5419 ± 0.2718 0.9962 ± 0.0134
indexing_dataframe_boolean_rows 0.1765 ± 0.0011 0.1772 ± 0.0015 0.9965 ± 0.0102
frame_ctor_list_of_dict 67.3875 ± 0.8141 67.6344 ± 0.9001 0.9965 ± 0.0177
frame_get_numeric_data 0.0459 ± 0.0004 0.0461 ± 0.0007 0.9966 ± 0.0169
frame_iteritems_cached 0.0639 ± 0.0020 0.0642 ± 0.0024 0.9967 ± 0.0471
groupby_multi_different_functions 11.8200 ± 0.0624 11.8596 ± 0.0932 0.9967 ± 0.0093
append_frame_single_mixed 0.9737 ± 0.0846 0.9841 ± 0.0847 0.9967 ± 0.1209
timeseries_asof_single 0.0377 ± 0.0005 0.0378 ± 0.0006 0.9968 ± 0.0197
timeseries_slice_minutely 0.0414 ± 0.0005 0.0415 ± 0.0005 0.9969 ± 0.0164
groupby_apply_dict_return 34.7286 ± 0.3478 34.8450 ± 0.5679 0.9969 ± 0.0188
period_setitem 746.4594 ± 8.6212 748.9278 ± 11.0465 0.9969 ± 0.0185
concat_small_frames 10.6045 ± 0.0675 10.6383 ± 0.1020 0.9969 ± 0.0114
frame_constructor_ndarray 0.0334 ± 0.0003 0.0335 ± 0.0005 0.9971 ± 0.0182
write_csv_standard 267.4531 ± 3.9582 268.3362 ± 5.3285 0.9971 ± 0.0243
groupby_multi_different_numpy_functions 11.8358 ± 0.0759 11.8699 ± 0.0738 0.9972 ± 0.0089
reindex_daterange_pad 0.1449 ± 0.0010 0.1453 ± 0.0014 0.9972 ± 0.0120
indexing_panel_subset 0.4469 ± 0.0033 0.4481 ± 0.0053 0.9976 ± 0.0137
reindex_daterange_backfill 0.1494 ± 0.0010 0.1497 ± 0.0016 0.9977 ± 0.0121
frame_ctor_nested_dict_int64 103.3669 ± 1.0917 103.6315 ± 1.7836 0.9977 ± 0.0198
append_frame_single_homogenous 0.2324 ± 0.0022 0.2330 ± 0.0023 0.9977 ± 0.0136
timeseries_1min_5min_ohlc 0.4956 ± 0.0030 0.4966 ± 0.0039 0.9980 ± 0.0097
reindex_frame_level_align 0.6218 ± 0.0040 0.6230 ± 0.0056 0.9982 ± 0.0110
panel_from_dict_equiv_indexes 22.4439 ± 0.1723 22.4863 ± 0.2186 0.9982 ± 0.0123
series_align_int64_index 32.8477 ± 0.1784 32.9009 ± 0.1936 0.9984 ± 0.0080
panel_from_dict_same_index 22.4174 ± 0.1834 22.4507 ± 0.2267 0.9986 ± 0.0129
read_table_multiple_date 1852.1563 ± 19.4540 1855.0047 ± 27.9213 0.9987 ± 0.0180
timeseries_1min_5min_mean 0.4872 ± 0.0026 0.4878 ± 0.0033 0.9989 ± 0.0087
replace_fillna 1.4792 ± 0.1068 1.4877 ± 0.1059 0.9990 ± 0.0974
sparse_frame_constructor 4.6531 ± 0.0459 4.6582 ± 0.0524 0.9990 ± 0.0148
series_ctor_from_dict 2.5720 ± 0.0111 2.5746 ± 0.0150 0.9990 ± 0.0072
read_table_multiple_date_baseline 837.9644 ± 8.9147 838.9167 ± 12.1464 0.9991 ± 0.0177
frame_to_string_floats 47.9212 ± 0.5630 47.9764 ± 0.7628 0.9991 ± 0.0194
frame_reindex_axis0 1.0288 ± 0.0078 1.0298 ± 0.0119 0.9991 ± 0.0133
frame_reindex_columns 0.2020 ± 0.0016 0.2022 ± 0.0019 0.9991 ± 0.0121
frame_boolean_row_select 0.2075 ± 0.0009 0.2077 ± 0.0014 0.9991 ± 0.0077
frame_fillna_many_columns_pad 11.8545 ± 0.1323 11.8662 ± 0.1362 0.9991 ± 0.0158
series_align_left_monotonic 14.8503 ± 0.0912 14.8613 ± 0.1038 0.9993 ± 0.0093
melt_dataframe 1.0850 ± 0.0078 1.0858 ± 0.0077 0.9993 ± 0.0101
frame_fillna_inplace 11.6751 ± 0.2087 11.6855 ± 0.2025 0.9994 ± 0.0247
frame_drop_dup_na_inplace 2.0111 ± 0.0103 2.0122 ± 0.0096 0.9994 ± 0.0070
groupby_frame_cython_many_columns 2.4077 ± 0.0116 2.4091 ± 0.0141 0.9995 ± 0.0075
frame_ctor_nested_dict 67.9949 ± 0.4608 68.0334 ± 0.4236 0.9995 ± 0.0092
timeseries_asof_nan 7.1119 ± 0.0627 7.1157 ± 0.0601 0.9995 ± 0.0122
frame_reindex_axis1 2.4248 ± 0.0368 2.4263 ± 0.0341 0.9996 ± 0.0203
series_drop_duplicates_int 0.6140 ± 0.0023 0.6142 ± 0.0023 0.9997 ± 0.0053
timeseries_period_downsample_mean 9.3964 ± 0.0262 9.3991 ± 0.0291 0.9997 ± 0.0042
dti_reset_index 0.4529 ± 0.0091 0.4531 ± 0.0085 0.9998 ± 0.0275
indexing_dataframe_boolean_rows_object 0.4054 ± 0.0016 0.4055 ± 0.0022 0.9998 ± 0.0067
stats_rank_average_int 18.1329 ± 0.2115 18.1389 ± 0.2243 0.9998 ± 0.0170
stats_rolling_mean 1.3577 ± 0.0044 1.3578 ± 0.0072 1.0000 ± 0.0062
groupby_pivot_table 16.3420 ± 0.0934 16.3417 ± 0.1068 1.0001 ± 0.0086
sort_level_zero 3.3187 ± 0.1217 3.3238 ± 0.1456 1.0001 ± 0.0524
reindex_fillna_backfill 0.0707 ± 0.0006 0.0707 ± 0.0007 1.0002 ± 0.0125
reindex_fillna_pad 0.0742 ± 0.0019 0.0742 ± 0.0021 1.0003 ± 0.0385
frame_drop_dup_inplace 2.2140 ± 0.0116 2.2133 ± 0.0129 1.0004 ± 0.0078
frame_drop_duplicates 18.3091 ± 0.1064 18.3022 ± 0.1000 1.0004 ± 0.0080
series_drop_duplicates_string 0.4529 ± 0.0036 0.4528 ± 0.0036 1.0004 ± 0.0112
series_value_counts_strings 4.2207 ± 0.0162 4.2188 ± 0.0329 1.0005 ± 0.0085
datetimeindex_normalize 22.4277 ± 0.2073 22.4172 ± 0.1919 1.0005 ± 0.0126
timeseries_to_datetime_iso8601 3.8940 ± 0.0482 3.8915 ± 0.0367 1.0007 ± 0.0152
lib_fast_zip_fillna 13.4962 ± 0.0750 13.4855 ± 0.0651 1.0008 ± 0.0074
timeseries_timestamp_downsample_mean 5.8368 ± 0.0176 5.8310 ± 0.0183 1.0010 ± 0.0044
reshape_stack_simple 1.7456 ± 0.0143 1.7440 ± 0.0154 1.0010 ± 0.0120
frame_drop_duplicates_na 17.9970 ± 0.0999 17.9767 ± 0.1071 1.0012 ± 0.0082
stats_rank2d_axis0_average 21.7999 ± 0.0819 21.7708 ± 0.1766 1.0014 ± 0.0092
timeseries_sort_index 17.6179 ± 0.1460 17.5926 ± 0.1269 1.0015 ± 0.0110
lib_fast_zip 10.6688 ± 0.0536 10.6489 ± 0.0585 1.0019 ± 0.0075
timeseries_asof 7.5577 ± 0.0625 7.5386 ± 0.0509 1.0026 ± 0.0107
stat_ops_series_std 0.5937 ± 0.0111 0.5922 ± 0.0109 1.0030 ± 0.0263
index_datetime_intersection 11.0669 ± 0.0623 11.0133 ± 0.0645 1.0049 ± 0.0081
groupby_series_simple_cython 4.1735 ± 0.0477 4.1508 ± 0.0511 1.0056 ± 0.0166
index_datetime_union 11.1319 ± 0.0546 11.0699 ± 0.0639 1.0056 ± 0.0075
read_csv_vb 18.0190 ± 0.2602 17.8894 ± 0.2271 1.0074 ± 0.0194
frame_fancy_lookup 2.0631 ± 0.0375 2.0469 ± 0.0496 1.0085 ± 0.0304
reindex_frame_level_reindex 1.1200 ± 0.0074 1.1125 ± 0.0541 1.0111 ± 0.0907
replace_replacena 1.6274 ± 0.1358 1.6190 ± 0.1444 1.0128 ± 0.1209
frame_sort_index_by_columns 35.3555 ± 0.2725 34.7723 ± 0.1765 1.0168 ± 0.0094
merge_2intkey_sort 34.2841 ± 0.3041 33.6600 ± 0.2171 1.0186 ± 0.0112
join_dataframe_index_single_key_small 5.6301 ± 0.8275 5.5872 ± 0.7877 1.0278 ± 0.2101
join_dataframe_index_multi 17.5588 ± 0.1660 17.0397 ± 0.1173 1.0305 ± 0.0120
merge_2intkey_nosort 15.8161 ± 0.9392 15.0701 ± 0.9006 1.0533 ± 0.0887
join_dataframe_index_single_key_bigger 9.4991 ± 3.4921 9.4172 ± 3.2368 1.1438 ± 0.5936
```
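The packing can be read straight off the new macros: one `isempty` bit per bucket, 32 buckets per `khint32_t` word, word index `i >> 5`, bit index `i & 0x1f`. A small Python model of the two macros that matter (illustrative only):

```python
def is_empty(flags, i):
    # __ac_isempty: ((flag[i>>5] >> (i & 0x1f)) & 1)
    return (flags[i >> 5] >> (i & 0x1F)) & 1

def set_empty_false(flags, i):
    # __ac_set_isempty_false: flag[i>>5] &= ~(1ul << (i & 0x1f))
    flags[i >> 5] &= ~(1 << (i & 0x1F)) & 0xFFFFFFFF

# memset(..., 0xff, ...): a fresh table starts with every bucket empty.
flags = [0xFFFFFFFF, 0xFFFFFFFF]          # flag words for 64 buckets
set_empty_false(flags, 40)                # mark bucket 40 occupied
print(is_empty(flags, 40), is_empty(flags, 41))  # → 0 1
```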
| https://api.github.com/repos/pandas-dev/pandas/pulls/2572 | 2012-12-20T14:49:38Z | 2012-12-28T15:27:33Z | null | 2012-12-28T15:27:33Z |
BUG: DatetimeIndex.unique numpy 1.6 repr #2563 | diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index 07dadc66cbc33..adc3fa96b1224 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -560,6 +560,9 @@ def __contains__(self, key):
except (KeyError, TypeError):
return False
+ def unique(self):
+ return self.asobject.unique()
+
def isin(self, values):
"""
Compute boolean array of whether each index value is found in the
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 855bbd02489bb..4f1e93beef5f6 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -61,7 +61,8 @@ def test_is_unique_monotonic(self):
def test_index_unique(self):
uniques = self.dups.index.unique()
- self.assert_(uniques.dtype == 'M8[ns]') # sanity
+ #self.assert_(uniques.dtype == 'M8[ns]') # sanity
+ self.assert_(isinstance(uniques[0], tslib.Timestamp))
def test_index_dupes_contains(self):
d = datetime(2011, 12, 5, 20, 30)
| #2563
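Routing `unique` through `self.asobject` means the result is built from the boxed `Timestamp` objects rather than raw datetime64 values (which numpy 1.6 reprs poorly); conceptually it is an order-preserving unique over those objects (plain-Python sketch):

```python
def unique_preserving_order(values):
    # What asobject.unique() does at the conceptual level: first
    # occurrence wins, original order is kept, and elements stay
    # boxed instead of degrading to raw datetime64.
    seen = set()
    out = []
    for v in values:
        if v not in seen:
            seen.add(v)
            out.append(v)
    return out

dates = ["2011-12-05 20:30", "2011-12-05 20:30", "2011-12-06 00:00"]
print(unique_preserving_order(dates))  # → ['2011-12-05 20:30', '2011-12-06 00:00']
```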
| https://api.github.com/repos/pandas-dev/pandas/pulls/2568 | 2012-12-19T17:26:00Z | 2012-12-28T14:10:20Z | null | 2014-06-23T08:06:15Z |
logx for DataFrame.plot() and Series.plot() | diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index c9444773ba5cc..e6b73dc2e77b2 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -1304,8 +1304,8 @@ class HistPlot(MPLPlot):
def plot_frame(frame=None, x=None, y=None, subplots=False, sharex=True,
sharey=False, use_index=True, figsize=None, grid=False,
legend=True, rot=None, ax=None, style=None, title=None,
- xlim=None, ylim=None, logy=False, xticks=None, yticks=None,
- kind='line', sort_columns=False, fontsize=None,
+ xlim=None, ylim=None, logx=False, logy=False, xticks=None,
+ yticks=None, kind='line', sort_columns=False, fontsize=None,
secondary_y=False, **kwds):
"""
@@ -1342,6 +1342,8 @@ def plot_frame(frame=None, x=None, y=None, subplots=False, sharex=True,
kind : {'line', 'bar', 'barh'}
bar : vertical bar plot
barh : horizontal bar plot
+ logx : boolean, default False
+ For line plots, use log scaling on x axis
logy : boolean, default False
For line plots, use log scaling on y axis
xticks : sequence
@@ -1383,17 +1385,17 @@ def plot_frame(frame=None, x=None, y=None, subplots=False, sharex=True,
return plot_series(frame[y], label=y, kind=kind, use_index=use_index,
rot=rot, xticks=xticks, yticks=yticks,
xlim=xlim, ylim=ylim, ax=ax, style=style,
- grid=grid, logy=logy, secondary_y=secondary_y,
- title=title, figsize=figsize, fontsize=fontsize,
- **kwds)
+ grid=grid, logx=logx, logy=logy,
+ secondary_y=secondary_y, title=title,
+ figsize=figsize, fontsize=fontsize, **kwds)
plot_obj = klass(frame, kind=kind, subplots=subplots, rot=rot,
legend=legend, ax=ax, style=style, fontsize=fontsize,
use_index=use_index, sharex=sharex, sharey=sharey,
xticks=xticks, yticks=yticks, xlim=xlim, ylim=ylim,
- title=title, grid=grid, figsize=figsize, logy=logy,
- sort_columns=sort_columns, secondary_y=secondary_y,
- **kwds)
+ title=title, grid=grid, figsize=figsize, logx=logx,
+ logy=logy, sort_columns=sort_columns,
+ secondary_y=secondary_y, **kwds)
plot_obj.generate()
plot_obj.draw()
if subplots:
@@ -1403,8 +1405,8 @@ def plot_frame(frame=None, x=None, y=None, subplots=False, sharex=True,
def plot_series(series, label=None, kind='line', use_index=True, rot=None,
xticks=None, yticks=None, xlim=None, ylim=None,
- ax=None, style=None, grid=None, legend=False, logy=False,
- secondary_y=False, **kwds):
+ ax=None, style=None, grid=None, legend=False, logx=False,
+ logy=False, secondary_y=False, **kwds):
"""
Plot the input series with the index on the x-axis using matplotlib
@@ -1424,6 +1426,8 @@ def plot_series(series, label=None, kind='line', use_index=True, rot=None,
matplotlib line style to use
ax : matplotlib axis object
If not passed, uses gca()
+ logx : boolean, default False
+ For line plots, use log scaling on x axis
logy : boolean, default False
For line plots, use log scaling on y axis
xticks : sequence
@@ -1464,7 +1468,7 @@ def plot_series(series, label=None, kind='line', use_index=True, rot=None,
if label is None:
label = series.name
- plot_obj = klass(series, kind=kind, rot=rot, logy=logy,
+ plot_obj = klass(series, kind=kind, rot=rot, logx=logx, logy=logy,
ax=ax, use_index=use_index, style=style,
xticks=xticks, yticks=yticks, xlim=xlim, ylim=ylim,
legend=legend, grid=grid, label=label,
| Trivial but convenient, as discussed in https://github.com/pydata/pandas/issues/2327
| https://api.github.com/repos/pandas-dev/pandas/pulls/2565 | 2012-12-19T12:40:27Z | 2012-12-28T15:22:34Z | null | 2014-06-25T01:09:44Z |
ENH: added support for data column queries | diff --git a/RELEASE.rst b/RELEASE.rst
index 63accf42c470d..d2b9952829619 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -22,6 +22,43 @@ Where to get it
* Binary installers on PyPI: http://pypi.python.org/pypi/pandas
* Documentation: http://pandas.pydata.org
+pandas 0.10.1
+=============
+
+**Release date:** 2013-??-??
+
+**New features**
+
+**Improvements to existing features**
+
+ - ``HDFStore``
+ - enables storing of multi-index dataframes (closes GH1277_)
+ - support data column indexing and selection, via ``data_columns`` keyword in append
+ - support write chunking to reduce memory footprint, via ``chunksize`` keyword to append
+ - support automagic indexing via ``index`` keywork to append
+ - support ``expectedrows`` keyword in append to inform ``PyTables`` about the expected tablesize
+ - support ``start`` and ``stop`` keywords in select to limit the row selection space
+ - added ``get_store`` context manager to automatically import with pandas
+ - added column filtering via ``columns`` keyword in select
+ - added methods append_to_multiple/select_as_multiple/select_as_coordinates to do multiple-table append/selection
+ - added support for datetime64 in columns
+ - added method ``unique`` to select the unique values in an indexable or data column
+
+**Bug fixes**
+
+ - ``HDFStore``
+ - correctly handle ``nan`` elements in string columns; serialize via the ``nan_rep`` keyword to append
+ - raise correctly on non-implemented column types (unicode/date)
+ - handle correctly ``Term`` passed types (e.g. ``index<1000``, when index is ``Int64``), (closes GH512_)
+
+**API Changes**
+
+ - ``HDFStore``
+ - removed keyword ``compression`` from ``put`` (replaced by keyword ``complib`` to be consistent across library)
+
+.. _GH512: https://github.com/pydata/pandas/issues/512
+.. _GH1277: https://github.com/pydata/pandas/issues/1277
+
pandas 0.10.0
=============
diff --git a/doc/source/io.rst b/doc/source/io.rst
index c73240725887f..bf9c913909dee 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1030,6 +1030,17 @@ Deletion of the object specified by the key
del store['wp']
store
+Closing a Store
+~~~~~~~~~~~~~~~
+
+.. ipython:: python
+
+
+ # closing a store
+ store.close()
+
+ # Working with, and automatically closing the store with the context manager.
+ with get_store('store.h5') as store:
+ store.keys()
.. ipython:: python
:suppress:
@@ -1095,7 +1106,7 @@ Storing Mixed Types in a Table
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Storing mixed-dtype data is supported. Strings are store as a fixed-width using the maximum size of the appended column. Subsequent appends will truncate strings at this length.
-Passing ``min_itemsize = { `values` : size }`` as a parameter to append will set a larger minimum for the string columns. Storing ``floats, strings, ints, bools`` are currently supported.
+Passing ``min_itemsize = { `values` : size }`` as a parameter to append will set a larger minimum for the string columns. Storing ``floats, strings, ints, bools, datetime64`` are currently supported. For string columns, passing ``nan_rep = 'my_nan_rep'`` to append will change the default nan representation on disk (which converts to/from `np.nan`), this defaults to `nan`.
.. ipython:: python
@@ -1103,6 +1114,11 @@ Passing ``min_itemsize = { `values` : size }`` as a parameter to append will set
df_mixed['string'] = 'string'
df_mixed['int'] = 1
df_mixed['bool'] = True
+ df_mixed['datetime64'] = Timestamp('20010102')
+
+ # make sure that we have datetime64[ns] types
+ df_mixed = df_mixed.convert_objects()
+ df_mixed.ix[3:5,['A','B','string','datetime64']] = np.nan
store.append('df_mixed', df_mixed, min_itemsize = { 'values' : 50 })
df_mixed1 = store.select('df_mixed')
@@ -1112,10 +1128,33 @@ Passing ``min_itemsize = { `values` : size }`` as a parameter to append will set
# we have provided a minimum string column size
store.root.df_mixed.table
+It is ok to store ``np.nan`` in a ``float`` or ``string`` column. Make sure to do a ``convert_objects()`` on the frame before storing a ``np.nan`` in a datetime64 column. Storing ``np.nan`` in an ``int`` or ``bool`` column will currently throw an ``Exception``, as these columns will have been converted to ``object`` type.
+
+Storing Multi-Index DataFrames
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Storing multi-index dataframes as tables is very similar to storing/selecting from homogeneous index DataFrames.
+
+.. ipython:: python
+
+ index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
+ ['one', 'two', 'three']],
+ labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
+ [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
+ names=['foo', 'bar'])
+ df_mi = DataFrame(np.random.randn(10, 3), index=index,
+ columns=['A', 'B', 'C'])
+ df_mi
+
+ store.append('df_mi',df_mi)
+ store.select('df_mi')
+
+ # the levels are automatically included as data columns
+ store.select('df_mi', Term('foo=bar'))
+
Querying a Table
~~~~~~~~~~~~~~~~
-
``select`` and ``delete`` operations have an optional criteria that can be specified to select/delete only
a subset of the data. This allows one to have a very large on-disk table and retrieve only a portion of the data.
@@ -1128,7 +1167,7 @@ Valid terms can be created from ``dict, list, tuple, or string``. Objects can be
- ``dict(field = 'index', op = '>', value = '20121114')``
- ``('index', '>', '20121114')``
- - ``'index>20121114'``
+ - ``'index > 20121114'``
- ``('index', '>', datetime(2012,11,14))``
- ``('index', ['20121114','20121115'])``
- ``('major_axis', '=', Timestamp('2012/11/14'))``
@@ -1143,14 +1182,30 @@ Queries are built up using a list of ``Terms`` (currently only **anding** of ter
store
store.select('wp',[ Term('major_axis>20000102'), Term('minor_axis', '=', ['A','B']) ])
+The ``columns`` keyword can be supplied to ``select`` to filter the returned columns; this is equivalent to passing a ``Term('columns',list_of_columns_to_filter)``
+
+.. ipython:: python
+
+ store.select('df', columns = ['A','B'])
+
+The ``start`` and ``stop`` parameters can be specified to limit the total search space. These are in terms of the total number of rows in a table.
+
+.. ipython:: python
+
+ # this is effectively what the storage of a Panel looks like
+ wp.to_frame()
+
+ # limiting the search
+ store.select('wp',[ Term('major_axis>20000102'), Term('minor_axis', '=', ['A','B']) ], start=0, stop=10)
+
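A minimal sketch of the start/stop semantics (an assumption-labeled, stdlib-only model, not the actual HDFStore implementation): the bounds slice the table's row space *before* the ``where`` criteria are applied, so they refer to on-disk row numbers rather than result rows.

```python
# Hypothetical model of select(..., start=, stop=): limit the row space
# first, then apply the where criteria to the surviving rows.

def select(rows, where=lambda r: True, start=None, stop=None):
    """Restrict the search space to rows[start:stop], then filter."""
    return [r for r in rows[start:stop] if where(r)]

rows = list(range(20))
subset = select(rows, where=lambda r: r % 2 == 0, start=0, stop=10)
# even values among the first 10 rows only
```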
+
Indexing
~~~~~~~~
-You can create an index for a table with ``create_table_index`` after data is already in the table (after and ``append/put`` operation). Creating a table index is **highly** encouraged. This will speed your queries a great deal when you use a ``select`` with the indexed dimension as the ``where``. It is not automagically done now because you may want to index different axes than the default (except in the case of a DataFrame, where it almost always makes sense to index the ``index``.
+You can create/modify an index for a table with ``create_table_index`` after data is already in the table (after an ``append/put`` operation). Creating a table index is **highly** encouraged. This will speed your queries a great deal when you use a ``select`` with the indexed dimension as the ``where``. **Indexes are automagically created (starting in 0.10.1)** on the indexables and any data columns you specify. This behavior can be turned off by passing ``index=False`` to ``append``.
.. ipython:: python
- # create an index
- store.create_table_index('df')
+ # we have automagically already created an index (in the first section)
i = store.root.df.table.cols.index.index
i.optlevel, i.kind
@@ -1160,6 +1215,90 @@ You can create an index for a table with ``create_table_index`` after data is al
i.optlevel, i.kind
+Query via Data Columns
+~~~~~~~~~~~~~~~~~~~~~~
+You can designate (and index) certain columns on which you want to be able to perform queries (other than the `indexable` columns, which you can always query). For instance, say you want to perform this common operation, on-disk, and return just the frame that matches this query.
+
+.. ipython:: python
+
+ df_dc = df.copy()
+ df_dc['string'] = 'foo'
+ df_dc.ix[4:6,'string'] = np.nan
+ df_dc.ix[7:9,'string'] = 'bar'
+ df_dc['string2'] = 'cool'
+ df_dc
+
+ # on-disk operations
+ store.append('df_dc', df_dc, data_columns = ['B','C','string','string2'])
+ store.select('df_dc',[ Term('B>0') ])
+
+ # getting creative
+ store.select('df_dc',[ 'B > 0', 'C > 0', 'string == foo' ])
+
+ # this is in-memory version of this type of selection
+ df_dc[(df_dc.B > 0) & (df_dc.C > 0) & (df_dc.string == 'foo')]
+
+ # note that we have automagically created this index and that the B/C/string/string2 columns are stored separately as ``PyTables`` columns
+ store.root.df_dc.table
+
+There is some performance degradation by making lots of columns into `data columns`, so it is up to the user to designate these. In addition, you cannot change data columns (nor indexables) after the first append/put operation (of course you can simply read in the data and create a new table!)
+
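To make the query mechanics concrete, here is a stdlib-only sketch of how a string term such as ``'B > 0'`` can be reduced to a predicate over data columns. This is an illustration under assumption only (the real parsing lives in pandas' ``Term`` machinery); terms are ANDed, mirroring the store's query semantics.

```python
# Hypothetical, simplified stand-in for HDFStore Term parsing.
# A string like 'B > 0' names a data column, an operator, and a value;
# data columns are queryable because they live as separate PyTables columns.
import operator

OPS = {'>': operator.gt, '<': operator.lt, '==': operator.eq, '=': operator.eq}

def parse_term(s):
    field, op, value = s.split()
    try:
        value = float(value)
    except ValueError:
        pass  # leave string values (e.g. 'foo') as-is
    return field, OPS[op], value

def select(rows, terms):
    """rows: list of dicts; all terms must hold (terms are ANDed)."""
    parsed = [parse_term(t) for t in terms]
    return [r for r in rows if all(op(r[f], v) for f, op, v in parsed)]

rows = [{'B': 1.0, 'string': 'foo'}, {'B': -2.0, 'string': 'foo'},
        {'B': 3.0, 'string': 'bar'}]
matches = select(rows, ['B > 0', 'string == foo'])
# only the first row satisfies both terms
```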
+Advanced Queries
+~~~~~~~~~~~~~~~~
+
+**Unique**
+
+To retrieve the *unique* values of an indexable or data column, use the method ``unique``. This will, for example, enable you to get the index very quickly. Note that ``nan`` values are excluded from the result set.
+
+.. ipython:: python
+
+ store.unique('df_dc','index')
+ store.unique('df_dc','string')
+
+**Replicating or**
+
+``not`` and ``or`` conditions are unsupported at this time; however, ``or`` operations are easy to replicate by repeatedly applying the criteria to the table, and then ``concat`` the results.
+
+.. ipython:: python
+
+ crit1 = [ Term('B>0'), Term('C>0'), Term('string=foo') ]
+ crit2 = [ Term('B<0'), Term('C>0'), Term('string=foo') ]
+
+ concat([ store.select('df_dc',c) for c in [ crit1, crit2 ] ])
+
+**Table Object**
+
+If you want to inspect the table object, retrieve it via ``get_table``. You could use this programmatically to, say, get the number of rows in the table.
+
+.. ipython:: python
+
+ store.get_table('df_dc').nrows
+
+Multiple Table Queries
+~~~~~~~~~~~~~~~~~~~~~~
+
+New in 0.10.1 are the methods ``append_to_multiple`` and ``select_as_multiple``, which can perform appending/selecting from multiple tables at once. The idea is to have one table (call it the selector table) on which you index most/all of the columns, and perform your queries. The other table(s) are data tables that are indexed the same as the selector table. You can then perform a very fast query on the selector table, yet get lots of data back. This method works similarly to having a very wide table, but is more efficient in terms of queries.
+
+Note, **THE USER IS RESPONSIBLE FOR SYNCHRONIZING THE TABLES**. This means: append to the tables in the same order. ``append_to_multiple`` splits a single object into multiple tables, given a specification (as a dictionary). This dictionary is a mapping of the table names to the 'columns' you want included in that table. Pass ``None`` for a single table (optional) to let it have the remaining columns. The argument ``selector`` defines which table is the selector table.
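The column-splitting rule can be sketched in plain Python (an illustrative model under stated assumptions, not the HDFStore code itself): each named table claims the columns listed for it, and the single ``None`` entry receives whatever columns remain.

```python
# Hypothetical sketch of how an append_to_multiple spec is resolved.
# Exactly one table may map to None; it is assigned the unclaimed columns.

def split_spec(spec, all_columns):
    remain_key = None
    claimed = []
    out = {}
    for name, cols in spec.items():
        if cols is None:
            if remain_key is not None:
                raise ValueError("only one table in the spec may be None")
            remain_key = name
        else:
            out[name] = list(cols)
            claimed.extend(cols)
    if remain_key is not None:
        out[remain_key] = [c for c in all_columns if c not in claimed]
    return out

spec = split_spec({'df1_mt': ['A', 'B'], 'df2_mt': None},
                  ['A', 'B', 'C', 'D', 'E', 'F', 'foo'])
# df2_mt picks up every column df1_mt did not claim
```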
+
+.. ipython:: python
+
+ df_mt = DataFrame(randn(8, 6), index=date_range('1/1/2000', periods=8),
+ columns=['A', 'B', 'C', 'D', 'E', 'F'])
+ df_mt['foo'] = 'bar'
+
+ # you can also create the tables individually
+ store.append_to_multiple({ 'df1_mt' : ['A','B'], 'df2_mt' : None }, df_mt, selector = 'df1_mt')
+ store
+
+ # individual tables were created
+ store.select('df1_mt')
+ store.select('df2_mt')
+
+ # as a multiple
+ store.select_as_multiple(['df1_mt','df2_mt'], where = [ 'A>0','B>0' ], selector = 'df1_mt')
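Conceptually, ``select_as_multiple`` filters row coordinates on the selector table and then pulls those same row numbers from every table, joining the pieces. A minimal stdlib sketch of that idea (a hypothetical model, not the real implementation):

```python
# Hypothetical model: tables is a dict of name -> list of row dicts,
# all the same length. The predicate runs on the selector table only;
# matching row numbers are then taken from every table and merged.

def select_as_multiple(tables, selector, predicate):
    rows = tables[selector]
    coords = [i for i, row in enumerate(rows) if predicate(row)]
    out = []
    for i in coords:
        merged = {}
        for name in tables:          # same row number in each table
            merged.update(tables[name][i])
        out.append(merged)
    return out

tables = {
    'df1_mt': [{'A': 1, 'B': 2}, {'A': -1, 'B': 5}, {'A': 3, 'B': 1}],
    'df2_mt': [{'C': 9}, {'C': 8}, {'C': 7}],
}
result = select_as_multiple(tables, 'df1_mt', lambda r: r['A'] > 0)
# rows 0 and 2 match; each result row combines columns from both tables
```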
+
+
Delete from a Table
~~~~~~~~~~~~~~~~~~~
You can delete from a table selectively by specifying a ``where``. In deleting rows, it is important to understand the ``PyTables`` deletes rows by erasing the rows, then **moving** the following data. Thus deleting can potentially be a very expensive operation depending on the orientation of your data. This is especially true in higher dimensional objects (``Panel`` and ``Panel4D``). To get optimal deletion speed, it pays to have the dimension you are deleting be the first of the ``indexables``.
@@ -1184,6 +1323,33 @@ It should be clear that a delete operation on the ``major_axis`` will be fairly
store.remove('wp', 'major_axis>20000102' )
store.select('wp')
+Please note that HDF5 **DOES NOT RECLAIM SPACE** in the h5 files automatically. Thus, repeatedly deleting (or removing nodes) and adding again **WILL TEND TO INCREASE THE FILE SIZE**. To *clean* the file, use ``ptrepack`` (see below).
+
+Compression
+~~~~~~~~~~~
+``PyTables`` allows the stored data to be compressed. This applies to all kinds of stores, not just tables.
+
+ - Pass ``complevel=int`` for a compression level (1-9, with 0 being no compression, and the default)
+ - Pass ``complib=lib`` where lib is any of ``zlib, bzip2, lzo, blosc`` for whichever compression library you prefer.
+
+``HDFStore`` will use the file-based compression scheme if no overriding ``complib`` or ``complevel`` options are provided. ``blosc`` offers very fast compression, and is the one I use most. Note that ``lzo`` and ``bzip2`` may not be installed (by Python) by default.
+
+Compression for all objects within the file
+
+ - ``store_compressed = HDFStore('store_compressed.h5', complevel=9, complib='blosc')``
+
+Or on-the-fly compression (this only applies to tables). You can turn off file compression for a specific table by passing ``complevel=0``
+
+ - ``store.append('df', df, complib='zlib', complevel=5)``
+
+**ptrepack**
+
+``PyTables`` offers better write performance when tables are compressed after writing, as opposed to turning on compression at the very beginning. You can use the supplied ``PyTables`` utility ``ptrepack``. In addition, ``ptrepack`` can change compression levels after the fact.
+
+ - ``ptrepack --chunkshape=auto --propindexes --complevel=9 --complib=blosc in.h5 out.h5``
+
+Furthermore ``ptrepack in.h5 out.h5`` will *repack* the file to allow you to reuse previously deleted space (alternatively, one can simply remove the file and write again).
+
Notes & Caveats
~~~~~~~~~~~~~~~
@@ -1216,14 +1382,9 @@ Performance
- ``Tables`` come with a writing performance penalty as compared to regular stores. The benefit is the ability to append/delete and query (potentially very large amounts of data).
Write times are generally longer as compared with regular stores. Query times can be quite fast, especially on an indexed axis.
- - ``Tables`` can (as of 0.10.0) be expressed as different types.
-
- - ``AppendableTable`` which is a similiar table to past versions (this is the default).
- - ``WORMTable`` (pending implementation) - is available to faciliate very fast writing of tables that are also queryable (but CANNOT support appends)
-
- - ``Tables`` offer better performance when compressed after writing them (as opposed to turning on compression at the very beginning)
- use the pytables utilities ``ptrepack`` to rewrite the file (and also can change compression methods)
- - Duplicate rows can be written, but are filtered out in selection (with the last items being selected; thus a table is unique on major, minor pairs)
+ - You can pass ``chunksize=an integer`` to ``append``, to change the writing chunksize (default is 50000). This will significantly lower your memory usage on writing.
+ - You can pass ``expectedrows=an integer`` to the first ``append``, to set the TOTAL number of rows that ``PyTables`` will expect. This will optimize read/write performance.
+ - Duplicate rows can be written to tables, but are filtered out in selection (with the last items being selected; thus a table is unique on major, minor pairs)
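The ``chunksize`` behavior above can be sketched as follows (a stdlib-only illustration; the actual chunked write happens inside ``HDFStore.append``): only one chunk of rows is materialized at a time, which is what bounds memory use.

```python
# Hypothetical sketch of chunked writing: call write() on successive
# slices of at most `chunksize` rows instead of the whole object at once.

def write_in_chunks(rows, write, chunksize=50000):
    for start in range(0, len(rows), chunksize):
        write(rows[start:start + chunksize])

written = []
write_in_chunks(list(range(10)), written.append, chunksize=4)
# three write calls: rows 0-3, 4-7, 8-9
```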
Experimental
~~~~~~~~~~~~
diff --git a/doc/source/v0.10.1.txt b/doc/source/v0.10.1.txt
new file mode 100644
index 0000000000000..b8137fda540cd
--- /dev/null
+++ b/doc/source/v0.10.1.txt
@@ -0,0 +1,129 @@
+.. _whatsnew_0101:
+
+v0.10.1 (January ??, 2013)
+---------------------------
+
+This is a minor release from 0.10.0 and includes many new features and
+enhancements along with a large number of bug fixes. There are also a number of
+important API changes that long-time pandas users should pay close attention
+to.
+
+API changes
+~~~~~~~~~~~
+
+New features
+~~~~~~~~~~~~
+
+HDFStore
+~~~~~~~~
+
+.. ipython:: python
+ :suppress:
+ :okexcept:
+
+ os.remove('store.h5')
+
+You can designate (and index) certain columns that you want to be able to perform queries on a table, by passing a list to ``data_columns``
+
+.. ipython:: python
+
+ store = HDFStore('store.h5')
+ df = DataFrame(randn(8, 3), index=date_range('1/1/2000', periods=8),
+ columns=['A', 'B', 'C'])
+ df['string'] = 'foo'
+ df.ix[4:6,'string'] = np.nan
+ df.ix[7:9,'string'] = 'bar'
+ df['string2'] = 'cool'
+ df
+
+ # on-disk operations
+ store.append('df', df, data_columns = ['B','C','string','string2'])
+ store.select('df',[ 'B > 0', 'string == foo' ])
+
+ # this is in-memory version of this type of selection
+ df[(df.B > 0) & (df.string == 'foo')]
+
+Retrieving unique values in an indexable or data column.
+
+.. ipython:: python
+
+ store.unique('df','index')
+ store.unique('df','string')
+
+You can now store ``datetime64`` in data columns
+
+.. ipython:: python
+
+ df_mixed = df.copy()
+ df_mixed['datetime64'] = Timestamp('20010102')
+ df_mixed.ix[3:4,['A','B']] = np.nan
+
+ store.append('df_mixed', df_mixed)
+ df_mixed1 = store.select('df_mixed')
+ df_mixed1
+ df_mixed1.get_dtype_counts()
+
+You can pass the ``columns`` keyword to ``select`` to filter the returned columns; this is equivalent to passing a ``Term('columns',list_of_columns_to_filter)``
+
+.. ipython:: python
+
+ store.select('df',columns = ['A','B'])
+
+``HDFStore`` now serializes multi-index dataframes when appending tables.
+
+.. ipython:: python
+
+ index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
+ ['one', 'two', 'three']],
+ labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
+ [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
+ names=['foo', 'bar'])
+ df = DataFrame(np.random.randn(10, 3), index=index,
+ columns=['A', 'B', 'C'])
+ df
+
+ store.append('mi',df)
+ store.select('mi')
+
+ # the levels are automatically included as data columns
+ store.select('mi', Term('foo=bar'))
+
+Multi-table creation via ``append_to_multiple`` and selection via ``select_as_multiple`` can create/select from multiple tables and return a combined result, by using ``where`` on a selector table.
+
+.. ipython:: python
+
+ df_mt = DataFrame(randn(8, 6), index=date_range('1/1/2000', periods=8),
+ columns=['A', 'B', 'C', 'D', 'E', 'F'])
+ df_mt['foo'] = 'bar'
+
+ # you can also create the tables individually
+ store.append_to_multiple({ 'df1_mt' : ['A','B'], 'df2_mt' : None }, df_mt, selector = 'df1_mt')
+ store
+
+ # individual tables were created
+ store.select('df1_mt')
+ store.select('df2_mt')
+
+ # as a multiple
+ store.select_as_multiple(['df1_mt','df2_mt'], where = [ 'A>0','B>0' ], selector = 'df1_mt')
+
+.. ipython:: python
+ :suppress:
+
+ store.close()
+ import os
+ os.remove('store.h5')
+
+**Enhancements**
+
+- You can pass ``nan_rep = 'my_nan_rep'`` to append, to change the default nan representation on disk (which converts to/from `np.nan`); this defaults to `nan`.
+- You can pass ``index`` to ``append``. This defaults to ``True``. This will automagically create indices on the *indexables* and *data columns* of the table
+- You can pass ``chunksize=an integer`` to ``append``, to change the writing chunksize (default is 50000). This will significantly lower your memory usage on writing.
+- You can pass ``expectedrows=an integer`` to the first ``append``, to set the TOTAL number of rows that ``PyTables`` will expect. This will optimize read/write performance.
+- ``Select`` now supports passing ``start`` and ``stop`` to provide selection space limiting in selection.
+
+
+See the `full release notes
+<https://github.com/pydata/pandas/blob/master/RELEASE.rst>`__ or issue tracker
+on GitHub for a complete list.
+
diff --git a/doc/source/whatsnew.rst b/doc/source/whatsnew.rst
index 82ed64680f1eb..6c125c45a2599 100644
--- a/doc/source/whatsnew.rst
+++ b/doc/source/whatsnew.rst
@@ -16,6 +16,8 @@ What's New
These are new features and improvements of note in each release.
+.. include:: v0.10.1.txt
+
.. include:: v0.10.0.txt
.. include:: v0.9.1.txt
diff --git a/pandas/__init__.py b/pandas/__init__.py
index 1d45727257eeb..6c58c708b8306 100644
--- a/pandas/__init__.py
+++ b/pandas/__init__.py
@@ -32,7 +32,7 @@
from pandas.io.parsers import (read_csv, read_table, read_clipboard,
read_fwf, to_clipboard, ExcelFile,
ExcelWriter)
-from pandas.io.pytables import HDFStore, Term
+from pandas.io.pytables import HDFStore, Term, get_store
from pandas.util.testing import debug
from pandas.tools.describe import value_range
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 175845fd38a2b..346dfb7c8b4ce 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -21,7 +21,6 @@
from pandas.tseries.api import PeriodIndex, DatetimeIndex
from pandas.core.common import adjoin
from pandas.core.algorithms import match, unique, factorize
-from pandas.core.strings import str_len
from pandas.core.categorical import Categorical
from pandas.core.common import _asarray_tuplesafe, _try_sort
from pandas.core.internals import BlockManager, make_block, form_blocks
@@ -36,7 +35,7 @@
from contextlib import contextmanager
# versioning attribute
-_version = '0.10'
+_version = '0.10.1'
class IncompatibilityWarning(Warning): pass
@@ -79,6 +78,13 @@ class IncompatibilityWarning(Warning): pass
'WidePanel': 'wide_table',
}
+# axes map
+_AXES_MAP = {
+ DataFrame : [0],
+ Panel : [1,2],
+ Panel4D : [1,2,3],
+}
+
# oh the troubles to reduce import time
_table_mod = None
_table_supports_index = False
@@ -108,28 +114,7 @@ def get_store(path, mode='a', complevel=None, complib=None,
Parameters
----------
- path : string
- File path to HDF5 file
- mode : {'a', 'w', 'r', 'r+'}, default 'a'
-
- ``'r'``
- Read-only; no data can be modified.
- ``'w'``
- Write; a new file is created (an existing file with the same
- name would be deleted).
- ``'a'``
- Append; an existing file is opened for reading and writing,
- and if the file does not exist it is created.
- ``'r+'``
- It is similar to ``'a'``, but the file must already exist.
- complevel : int, 1-9, default 0
- If a complib is specified compression will be applied
- where possible
- complib : {'zlib', 'bzip2', 'lzo', 'blosc', None}, default None
- If complevel is > 0 apply compression to objects written
- in the store wherever possible
- fletcher32 : bool, default False
- If applying compression use the fletcher32 checksum
+ same as HDFStore
Examples
--------
@@ -336,7 +321,7 @@ def get(self, key):
raise KeyError('No object named %s in the file' % key)
return self._read_group(group)
- def select(self, key, where=None, **kwargs):
+ def select(self, key, where=None, start=None, stop=None, columns=None, **kwargs):
"""
Retrieve pandas object stored in file, optionally based on where
criteria
@@ -344,16 +329,102 @@ def select(self, key, where=None, **kwargs):
Parameters
----------
key : object
+
+ Optional Parameters
+ -------------------
where : list of Term (or convertable) objects, optional
+ start : integer (defaults to None), row number to start selection
+ stop : integer (defaults to None), row number to stop selection
+ columns : a list of columns that if not None, will limit the return columns
"""
group = self.get_node(key)
if group is None:
raise KeyError('No object named %s in the file' % key)
- return self._read_group(group, where, **kwargs)
+ return self._read_group(group, where=where, start=start, stop=stop, columns=columns, **kwargs)
+
+ def select_as_coordinates(self, key, where=None, **kwargs):
+ """
+ return the selection as a Coordinates object. Note that start/stop/columns parameters are inapplicable here.
+
+ Parameters
+ ----------
+ key : object
- def put(self, key, value, table=False, append=False,
- compression=None, **kwargs):
+ Optional Parameters
+ -------------------
+ where : list of Term (or convertable) objects, optional
+ """
+ return self.get_table(key).read_coordinates(where = where, **kwargs)
+
+ def unique(self, key, column, **kwargs):
+ """
+ return a single column uniquely from the table. This is generally only useful to select an indexable
+
+ Parameters
+ ----------
+ key : object
+ column: the column of interest
+
+ Exceptions
+ ----------
+ raises KeyError if the column is not found (or key is not a valid store)
+ raises ValueError if the column can not be extracted individually (it is part of a data block)
+
+ """
+ return self.get_table(key).read_column(column = column, **kwargs)
+
+ def select_as_multiple(self, keys, where=None, selector=None, columns=None, **kwargs):
+ """ Retrieve pandas objects from multiple tables
+
+ Parameters
+ ----------
+ keys : a list of the tables
+ selector : the table to apply the where criteria (defaults to keys[0] if not supplied)
+ columns : the columns I want back
+
+ Exceptions
+ ----------
+ raises if any of the keys do not refer to tables or if they are not ALL THE SAME DIMENSIONS
+ """
+
+ # default to single select
+ if isinstance(keys, (list,tuple)) and len(keys) == 1:
+ keys = keys[0]
+ if isinstance(keys,basestring):
+ return self.select(key = keys, where=where, columns = columns, **kwargs)
+
+ if not isinstance(keys, (list,tuple)):
+ raise Exception("keys must be a list/tuple")
+
+ if len(keys) == 0:
+ raise Exception("keys must have a non-zero length")
+
+ if selector is None:
+ selector = keys[0]
+
+ # collect the tables
+ tbls = [ self.get_table(k) for k in keys ]
+
+ # validate rows
+ nrows = tbls[0].nrows
+ for t in tbls:
+ if t.nrows != nrows:
+ raise Exception("all tables must have exactly the same nrows!")
+
+ # select coordinates from the selector table
+ c = self.select_as_coordinates(selector, where)
+
+ # collect the returns objs
+ objs = [ t.read(where = c, columns = columns) for t in tbls ]
+
+ # axis is the concatenation axis
+ axis = list(set([ t.non_index_axes[0][0] for t in tbls ]))[0]
+
+ # concat and return
+ return concat(objs, axis = axis, verify_integrity = True)
+
+ def put(self, key, value, table=False, append=False, **kwargs):
"""
Store object in HDFStore
@@ -368,15 +439,10 @@ def put(self, key, value, table=False, append=False,
append : boolean, default False
For table data structures, append the input data to the existing
table
- compression : {None, 'blosc', 'lzo', 'zlib'}, default None
- Use a compression algorithm to compress the data
- If None, the compression settings specified in the ctor will
- be used.
"""
- self._write_to_group(key, value, table=table, append=append,
- comp=compression, **kwargs)
+ self._write_to_group(key, value, table=table, append=append, **kwargs)
- def remove(self, key, where=None):
+ def remove(self, key, where=None, start=None, stop=None):
"""
Remove pandas object partially by specifying the where condition
@@ -384,9 +450,12 @@ def remove(self, key, where=None):
----------
key : string
Node to remove or delete rows from
- where : list
- For Table node, delete specified rows. See HDFStore.select for more
- information
+
+ Optional Parameters
+ -------------------
+ where : list of Term (or convertable) objects, optional
+ start : integer (defaults to None), row number to start selection
+ stop : integer (defaults to None), row number to stop selection
Returns
-------
@@ -406,11 +475,11 @@ def remove(self, key, where=None):
if not _is_table_type(group):
raise Exception('can only remove with where on objects written as tables')
t = create_table(self, group)
- return t.delete(where)
+ return t.delete(where = where, start=start, stop=stop)
return None
- def append(self, key, value, **kwargs):
+ def append(self, key, value, columns = None, **kwargs):
"""
Append to Table in file. Node must already exist and be Table
format.
@@ -418,15 +487,83 @@ def append(self, key, value, **kwargs):
Parameters
----------
key : object
- value : {Series, DataFrame, Panel}
+ value : {Series, DataFrame, Panel, Panel4D}
+
+ Optional Parameters
+ -------------------
+ data_columns : list of columns to create as data columns
+ min_itemsize : dict of columns that specify minimum string sizes
+ nan_rep : string to use as the string nan representation
+ chunksize : size to chunk the writing
+ expectedrows : expected TOTAL row size of this table
Notes
-----
Does *not* check if data being appended overlaps with existing
data in the table, so be careful
"""
+ if columns is not None:
+ raise Exception("columns is not a supported keyword in append, try data_columns")
+
self._write_to_group(key, value, table=True, append=True, **kwargs)
+ def append_to_multiple(self, d, value, selector, data_columns = None, axes = None, **kwargs):
+ """
+ Append to multiple tables
+
+ Parameters
+ ----------
+ d : a dict of table_name to table_columns, None is acceptable as the values of one node (this will get all the remaining columns)
+ value : a pandas object
+ selector : a string that designates the indexable table; all of its columns will be designated as data_columns, unless data_columns is passed,
+ in which case these are used
+
+ Notes
+ -----
+ axes parameter is currently not accepted
+
+ """
+ if axes is not None:
+ raise Exception("axes is currently not accepted as a parameter to append_to_multiple; you can create the tables independently instead")
+
+ if not isinstance(d, dict):
+ raise Exception("append_to_multiple must have a dictionary specified as the way to split the value")
+
+ if selector not in d:
+ raise Exception("append_to_multiple requires a selector that is in passed dict")
+
+ # figure out the splitting axis (the non_index_axis)
+ axis = list(set(range(value.ndim))-set(_AXES_MAP[type(value)]))[0]
+
+ # figure out how to split the value
+ remain_key = None
+ remain_values = []
+ for k, v in d.items():
+ if v is None:
+ if remain_key is not None:
+ raise Exception("append_to_multiple can only have one value in d that is None")
+ remain_key = k
+ else:
+ remain_values.extend(v)
+ if remain_key is not None:
+ ordered = value.axes[axis]
+ ordd = ordered-Index(remain_values)
+ ordd = sorted(ordered.get_indexer(ordd))
+ d[remain_key] = ordered.take(ordd)
+
+ # data_columns
+ if data_columns is None:
+ data_columns = d[selector]
+
+ # append
+ for k, v in d.items():
+ dc = data_columns if k == selector else None
+
+ # compute the val
+ val = value.reindex_axis(v, axis = axis, copy = False)
+
+ self.append(k, val, data_columns = dc, **kwargs)
+
def create_table_index(self, key, **kwargs):
""" Create a pytables index on the table
Paramaters
@@ -464,13 +601,24 @@ def get_node(self, key):
except:
return None
+ def get_table(self, key):
+ """ return the table object for a key, raise if not in the file or a non-table """
+ group = self.get_node(key)
+ if group is None:
+ raise KeyError('No object named %s in the file' % key)
+ if not _is_table_type(group):
+ raise Exception("cannot return a table object for a non-table")
+ t = create_table(self, group)
+ t.infer_axes()
+ return t
+
###### private methods ######
def _get_handler(self, op, kind):
return getattr(self, '_%s_%s' % (op, kind))
def _write_to_group(self, key, value, table=False, append=False,
- comp=None, **kwargs):
+ complib=None, **kwargs):
group = self.get_node(key)
if group is None:
paths = key.split('/')
@@ -494,20 +642,19 @@ def _write_to_group(self, key, value, table=False, append=False,
kind = '%s_table' % kind
handler = self._get_handler(op='write', kind=kind)
wrapper = lambda value: handler(group, value, append=append,
- comp=comp, **kwargs)
+ complib=complib, **kwargs)
else:
if append:
raise ValueError('Can only append to Tables')
- if comp:
+ if complib:
raise ValueError('Compression only supported on Tables')
handler = self._get_handler(op='write', kind=kind)
wrapper = lambda value: handler(group, value)
- wrapper(value)
group._v_attrs.pandas_type = kind
group._v_attrs.pandas_version = _version
- #group._v_attrs.meta = getattr(value,'meta',None)
+ wrapper(value)
def _write_series(self, group, series):
self._write_index(group, 'index', series.index)
@@ -589,7 +736,7 @@ def _read_sparse_panel(self, group, where=None):
def _write_frame(self, group, df):
self._write_block_manager(group, df._data)
- def _read_frame(self, group, where=None):
+ def _read_frame(self, group, where=None, **kwargs):
return DataFrame(self._read_block_manager(group))
def _write_block_manager(self, group, data):
@@ -631,34 +778,39 @@ def _write_wide(self, group, panel):
panel._consolidate_inplace()
self._write_block_manager(group, panel._data)
- def _read_wide(self, group, where=None):
+ def _read_wide(self, group, where=None, **kwargs):
return Panel(self._read_block_manager(group))
- def _write_ndim_table(self, group, obj, append=False, comp=None, axes=None, **kwargs):
+ def _write_ndim_table(self, group, obj, append=False, axes=None, index=True, **kwargs):
if axes is None:
- axes = [1,2,3]
+ axes = _AXES_MAP[type(obj)]
t = create_table(self, group, typ = 'appendable_ndim')
- t.write(axes=axes, obj=obj,
- append=append, compression=comp, **kwargs)
+ t.write(axes=axes, obj=obj, append=append, **kwargs)
+ if index:
+ t.create_index(columns = index)
def _read_ndim_table(self, group, where=None, **kwargs):
t = create_table(self, group, **kwargs)
- return t.read(where)
+ return t.read(where, **kwargs)
- def _write_frame_table(self, group, df, append=False, comp=None, axes=None, **kwargs):
+ def _write_frame_table(self, group, df, append=False, axes=None, index=True, **kwargs):
if axes is None:
- axes = [0]
- t = create_table(self, group, typ = 'appendable_frame')
- t.write(axes=axes, obj=df, append=append, compression=comp, **kwargs)
+ axes = _AXES_MAP[type(df)]
+
+ t = create_table(self, group, typ = 'appendable_frame' if df.index.nlevels == 1 else 'appendable_multiframe')
+ t.write(axes=axes, obj=df, append=append, **kwargs)
+ if index:
+ t.create_index(columns = index)
_read_frame_table = _read_ndim_table
- def _write_wide_table(self, group, panel, append=False, comp=None, axes=None, **kwargs):
+ def _write_wide_table(self, group, panel, append=False, axes=None, index=True, **kwargs):
if axes is None:
- axes = [1,2]
+ axes = _AXES_MAP[type(panel)]
t = create_table(self, group, typ = 'appendable_panel')
- t.write(axes=axes, obj=panel,
- append=append, compression=comp, **kwargs)
+ t.write(axes=axes, obj=panel, append=append, **kwargs)
+ if index:
+ t.create_index(columns = index)
_read_wide_table = _read_ndim_table
@@ -847,14 +999,9 @@ def _read_group(self, group, where=None, **kwargs):
kind = group._v_attrs.pandas_type
kind = _LEGACY_MAP.get(kind, kind)
handler = self._get_handler(op='read', kind=kind)
- v = handler(group, where, **kwargs)
- #if v is not None:
- # meta = getattr(group._v_attrs,'meta',None)
- # if meta is not None:
- # v.meta = meta
- return v
-
- def _read_series(self, group, where=None):
+ return handler(group, where=where, **kwargs)
+
+ def _read_series(self, group, where=None, **kwargs):
index = self._read_index(group, 'index')
if len(index) > 0:
values = _read_array(group, 'values')
@@ -864,12 +1011,12 @@ def _read_series(self, group, where=None):
name = getattr(group._v_attrs, 'name', None)
return Series(values, index=index, name=name)
- def _read_legacy_series(self, group, where=None):
+ def _read_legacy_series(self, group, where=None, **kwargs):
index = self._read_index_legacy(group, 'index')
values = _read_array(group, 'values')
return Series(values, index=index)
- def _read_legacy_frame(self, group, where=None):
+ def _read_legacy_frame(self, group, where=None, **kwargs):
index = self._read_index_legacy(group, 'index')
columns = self._read_index_legacy(group, 'columns')
values = _read_array(group, 'values')
@@ -894,7 +1041,9 @@ class IndexCol(object):
pos : the position in the pytables
"""
- is_indexable = True
+ is_an_indexable = True
+ is_data_indexable = True
+ is_searchable = False
def __init__(self, values = None, kind = None, typ = None, cname = None, itemsize = None, name = None, axis = None, kind_attr = None, pos = None, **kwargs):
self.values = values
@@ -948,6 +1097,9 @@ def __eq__(self, other):
""" compare 2 col items """
return all([ getattr(self,a,None) == getattr(other,a,None) for a in ['name','cname','axis','pos'] ])
+ def __ne__(self, other):
+ return not self.__eq__(other)
+
def copy(self):
new_self = copy.copy(self)
return new_self
@@ -959,10 +1111,20 @@ def infer(self, table):
new_self.get_attr()
return new_self
- def convert(self, values):
- """ set the values from this selection """
- self.values = Index(_maybe_convert(values[self.cname], self.kind))
-
+ def convert(self, values, nan_rep):
+ """ set the values from this selection: take = take ownership """
+ try:
+ values = values[self.cname]
+ except:
+ pass
+ self.values = Index(_maybe_convert(values, self.kind))
+ return self
+
+ def take_data(self):
+ """ return the values & release the memory """
+ self.values, values = None, self.values
+ return values
+
@property
def attrs(self):
return self.table._v_attrs
@@ -1006,7 +1168,7 @@ def validate_col(self, itemsize = None):
# validate this column for string truncation (or reset to the max size)
dtype = getattr(self,'dtype',None)
- if self.kind == 'string' or (dtype is not None and dtype.startswith('string')):
+ if self.kind == 'string':
c = self.col
if c is not None:
@@ -1044,14 +1206,31 @@ class DataCol(IndexCol):
data : the actual data
cname : the column name in the table to hold the data (typically values)
"""
- is_indexable = False
+ is_an_indexable = False
+ is_data_indexable = False
+ is_searchable = False
@classmethod
- def create_for_block(cls, i, **kwargs):
+ def create_for_block(cls, i = None, name = None, cname = None, version = None, **kwargs):
""" return a new datacol with the block i """
- return cls(name = 'values_%d' % i, cname = 'values_block_%d' % i, **kwargs)
- def __init__(self, values = None, kind = None, typ = None, cname = None, data = None, **kwargs):
+ if cname is None:
+ cname = name or 'values_block_%d' % i
+ if name is None:
+ name = cname
+
+ # prior to 0.10.1, we named values blocks like values_block_0 and the name values_0
+ try:
+ if version[0] == 0 and version[1] <= 10 and version[2] == 0:
+ m = re.search("values_block_(\d+)",name)
+ if m:
+ name = "values_%s" % m.groups()[0]
+ except:
+ pass
+
+ return cls(name = name, cname = cname, **kwargs)
+
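The legacy-name remapping in `create_for_block` above can be illustrated with a standalone sketch (the `legacy_name` helper is illustrative, not part of the patch):

```python
import re

def legacy_name(name, version):
    """Map a pre-0.10.1 block name like 'values_block_0' to 'values_0'."""
    if version[0] == 0 and version[1] <= 10 and version[2] == 0:
        m = re.search(r"values_block_(\d+)", name)
        if m:
            return "values_%s" % m.groups()[0]
    return name

print(legacy_name("values_block_0", (0, 10, 0)))  # values_0
print(legacy_name("values_block_0", (0, 10, 1)))  # values_block_0 (unchanged)
```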
+ def __init__(self, values = None, kind = None, typ = None, cname = None, data = None, block = None, **kwargs):
super(DataCol, self).__init__(values = values, kind = kind, typ = typ, cname = cname, **kwargs)
self.dtype = None
self.dtype_attr = "%s_dtype" % self.name
@@ -1064,17 +1243,100 @@ def __eq__(self, other):
""" compare 2 col items """
return all([ getattr(self,a,None) == getattr(other,a,None) for a in ['name','cname','dtype','pos'] ])
- def set_data(self, data):
+ def set_data(self, data, dtype = None):
self.data = data
if data is not None:
- if self.dtype is None:
+ if dtype is not None:
+ self.dtype = dtype
+ self.set_kind()
+ elif self.dtype is None:
self.dtype = data.dtype.name
+ self.set_kind()
def take_data(self):
""" return the data & release the memory """
self.data, data = None, self.data
return data
+ def set_kind(self):
+ # set my kind if we can
+ if self.dtype is not None:
+ if self.dtype.startswith('string'):
+ self.kind = 'string'
+ elif self.dtype.startswith('float'):
+ self.kind = 'float'
+ elif self.dtype.startswith('int'):
+ self.kind = 'integer'
+ elif self.dtype.startswith('date'):
+ self.kind = 'datetime'
+
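The dtype-to-kind mapping in `set_kind` above boils down to an ordered prefix match (note `string` must be checked before `date` so `datetime64` is not shadowed); a minimal standalone sketch (the `infer_kind` name is hypothetical):

```python
def infer_kind(dtype_name):
    """Mirror DataCol.set_kind: map a dtype name prefix to a storage kind."""
    # order matters: mirrors the elif chain in the patch
    for prefix, kind in [('string', 'string'), ('float', 'float'),
                         ('int', 'integer'), ('date', 'datetime')]:
        if dtype_name.startswith(prefix):
            return kind
    return None

print(infer_kind('float64'))     # float
print(infer_kind('datetime64'))  # datetime
```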
+ def set_atom(self, block, existing_col, min_itemsize, nan_rep, **kwargs):
+ """ create and setup my atom from the block b """
+
+ self.values = list(block.items)
+ dtype = block.dtype.name
+ inferred_type = lib.infer_dtype(block.values.flatten())
+
+ if inferred_type == 'datetime64':
+ self.set_atom_datetime64(block)
+ elif inferred_type == 'date':
+ raise NotImplementedError("date is not implemented as a table column")
+ elif inferred_type == 'unicode':
+ raise NotImplementedError("unicode is not implemented as a table column")
+
+ ### this is basically a catchall; if, say, a datetime64 has nans then it will end up here ###
+ elif inferred_type == 'string' or dtype == 'object':
+ self.set_atom_string(block, existing_col, min_itemsize, nan_rep)
+ else:
+ self.set_atom_data(block)
+
+ return self
+
+ def get_atom_string(self, block, itemsize):
+ return _tables().StringCol(itemsize = itemsize, shape = block.shape[0])
+
+ def set_atom_string(self, block, existing_col, min_itemsize, nan_rep):
+ # fill nan items with the nan representation
+ data = block.fillna(nan_rep).values
+
+ # itemsize is the maximum length of a string (along any dimension)
+ itemsize = lib.max_len_string_array(data.flatten())
+
+ # specified min_itemsize?
+ if isinstance(min_itemsize, dict):
+ min_itemsize = int(min_itemsize.get(self.name) or min_itemsize.get('values') or 0)
+ itemsize = max(min_itemsize or 0,itemsize)
+
+ # check for column in the values conflicts
+ if existing_col is not None:
+ eci = existing_col.validate_col(itemsize)
+ if eci > itemsize:
+ itemsize = eci
+
+ self.itemsize = itemsize
+ self.kind = 'string'
+ self.typ = self.get_atom_string(block, itemsize)
+ self.set_data(self.convert_string_data(data, itemsize))
+
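The `min_itemsize` resolution in `set_atom_string` above (per-column key, then the `'values'` fallback, then the data's own maximum) can be sketched standalone (the `resolve_itemsize` helper is illustrative):

```python
def resolve_itemsize(data_max, min_itemsize, colname):
    """Pick the string column width, as set_atom_string does (sketch)."""
    if isinstance(min_itemsize, dict):
        # per-column minimum wins over the generic 'values' key
        min_itemsize = int(min_itemsize.get(colname) or
                           min_itemsize.get('values') or 0)
    return max(min_itemsize or 0, data_max)

print(resolve_itemsize(5, {'values': 10}, 'A'))  # 10
print(resolve_itemsize(15, {'A': 8}, 'A'))       # 15
```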
+ def convert_string_data(self, data, itemsize):
+ return data.astype('S%s' % itemsize)
+
+ def get_atom_data(self, block):
+ return getattr(_tables(),"%sCol" % self.kind.capitalize())(shape = block.shape[0])
+
+ def set_atom_data(self, block):
+ self.kind = block.dtype.name
+ self.typ = self.get_atom_data(block)
+ self.set_data(block.values.astype(self.typ._deftype))
+
+ def get_atom_datetime64(self, block):
+ return _tables().Int64Col(shape = block.shape[0])
+
+ def set_atom_datetime64(self, block):
+ self.kind = 'datetime64'
+ self.typ = self.get_atom_datetime64(block)
+ self.set_data(block.values.view('i8'),'datetime64')
+
@property
def shape(self):
return getattr(self.data,'shape',None)
@@ -1099,21 +1361,42 @@ def validate_attr(self, append):
raise Exception("appended items dtype do not match existing items dtype"
" in table!")
- def convert(self, values):
+ def convert(self, values, nan_rep):
""" set the data from this selection (and convert to the correct dtype if we can) """
- self.set_data(values[self.cname])
+ try:
+ values = values[self.cname]
+ except:
+ pass
+ self.set_data(values)
# convert to the correct dtype
if self.dtype is not None:
- try:
- self.data = self.data.astype(self.dtype)
- except:
- self.data = self.data.astype('O')
+
+ # reverse converts
+ if self.dtype == 'datetime64':
+ self.data = np.asarray(self.data, dtype='M8[ns]')
+ elif self.dtype == 'date':
+ self.data = np.array([date.fromtimestamp(v) for v in self.data], dtype=object)
+ elif self.dtype == 'datetime':
+ self.data = np.array([datetime.fromtimestamp(v) for v in self.data],
+ dtype=object)
+ else:
+
+ try:
+ self.data = self.data.astype(self.dtype)
+ except:
+ self.data = self.data.astype('O')
+
+ # convert nans
+ if self.kind == 'string':
+ self.data = lib.array_replace_from_nan_rep(self.data.flatten(), nan_rep).reshape(self.data.shape)
+ return self
def get_attr(self):
""" get the data for this colummn """
self.values = getattr(self.attrs,self.kind_attr,None)
self.dtype = getattr(self.attrs,self.dtype_attr,None)
+ self.set_kind()
def set_attr(self):
""" set the data for this colummn """
@@ -1121,6 +1404,23 @@ def set_attr(self):
if self.dtype is not None:
setattr(self.attrs,self.dtype_attr,self.dtype)
+class DataIndexableCol(DataCol):
+ """ represent a data column that can be indexed """
+ is_data_indexable = True
+
+ @property
+ def is_searchable(self):
+ return self.kind == 'string'
+
+ def get_atom_string(self, block, itemsize):
+ return _tables().StringCol(itemsize = itemsize)
+
+ def get_atom_data(self, block):
+ return getattr(_tables(),"%sCol" % self.kind.capitalize())()
+
+ def get_atom_datetime64(self, block):
+ return _tables().Int64Col()
+
class Table(object):
""" represent a table:
facilitate read/write of various types of tables
@@ -1137,22 +1437,37 @@ class Table(object):
These are attributes that are store in the main table node, they are necessary
to recreate these tables when read back in.
- index_axes: a list of tuples of the (original indexing axis and index column)
+ index_axes : a list of tuples of the (original indexing axis and index column)
non_index_axes: a list of tuples of the (original index axis and columns on a non-indexing axis)
- values_axes : a list of the columns which comprise the data of this table
+ values_axes : a list of the columns which comprise the data of this table
+ data_columns : a list of the columns that we are allowing indexing (these become single columns in values_axes)
+ nan_rep : the string to use for nan representations for string objects
+ levels : the names of levels
"""
table_type = None
obj_type = None
ndim = None
+ levels = 1
def __init__(self, parent, group, **kwargs):
self.parent = parent
self.group = group
- self.version = getattr(group._v_attrs,'pandas_version',None)
+
+ # compute our version
+ version = getattr(group._v_attrs,'pandas_version',None)
+ try:
+ self.version = tuple([ int(x) for x in version.split('.') ])
+ if len(self.version) == 2:
+ self.version = self.version + (0,)
+ except:
+ self.version = (0,0,0)
+
self.index_axes = []
self.non_index_axes = []
self.values_axes = []
+ self.data_columns = []
+ self.nan_rep = None
self.selection = None
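The version handling in `Table.__init__` above normalizes the stored `pandas_version` attribute to a 3-tuple so it can be compared field-by-field; a standalone sketch (the `parse_version` name is hypothetical):

```python
def parse_version(version):
    """Parse a 'pandas_version' attribute like '0.10.1' into a 3-tuple."""
    try:
        v = tuple(int(x) for x in version.split('.'))
        if len(v) == 2:
            # pad two-part versions, e.g. '0.10' -> (0, 10, 0)
            v = v + (0,)
        return v
    except (AttributeError, ValueError):
        # missing or malformed attribute -> treat as very old
        return (0, 0, 0)

print(parse_version("0.10.1"))  # (0, 10, 1)
print(parse_version(None))      # (0, 0, 0)
```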
@property
@@ -1166,7 +1481,12 @@ def pandas_type(self):
def __repr__(self):
""" return a pretty representatgion of myself """
self.infer_axes()
- return "%s (typ->%s,nrows->%s,indexers->[%s])" % (self.pandas_type,self.table_type_short,self.nrows,','.join([ a.name for a in self.index_axes ]))
+ dc = ",dc->[%s]" % ','.join(self.data_columns) if len(self.data_columns) else ''
+ return "%s (typ->%s,nrows->%s,indexers->[%s]%s)" % (self.pandas_type,
+ self.table_type_short,
+ self.nrows,
+ ','.join([ a.name for a in self.index_axes ]),
+ dc)
__str__ = __repr__
@@ -1190,6 +1510,11 @@ def validate(self, other):
def nrows(self):
return getattr(self.table,'nrows',None)
+ @property
+ def nrows_expected(self):
+ """ based on our axes, compute the expected nrows """
+ return np.prod([ i.cvalues.shape[0] for i in self.index_axes ])
+
@property
def table(self):
""" return the table group """
@@ -1242,7 +1567,12 @@ def data_orientation(self):
def queryables(self):
""" return a dict of the kinds allowable columns for this object """
- return dict([ (a.cname,a.kind) for a in self.index_axes ] + [ (self.obj_type._AXIS_NAMES[axis],None) for axis, values in self.non_index_axes ])
+
+ # compute the values_axes queryables
+ return dict([ (a.cname,a.kind) for a in self.index_axes ] +
+ [ (self.obj_type._AXIS_NAMES[axis],None) for axis, values in self.non_index_axes ] +
+ [ (v.cname,v.kind) for v in self.values_axes if v.name in set(self.data_columns) ]
+ )
def index_cols(self):
""" return a list of my index cols """
@@ -1258,12 +1588,15 @@ def set_attrs(self):
self.attrs.index_cols = self.index_cols()
self.attrs.values_cols = self.values_cols()
self.attrs.non_index_axes = self.non_index_axes
+ self.attrs.data_columns = self.data_columns
+ self.attrs.nan_rep = self.nan_rep
+ self.attrs.levels = self.levels
def validate_version(self, where = None):
""" are we trying to operate on an old version? """
if where is not None:
- if self.version is None or float(self.version) < 0.1:
- warnings.warn("where criteria is being ignored as we this version is too old (or not-defined) [%s]" % self.version, IncompatibilityWarning)
+ if self.version[0] <= 0 and self.version[1] <= 10 and self.version[2] < 1:
+ warnings.warn("where criteria is being ignored as we this version is too old (or not-defined) [%s]" % '.'.join([ str(x) for x in self.version ]), IncompatibilityWarning)
@property
def indexables(self):
@@ -1276,21 +1609,28 @@ def indexables(self):
# index columns
self._indexables.extend([ IndexCol(name = name, axis = axis, pos = i) for i, (axis, name) in enumerate(self.attrs.index_cols) ])
- # data columns
+ # values columns
+ dc = set(self.data_columns)
base_pos = len(self._indexables)
- self._indexables.extend([ DataCol.create_for_block(i = i, pos = base_pos + i ) for i, c in enumerate(self.attrs.values_cols) ])
+ def f(i, c):
+ klass = DataCol
+ if c in dc:
+ klass = DataIndexableCol
+ return klass.create_for_block(i = i, name = c, pos = base_pos + i, version = self.version)
+
+ self._indexables.extend([ f(i,c) for i, c in enumerate(self.attrs.values_cols) ])
return self._indexables
def create_index(self, columns = None, optlevel = None, kind = None):
"""
Create a pytables index on the specified columns
- note: cannot index Time64Col() currently; PyTables must be >= 2.3.1
+ note: cannot index Time64Col() currently; PyTables must be >= 2.3
Parameters
----------
- columns : None or list_like (the indexers to index)
+ columns : False (don't create an index), True (index all data-indexable columns), None or list_like (the indexers to index)
optlevel: optimization level (defaults to 6)
kind : kind of index (defaults to 'medium')
@@ -1301,9 +1641,11 @@ def create_index(self, columns = None, optlevel = None, kind = None):
"""
if not self.infer_axes(): return
+ if columns is False: return
- if columns is None:
- columns = [ self.index_axes[0].name ]
+ # index all indexables and data_columns
+ if columns is None or columns is True:
+ columns = [ a.cname for a in self.axes if a.is_data_indexable ]
if not isinstance(columns, (tuple,list)):
columns = [ columns ]
@@ -1338,7 +1680,7 @@ def create_index(self, columns = None, optlevel = None, kind = None):
if not v.is_indexed:
v.createIndex(**kw)
- def read_axes(self, where):
+ def read_axes(self, where, **kwargs):
""" create and return the axes sniffed from the table: return boolean for success """
# validate the version
@@ -1348,12 +1690,12 @@ def read_axes(self, where):
if not self.infer_axes(): return False
# create the selection
- self.selection = Selection(self, where)
+ self.selection = Selection(self, where = where, **kwargs)
values = self.selection.select()
# convert the data
for a in self.axes:
- a.convert(values)
+ a.convert(values, nan_rep = self.nan_rep)
return True
@@ -1365,28 +1707,42 @@ def infer_axes(self):
if table is None:
return False
- self.index_axes, self.values_axes = [ a.infer(self.table) for a in self.indexables if a.is_indexable ], [ a.infer(self.table) for a in self.indexables if not a.is_indexable ]
self.non_index_axes = getattr(self.attrs,'non_index_axes',None) or []
-
+ self.data_columns = getattr(self.attrs,'data_columns',None) or []
+ self.nan_rep = getattr(self.attrs,'nan_rep',None)
+ self.levels = getattr(self.attrs,'levels',None) or []
+ self.index_axes = [ a.infer(self.table) for a in self.indexables if a.is_an_indexable ]
+ self.values_axes = [ a.infer(self.table) for a in self.indexables if not a.is_an_indexable ]
return True
- def get_data_blocks(self, obj):
- """ return the data blocks for this obj """
- return obj._data.blocks
+ def get_object(self, obj):
+ """ return the data for this obj """
+ return obj
- def create_axes(self, axes, obj, validate = True, min_itemsize = None):
+ def create_axes(self, axes, obj, validate = True, nan_rep = None, data_columns = None, min_itemsize = None, **kwargs):
""" create and return the axes
legacy tables create an indexable column, indexable index, non-indexable fields
+
+ Parameters:
+ -----------
+ axes: a list of the axes in order to create (names or numbers of the axes)
+ obj : the object to create axes on
+ validate: validate the obj against an existing object already written
+ min_itemsize: a dict of the min size for a column in bytes
+ nan_rep : a value to use for string column nan_rep
+ data_columns : a list of columns that we want to create separately to allow indexing
"""
# map axes to numbers
axes = [ obj._get_axis_number(a) for a in axes ]
- # do we have an existing table (if so, use its axes)?
+ # do we have an existing table (if so, use its axes & data_columns)
if self.infer_axes():
existing_table = self.copy()
- axes = [ a.axis for a in existing_table.index_axes]
+ axes = [ a.axis for a in existing_table.index_axes]
+ data_columns = existing_table.data_columns
+ nan_rep = existing_table.nan_rep
else:
existing_table = None
@@ -1396,6 +1752,18 @@ def create_axes(self, axes, obj, validate = True, min_itemsize = None):
# create according to the new data
self.non_index_axes = []
+ self.data_columns = []
+
+ # nan_representation
+ if nan_rep is None:
+ nan_rep = 'nan'
+ self.nan_rep = nan_rep
+
+ # convert the objects if we can to better divine dtypes
+ try:
+ obj = obj.convert_objects()
+ except:
+ pass
# create axes to index and non_index
index_axes_map = dict()
@@ -1428,67 +1796,75 @@ def create_axes(self, axes, obj, validate = True, min_itemsize = None):
for a in self.axes:
a.maybe_set_size(min_itemsize = min_itemsize)
- # reindex by our non_index_axes
+ # reindex by our non_index_axes & compute data_columns
for a in self.non_index_axes:
- obj = obj.reindex_axis(a[1], axis = a[0], copy = False)
+ obj = obj.reindex_axis(a[1], axis = a[0], copy = False)
- blocks = self.get_data_blocks(obj)
+ # get our blocks
+ block_obj = self.get_object(obj)
+ blocks = None
+
+ if data_columns is not None and len(self.non_index_axes):
+ axis = self.non_index_axes[0][0]
+ axis_labels = self.non_index_axes[0][1]
+ data_columns = [ c for c in data_columns if c in axis_labels ]
+ if len(data_columns):
+ blocks = block_obj.reindex_axis(Index(axis_labels)-Index(data_columns), axis = axis, copy = False)._data.blocks
+ for c in data_columns:
+ blocks.extend(block_obj.reindex_axis([ c ], axis = axis, copy = False)._data.blocks)
+
+ if blocks is None:
+ blocks = block_obj._data.blocks
# add my values
self.values_axes = []
for i, b in enumerate(blocks):
# the shape of the data column comes from the indexable axes
- shape = b.shape[0]
- values = b.values
+ klass = DataCol
+ name = None
- # a string column
- if b.dtype.name == 'object':
-
- # itemsize is the maximum length of a string (along any dimension)
- itemsize = _itemsize_string_array(values)
+ # we have a data_column
+ if data_columns and len(b.items) == 1 and b.items[0] in data_columns:
+ klass = DataIndexableCol
+ name = b.items[0]
+ self.data_columns.append(name)
- # specified min_itemsize?
- if isinstance(min_itemsize, dict):
- itemsize = max(int(min_itemsize.get('values')),itemsize)
-
- # check for column in the values conflicts
- if existing_table is not None and validate:
- eci = existing_table.values_axes[i].validate_col(itemsize)
- if eci > itemsize:
- itemsize = eci
-
- atom = _tables().StringCol(itemsize = itemsize, shape = shape)
- utype = 'S%s' % itemsize
- kind = 'string'
-
- else:
- atom = getattr(_tables(),"%sCol" % b.dtype.name.capitalize())(shape = shape)
- utype = atom._deftype
- kind = b.dtype.name
-
- # coerce data to this type
try:
- values = values.astype(utype)
+ existing_col = existing_table.values_axes[i] if existing_table is not None and validate else None
+
+ col = klass.create_for_block(i = i, name = name, version = self.version)
+ col.set_atom(block = b,
+ existing_col = existing_col,
+ min_itemsize = min_itemsize,
+ nan_rep = nan_rep,
+ **kwargs)
+ col.set_pos(j)
+
+ self.values_axes.append(col)
+ except (NotImplementedError):
+ raise
except (Exception), detail:
- raise Exception("cannot coerce data type -> [dtype->%s]" % b.dtype.name)
-
- dc = DataCol.create_for_block(i = i, values = list(b.items), kind = kind, typ = atom, data = values, pos = j)
+ raise Exception("cannot find the correct atom type -> [dtype->%s] %s" % (b.dtype.name,str(detail)))
j += 1
- self.values_axes.append(dc)
# validate the axes if we have an existing table
if validate:
self.validate(existing_table)
- def process_axes(self, obj):
+ def process_axes(self, obj, columns=None):
""" process axes filters """
+ # reorder by any non_index_axes & limit to the select columns
+ for axis,labels in self.non_index_axes:
+ if columns is not None:
+ labels = Index(labels) & Index(columns)
+ obj = obj.reindex_axis(labels,axis=axis,copy=False)
+
def reindex(obj, axis, filt, ordered):
- axis_name = obj._get_axis_name(axis)
ordd = ordered & filt
ordd = sorted(ordered.get_indexer(ordd))
- return obj.reindex_axis(ordered.take(ordd), axis = obj._get_axis_number(axis_name), copy = False)
+ return obj.reindex_axis(ordered.take(ordd), axis = obj._get_axis_number(axis), copy = False)
# apply the selection filters (but keep in the same order)
if self.selection.filter:
@@ -1497,21 +1873,23 @@ def reindex(obj, axis, filt, ordered):
return obj
- def create_description(self, compression = None, complevel = None):
+ def create_description(self, complib = None, complevel = None, fletcher32 = False, expectedrows = None):
""" create the description of the table from the axes & values """
- d = { 'name' : 'table' }
+ # expected rows estimate
+ if expectedrows is None:
+ expectedrows = max(self.nrows_expected,10000)
+ d = dict( name = 'table', expectedrows = expectedrows )
# description from the axes & values
d['description'] = dict([ (a.cname,a.typ) for a in self.axes ])
- if compression:
- complevel = self.complevel
+ if complib:
if complevel is None:
- complevel = 9
- filters = _tables().Filters(complevel=complevel,
- complib=compression,
- fletcher32=self.fletcher32)
+ complevel = self.complevel or 9
+ filters = _tables().Filters(complevel = complevel,
+ complib = complib,
+ fletcher32 = fletcher32 or self.fletcher32)
d['filters'] = filters
elif self.filters is not None:
d['filters'] = self.filters
@@ -1521,6 +1899,41 @@ def create_description(self, compression = None, complevel = None):
def read(self, **kwargs):
raise NotImplementedError("cannot read on an abstract table: subclasses should implement")
+ def read_coordinates(self, where=None, **kwargs):
+ """ select coordinates (row numbers) from a table; return the coordinates object """
+
+ # validate the version
+ self.validate_version(where)
+
+ # infer the data kind
+ if not self.infer_axes(): return False
+
+ # create the selection
+ self.selection = Selection(self, where = where, **kwargs)
+ return Coordinates(self.selection.select_coords(), group = self.group, where = where)
+
+ def read_column(self, column, **kwargs):
+ """ return a single column from the table, generally only indexables are interesting """
+
+ # validate the version
+ self.validate_version()
+
+ # infer the data kind
+ if not self.infer_axes(): return False
+
+ # find the axes
+ for a in self.axes:
+ if column == a.name:
+
+ if not a.is_data_indexable:
+ raise ValueError("column [%s] can not be extracted individually; it is not data indexable" % column)
+
+ # column must be an indexable or a data column
+ c = getattr(self.table.cols,column)
+ return Categorical.from_array(a.convert(c[:], nan_rep = self.nan_rep).take_data()).levels
+
+ raise KeyError("column [%s] not found in the table" % column)
+
def write(self, **kwargs):
raise NotImplementedError("cannot write on an abstract table")
@@ -1566,11 +1979,11 @@ class LegacyTable(Table):
def write(self, **kwargs):
raise Exception("write operations are not allowed on legacy tables!")
- def read(self, where=None):
+ def read(self, where=None, columns=None, **kwargs):
""" we have n indexable columns, with an arbitrary number of data axes """
- if not self.read_axes(where): return None
+ if not self.read_axes(where=where, **kwargs): return None
factors = [ Categorical.from_array(a.values) for a in self.index_axes ]
levels = [ f.levels for f in factors ]
@@ -1639,12 +2052,8 @@ def read(self, where=None):
else:
wp = concat(objs, axis = 0, verify_integrity = True)
- # reorder by any non_index_axes
- for axis,labels in self.non_index_axes:
- wp = wp.reindex_axis(labels,axis=axis,copy=False)
-
# apply the selection filters & axis orderings
- wp = self.process_axes(wp)
+ wp = self.process_axes(wp, columns=columns)
return wp
@@ -1665,8 +2074,9 @@ class AppendableTable(LegacyTable):
_indexables = None
table_type = 'appendable'
- def write(self, axes, obj, append=False, compression=None,
- complevel=None, min_itemsize = None, **kwargs):
+ def write(self, axes, obj, append=False, complib=None,
+ complevel=None, fletcher32=None, min_itemsize = None, chunksize = 50000,
+ expectedrows = None, **kwargs):
# create the table if it doesn't exist (or get it if it does)
if not append:
@@ -1674,12 +2084,15 @@ def write(self, axes, obj, append=False, compression=None,
self.handle.removeNode(self.group, 'table')
# create the axes
- self.create_axes(axes = axes, obj = obj, validate = append, min_itemsize = min_itemsize)
+ self.create_axes(axes = axes, obj = obj, validate = append, min_itemsize = min_itemsize, **kwargs)
if 'table' not in self.group:
# create the table
- options = self.create_description(compression = compression, complevel = complevel)
+ options = self.create_description(complib = complib,
+ complevel = complevel,
+ fletcher32 = fletcher32,
+ expectedrows = expectedrows)
# set the table attributes
self.set_attrs()
@@ -1695,10 +2108,9 @@ def write(self, axes, obj, append=False, compression=None,
a.validate_and_set(table, append)
# add the rows
- self.write_data()
- self.handle.flush()
+ self.write_data(chunksize)
- def write_data(self):
+ def write_data(self, chunksize):
""" fast writing of data: requires specific cython routines each axis shape """
# create the masks & values
@@ -1706,13 +2118,8 @@ def write_data(self):
for a in self.values_axes:
# figure the mask: only do if we can successfully process this column, otherwise ignore the mask
- try:
- mask = np.isnan(a.data).all(axis=0)
- masks.append(mask.astype('u1'))
- except:
-
- # need to check for Nan in a non-numeric type column!!!
- masks.append(np.zeros((a.data.shape[1:]), dtype = 'u1'))
+ mask = com.isnull(a.data).all(axis=0)
+ masks.append(mask.astype('u1'))
# consolidate masks
mask = masks[0]
@@ -1720,21 +2127,42 @@ def write_data(self):
m = mask & m
# the arguments
- args = [ a.cvalues for a in self.index_axes ]
- values = [ a.data for a in self.values_axes ]
+ indexes = [ a.cvalues for a in self.index_axes ]
+ search = np.array([ a.is_searchable for a in self.values_axes ]).astype('u1')
+ values = [ a.take_data() for a in self.values_axes ]
+
+ # write the chunks
+ rows = self.nrows_expected
+ chunks = int(rows / chunksize) + 1
+ for i in xrange(chunks):
+ start_i = i*chunksize
+ end_i = min((i+1)*chunksize,rows)
+
+ self.write_data_chunk(indexes = [ a[start_i:end_i] for a in indexes ],
+ mask = mask[start_i:end_i],
+ search = search,
+ values = [ v[:,start_i:end_i] for v in values ])
+
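The chunking arithmetic in `write_data` above slices the expected rows into `int(rows / chunksize) + 1` pieces; a standalone sketch (the trailing empty chunk is skipped here for clarity, where the original simply passes empty slices through):

```python
def chunk_bounds(rows, chunksize):
    """Yield (start, end) slices covering `rows` items, as write_data does."""
    chunks = int(rows / chunksize) + 1
    for i in range(chunks):
        start = i * chunksize
        end = min((i + 1) * chunksize, rows)
        if start < end:  # skip the empty trailing chunk
            yield start, end

print(list(chunk_bounds(120000, 50000)))
# [(0, 50000), (50000, 100000), (100000, 120000)]
```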
+ def write_data_chunk(self, indexes, mask, search, values):
# get our function
try:
func = getattr(lib,"create_hdf_rows_%sd" % self.ndim)
- args.append(mask)
- args.append(values)
+ args = list(indexes)
+ args.extend([ mask, search, values ])
rows = func(*args)
+ except (Exception), detail:
+ raise Exception("cannot create row-data -> %s" % str(detail))
+
+ try:
if len(rows):
self.table.append(rows)
+ self.table.flush()
except (Exception), detail:
raise Exception("tables cannot write this data -> %s" % str(detail))
- def delete(self, where = None):
+ def delete(self, where = None, **kwargs):
# delete all rows (and return the nrows)
if where is None or not len(where):
@@ -1747,7 +2175,7 @@ def delete(self, where = None):
# create the selection
table = self.table
- self.selection = Selection(self, where)
+ self.selection = Selection(self, where, **kwargs)
values = self.selection.select_coords()
# delete the rows in reverse order
@@ -1779,7 +2207,7 @@ def delete(self, where = None):
table.removeRows(start = rows[rows.index[0]], stop = rows[rows.index[-1]]+1)
pg = g
- self.handle.flush()
+ self.table.flush()
# return the number of rows removed
return ln
@@ -1794,42 +2222,71 @@ class AppendableFrameTable(AppendableTable):
def is_transposed(self):
return self.index_axes[0].axis == 1
- def get_data_blocks(self, obj):
+ def get_object(self, obj):
""" these are written transposed """
if self.is_transposed:
obj = obj.T
- return obj._data.blocks
+ return obj
- def read(self, where=None):
+ def read(self, where=None, columns=None, **kwargs):
- if not self.read_axes(where): return None
+ if not self.read_axes(where=where, **kwargs): return None
index = self.index_axes[0].values
frames = []
for a in self.values_axes:
- columns = Index(a.values)
+ cols = Index(a.values)
if self.is_transposed:
values = a.cvalues
- index_ = columns
- columns_ = index
+ index_ = cols
+ cols_ = Index(index)
else:
values = a.cvalues.T
- index_ = index
- columns_ = columns
+ index_ = Index(index)
+ cols_ = cols
+
- block = make_block(values, columns_, columns_)
- mgr = BlockManager([ block ], [ columns_, index_ ])
+ # if we have a DataIndexableCol, its shape will only be 1 dim
+ if values.ndim == 1:
+ values = values.reshape(1,values.shape[0])
+
+ block = make_block(values, cols_, cols_)
+ mgr = BlockManager([ block ], [ cols_, index_ ])
frames.append(DataFrame(mgr))
- df = concat(frames, axis = 1, verify_integrity = True)
- # sort the indicies & reorder the columns
- for axis,labels in self.non_index_axes:
- df = df.reindex_axis(labels,axis=axis,copy=False)
+ if len(frames) == 1:
+ df = frames[0]
+ else:
+ df = concat(frames, axis = 1, verify_integrity = True)
# apply the selection filters & axis orderings
- df = self.process_axes(df)
+ df = self.process_axes(df, columns=columns)
+
+ return df
+
+class AppendableMultiFrameTable(AppendableFrameTable):
+ """ a frame with a multi-index """
+ table_type = 'appendable_multiframe'
+ obj_type = DataFrame
+ ndim = 2
+
+ @property
+ def table_type_short(self):
+ return 'appendable_multi'
+ def write(self, obj, data_columns = None, **kwargs):
+ if data_columns is None:
+ data_columns = []
+ for n in obj.index.names:
+ if n not in data_columns:
+ data_columns.insert(0,n)
+ self.levels = obj.index.names
+ return super(AppendableMultiFrameTable, self).write(obj = obj.reset_index(), data_columns = data_columns, **kwargs)
+
+ def read(self, *args, **kwargs):
+ df = super(AppendableMultiFrameTable, self).read(*args, **kwargs)
+ df.set_index(self.levels, inplace=True)
return df
class AppendablePanelTable(AppendableTable):
@@ -1838,11 +2295,11 @@ class AppendablePanelTable(AppendableTable):
ndim = 3
obj_type = Panel
- def get_data_blocks(self, obj):
+ def get_object(self, obj):
""" these are written transposed """
if self.is_transposed:
obj = obj.transpose(*self.data_orientation)
- return obj._data.blocks
+ return obj
@property
def is_transposed(self):
@@ -1856,7 +2313,8 @@ class AppendableNDimTable(AppendablePanelTable):
# table maps
_TABLE_MAP = {
- 'appendable_frame' : AppendableFrameTable,
+ 'appendable_frame' : AppendableFrameTable,
+ 'appendable_multiframe' : AppendableMultiFrameTable,
'appendable_panel' : AppendablePanelTable,
'appendable_ndim' : AppendableNDimTable,
'worm' : WORMTable,
@@ -1891,10 +2349,6 @@ def create_table(parent, group, typ = None, **kwargs):
return _TABLE_MAP.get(tt)(parent, group, **kwargs)
-def _itemsize_string_array(arr):
- """ return the maximum size of elements in a strnig array """
- return max([ str_len(arr[v]).max() for v in range(arr.shape[0]) ])
-
def _convert_index(index):
if isinstance(index, DatetimeIndex):
converted = index.asi8
@@ -2075,8 +2529,8 @@ class Term(object):
"""
- _ops = ['<=','<','>=','>','!=','=']
- _search = re.compile("^(?P<field>\w+)(?P<op>%s)(?P<value>.+)$" % '|'.join(_ops))
+ _ops = ['<=','<','>=','>','!=','==','=']
+ _search = re.compile("^\s*(?P<field>\w+)\s*(?P<op>%s)\s*(?P<value>.+)\s*$" % '|'.join(_ops))
def __init__(self, field, op = None, value = None, queryables = None):
self.field = None
@@ -2135,6 +2589,10 @@ def __init__(self, field, op = None, value = None, queryables = None):
if self.field is None or self.op is None or self.value is None:
raise Exception("Could not create this term [%s]" % str(self))
+ # = vs ==
+ if self.op == '==':
+ self.op = '='
+
# we have valid conditions
if self.op in ['>','>=','<','<=']:
if hasattr(self.value,'__iter__') and len(self.value) > 1:
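The whitespace-tolerant regex and the `==` to `=` folding added above can be sketched as a standalone parser (a hypothetical mimic for illustration, not the actual `Term` class):

```python
import re

# Operator alternation: longer operators ('<=', '>=', '!=', '==') must
# precede their one-character prefixes so the regex matches them first.
_ops = ['<=', '<', '>=', '>', '!=', '==', '=']
_search = re.compile(r"^\s*(?P<field>\w+)\s*(?P<op>%s)\s*(?P<value>.+)\s*$" % '|'.join(_ops))

def parse_term(expr):
    """Parse 'field op value', folding '==' into '=' as pytables expects."""
    m = _search.match(expr)
    if m is None:
        raise ValueError("could not parse term: %r" % expr)
    field, op, value = m.group('field'), m.group('op'), m.group('value')
    if op == '==':
        op = '='
    return field, op, value.strip()

print(parse_term('major_axis > 20121114'))   # ('major_axis', '>', '20121114')
print(parse_term('string==foo'))             # ('string', '=', 'foo')
```

This is why `'major_axis> 20121114'`, `'major_axis >20121114'`, and `'major_axis > 20121114'` all pass in the updated `test_terms` below.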
@@ -2206,25 +2664,37 @@ def eval(self):
raise Exception("passing a filterable condition to a non-table indexer [%s]" % str(self))
def convert_value(self, v):
-
- #### a little hacky here, need to really figure out what we should convert ####x
- if self.field == 'index' or self.field == 'major_axis':
- if self.kind == 'datetime64' :
- return [lib.Timestamp(v).value, None]
- elif isinstance(v, datetime) or hasattr(v,'timetuple') or self.kind == 'date':
- return [time.mktime(v.timetuple()), None]
- elif self.kind == 'integer':
- v = int(float(v))
- return [v, v]
- elif self.kind == 'float':
- v = float(v)
- return [v, v]
+ """ convert the expression that is in the term to something that is accepted by pytables """
+
+ if self.kind == 'datetime64' :
+ return [lib.Timestamp(v).value, None]
+ elif isinstance(v, datetime) or hasattr(v,'timetuple') or self.kind == 'date':
+ return [time.mktime(v.timetuple()), None]
+ elif self.kind == 'integer':
+ v = int(float(v))
+ return [v, v]
+ elif self.kind == 'float':
+ v = float(v)
+ return [v, v]
elif not isinstance(v, basestring):
return [str(v), None]
# string quoting
return ["'" + v + "'", v]
+class Coordinates(object):
+ """ holds a returned coordinates list, useful to select the same rows from different tables
+
+ coordinates : holds the array of coordinates
+ group : the source group
+ where : the source where
+ """
+
+ def __init__(self, values, group, where, **kwargs):
+ self.values = values
+ self.group = group
+ self.where = where
+
class Selection(object):
"""
Carries out a selection operation on a tables.Table object.
@@ -2233,24 +2703,33 @@ class Selection(object):
----------
table : a Table object
where : list of Terms (or convertable to)
+    start, stop : indices to start and/or stop the selection
"""
- def __init__(self, table, where=None):
+ def __init__(self, table, where=None, start=None, stop=None, **kwargs):
self.table = table
self.where = where
+ self.start = start
+ self.stop = stop
self.condition = None
self.filter = None
- self.terms = self.generate(where)
-
- # create the numexpr & the filter
- if self.terms:
- conds = [ t.condition for t in self.terms if t.condition is not None ]
- if len(conds):
- self.condition = "(%s)" % ' & '.join(conds)
- self.filter = []
- for t in self.terms:
- if t.filter is not None:
- self.filter.append(t.filter)
+ self.terms = None
+ self.coordinates = None
+
+ if isinstance(where, Coordinates):
+ self.coordinates = where.values
+ else:
+ self.terms = self.generate(where)
+
+ # create the numexpr & the filter
+ if self.terms:
+ conds = [ t.condition for t in self.terms if t.condition is not None ]
+ if len(conds):
+ self.condition = "(%s)" % ' & '.join(conds)
+ self.filter = []
+ for t in self.terms:
+ if t.filter is not None:
+ self.filter.append(t.filter)
def generate(self, where):
""" where can be a : dict,list,tuple,string """
@@ -2259,9 +2738,12 @@ def generate(self, where):
if not isinstance(where, (list,tuple)):
where = [ where ]
else:
- # do we have all list/tuple
+
+            # make this a list if we think we only have a single term & no operands inside any terms
if not any([ isinstance(w, (list,tuple,Term)) for w in where ]):
- where = [ where ]
+
+ if not any([ isinstance(w,basestring) and Term._search.match(w) for w in where ]):
+ where = [ where ]
queryables = self.table.queryables()
return [ Term(c, queryables = queryables) for c in where ]
@@ -2271,15 +2753,19 @@ def select(self):
generate the selection
"""
if self.condition is not None:
- return self.table.table.readWhere(self.condition)
- else:
- return self.table.table.read()
+ return self.table.table.readWhere(self.condition, start=self.start, stop=self.stop)
+ elif self.coordinates is not None:
+ return self.table.table.readCoordinates(self.coordinates)
+ return self.table.table.read(start=self.start,stop=self.stop)
def select_coords(self):
"""
generate the selection
"""
- return self.table.table.getWhereList(self.condition, sort = True)
+ if self.condition is None:
+ return np.arange(self.table.nrows)
+
+ return self.table.table.getWhereList(self.condition, start=self.start, stop=self.stop, sort = True)
def _get_index_factory(klass):
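The new `Coordinates` holder exists so one selection can drive reads from several row-aligned tables. A minimal numpy sketch of that pattern (toy arrays standing in for tables, not the HDFStore API):

```python
import numpy as np

# Two row-aligned "tables"; compute coordinates once from the selector
# table, then reuse them to read the matching rows from both.
df1_B = np.array([-1.0, 2.0, 0.5, -3.0, 4.0])
df2_X = np.array([10.0, 20.0, 30.0, 40.0, 50.0])

coords = np.nonzero(df1_B > 0)[0]   # rows where B > 0
rows1 = df1_B[coords]
rows2 = df2_X[coords]               # same rows, second table
print(coords.tolist())              # [1, 2, 4]
print(rows2.tolist())               # [20.0, 30.0, 50.0]
```

With no condition at all, `select_coords` above falls back to `np.arange(nrows)`, i.e. every row.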
diff --git a/pandas/io/tests/legacy_0.10.h5 b/pandas/io/tests/legacy_0.10.h5
new file mode 100644
index 0000000000000..b1439ef16361a
Binary files /dev/null and b/pandas/io/tests/legacy_0.10.h5 differ
diff --git a/pandas/io/tests/legacy_table.h5 b/pandas/io/tests/legacy_table.h5
index 1c90382d9125c..5f4089efc15c3 100644
Binary files a/pandas/io/tests/legacy_table.h5 and b/pandas/io/tests/legacy_table.h5 differ
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 2b0d1cda89392..6f11ebdaaa7b3 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -4,7 +4,7 @@
import sys
import warnings
-from datetime import datetime
+import datetime
import numpy as np
from pandas import (Series, DataFrame, Panel, MultiIndex, bdate_range,
@@ -93,16 +93,17 @@ def test_versioning(self):
self.store.remove('df1')
self.store.append('df1', df[:10])
self.store.append('df1', df[10:])
- self.assert_(self.store.root.a._v_attrs.pandas_version == '0.10')
- self.assert_(self.store.root.b._v_attrs.pandas_version == '0.10')
- self.assert_(self.store.root.df1._v_attrs.pandas_version == '0.10')
+ self.assert_(self.store.root.a._v_attrs.pandas_version == '0.10.1')
+ self.assert_(self.store.root.b._v_attrs.pandas_version == '0.10.1')
+ self.assert_(self.store.root.df1._v_attrs.pandas_version == '0.10.1')
# write a file and wipe its versioning
self.store.remove('df2')
self.store.append('df2', df)
+
+ # this is an error because its table_type is appendable, but no version info
self.store.get_node('df2')._v_attrs.pandas_version = None
- self.store.select('df2')
- self.store.select('df2', [ Term('index','>',df.index[2]) ])
+ self.assertRaises(Exception, self.store.select,'df2')
def test_meta(self):
raise nose.SkipTest('no meta')
@@ -202,12 +203,12 @@ def test_put_string_index(self):
def test_put_compression(self):
df = tm.makeTimeDataFrame()
- self.store.put('c', df, table=True, compression='zlib')
+ self.store.put('c', df, table=True, complib='zlib')
tm.assert_frame_equal(self.store['c'], df)
# can't compress if table=False
self.assertRaises(ValueError, self.store.put, 'b', df,
- table=False, compression='zlib')
+ table=False, complib='zlib')
def test_put_compression_blosc(self):
tm.skip_if_no_package('tables', '2.2', app='blosc support')
@@ -215,9 +216,9 @@ def test_put_compression_blosc(self):
# can't compress if table=False
self.assertRaises(ValueError, self.store.put, 'b', df,
- table=False, compression='blosc')
+ table=False, complib='blosc')
- self.store.put('c', df, table=True, compression='blosc')
+ self.store.put('c', df, table=True, complib='blosc')
tm.assert_frame_equal(self.store['c'], df)
def test_put_integer(self):
@@ -287,6 +288,13 @@ def test_append(self):
self.store.append('wp1', wp_append2)
tm.assert_panel_equal(self.store['wp1'], wp)
+        # dtype issues - mixed type in a single object column
+ df = DataFrame(data=[[1,2],[0,1],[1,2],[0,0]])
+ df['mixed_column'] = 'testing'
+ df.ix[2,'mixed_column'] = np.nan
+ self.store.remove('df')
+ self.store.append('df', df)
+ tm.assert_frame_equal(self.store['df'],df)
def test_append_frame_column_oriented(self):
@@ -373,11 +381,15 @@ def test_append_with_strings(self):
wp = tm.makePanel()
wp2 = wp.rename_axis(dict([ (x,"%s_extra" % x) for x in wp.minor_axis ]), axis = 2)
+ def check_col(key,name,size):
+ self.assert_(getattr(self.store.get_table(key).table.description,name).itemsize == size)
+
self.store.append('s1', wp, min_itemsize = 20)
self.store.append('s1', wp2)
expected = concat([ wp, wp2], axis = 2)
expected = expected.reindex(minor_axis = sorted(expected.minor_axis))
tm.assert_panel_equal(self.store['s1'], expected)
+ check_col('s1','minor_axis',20)
# test dict format
self.store.append('s2', wp, min_itemsize = { 'minor_axis' : 20 })
@@ -385,6 +397,7 @@ def test_append_with_strings(self):
expected = concat([ wp, wp2], axis = 2)
expected = expected.reindex(minor_axis = sorted(expected.minor_axis))
tm.assert_panel_equal(self.store['s2'], expected)
+ check_col('s2','minor_axis',20)
# apply the wrong field (similar to #1)
self.store.append('s3', wp, min_itemsize = { 'major_axis' : 20 })
@@ -396,55 +409,189 @@ def test_append_with_strings(self):
# avoid truncation on elements
df = DataFrame([[123,'asdqwerty'], [345,'dggnhebbsdfbdfb']])
- self.store.append('df_big',df, min_itemsize = { 'values' : 1024 })
+ self.store.append('df_big',df)
tm.assert_frame_equal(self.store.select('df_big'), df)
+ check_col('df_big','values_block_1',15)
# appending smaller string ok
df2 = DataFrame([[124,'asdqy'], [346,'dggnhefbdfb']])
self.store.append('df_big',df2)
expected = concat([ df, df2 ])
tm.assert_frame_equal(self.store.select('df_big'), expected)
+ check_col('df_big','values_block_1',15)
# avoid truncation on elements
df = DataFrame([[123,'asdqwerty'], [345,'dggnhebbsdfbdfb']])
- self.store.append('df_big2',df, min_itemsize = { 'values' : 10 })
+ self.store.append('df_big2',df, min_itemsize = { 'values' : 50 })
tm.assert_frame_equal(self.store.select('df_big2'), df)
+ check_col('df_big2','values_block_1',50)
# bigger string on next append
- self.store.append('df_new',df, min_itemsize = { 'values' : 16 })
+ self.store.append('df_new',df)
df_new = DataFrame([[124,'abcdefqhij'], [346, 'abcdefghijklmnopqrtsuvwxyz']])
self.assertRaises(Exception, self.store.append, 'df_new',df_new)
+ # with nans
+ self.store.remove('df')
+ df = tm.makeTimeDataFrame()
+ df['string'] = 'foo'
+ df.ix[1:4,'string'] = np.nan
+ df['string2'] = 'bar'
+ df.ix[4:8,'string2'] = np.nan
+ df['string3'] = 'bah'
+ df.ix[1:,'string3'] = np.nan
+ self.store.append('df',df)
+ result = self.store.select('df')
+ tm.assert_frame_equal(result,df)
+
+
+ def test_append_with_data_columns(self):
+
+ df = tm.makeTimeDataFrame()
+ self.store.remove('df')
+ self.store.append('df', df[:2], data_columns = ['B'])
+ self.store.append('df', df[2:])
+ tm.assert_frame_equal(self.store['df'], df)
+
+        # check that we have indices created
+ assert(self.store.handle.root.df.table.cols.index.is_indexed == True)
+ assert(self.store.handle.root.df.table.cols.B.is_indexed == True)
+
+ # data column searching
+ result = self.store.select('df', [ Term('B>0') ])
+ expected = df[df.B>0]
+ tm.assert_frame_equal(result, expected)
+
+ # data column searching (with an indexable and a data_columns)
+ result = self.store.select('df', [ Term('B>0'), Term('index','>',df.index[3]) ])
+ df_new = df.reindex(index=df.index[4:])
+ expected = df_new[df_new.B>0]
+ tm.assert_frame_equal(result, expected)
+
+ # data column selection with a string data_column
+ df_new = df.copy()
+ df_new['string'] = 'foo'
+ df_new['string'][1:4] = np.nan
+ df_new['string'][5:6] = 'bar'
+ self.store.remove('df')
+ self.store.append('df', df_new, data_columns = ['string'])
+ result = self.store.select('df', [ Term('string', '=', 'foo') ])
+ expected = df_new[df_new.string == 'foo']
+ tm.assert_frame_equal(result, expected)
+
+ # using min_itemsize and a data column
+ def check_col(key,name,size):
+ self.assert_(getattr(self.store.get_table(key).table.description,name).itemsize == size)
+
+ self.store.remove('df')
+ self.store.append('df', df_new, data_columns = ['string'], min_itemsize = { 'string' : 30 })
+ check_col('df','string',30)
+ self.store.remove('df')
+ self.store.append('df', df_new, data_columns = ['string'], min_itemsize = 30)
+ check_col('df','string',30)
+ self.store.remove('df')
+ self.store.append('df', df_new, data_columns = ['string'], min_itemsize = { 'values' : 30 })
+ check_col('df','string',30)
+
+ df_new['string2'] = 'foobarbah'
+ df_new['string_block1'] = 'foobarbah1'
+ df_new['string_block2'] = 'foobarbah2'
+ self.store.remove('df')
+ self.store.append('df', df_new, data_columns = ['string','string2'], min_itemsize = { 'string' : 30, 'string2' : 40, 'values' : 50 })
+ check_col('df','string',30)
+ check_col('df','string2',40)
+ check_col('df','values_block_1',50)
+
+ # multiple data columns
+ df_new = df.copy()
+ df_new['string'] = 'foo'
+ df_new['string'][1:4] = np.nan
+ df_new['string'][5:6] = 'bar'
+ df_new['string2'] = 'foo'
+ df_new['string2'][2:5] = np.nan
+ df_new['string2'][7:8] = 'bar'
+ self.store.remove('df')
+ self.store.append('df', df_new, data_columns = ['A','B','string','string2'])
+ result = self.store.select('df', [ Term('string', '=', 'foo'), Term('string2=foo'), Term('A>0'), Term('B<0') ])
+ expected = df_new[(df_new.string == 'foo') & (df_new.string2 == 'foo') & (df_new.A > 0) & (df_new.B < 0)]
+ tm.assert_frame_equal(result, expected)
+
+ # yield an empty frame
+ result = self.store.select('df', [ Term('string', '=', 'foo'), Term('string2=bar'), Term('A>0'), Term('B<0') ])
+ expected = df_new[(df_new.string == 'foo') & (df_new.string2 == 'bar') & (df_new.A > 0) & (df_new.B < 0)]
+ tm.assert_frame_equal(result, expected)
+
+ # doc example
+ df_dc = df.copy()
+ df_dc['string'] = 'foo'
+ df_dc.ix[4:6,'string'] = np.nan
+ df_dc.ix[7:9,'string'] = 'bar'
+ df_dc['string2'] = 'cool'
+ df_dc['datetime'] = Timestamp('20010102')
+ df_dc = df_dc.convert_objects()
+ df_dc.ix[3:5,['A','B','datetime']] = np.nan
+
+ self.store.remove('df_dc')
+ self.store.append('df_dc', df_dc, data_columns = ['B','C','string','string2','datetime'])
+ result = self.store.select('df_dc',[ Term('B>0') ])
+
+ expected = df_dc[df_dc.B > 0]
+ tm.assert_frame_equal(result, expected)
+
+ result = self.store.select('df_dc',[ 'B > 0', 'C > 0', 'string == foo' ])
+ expected = df_dc[(df_dc.B > 0) & (df_dc.C > 0) & (df_dc.string == 'foo')]
+ tm.assert_frame_equal(result, expected)
+
def test_create_table_index(self):
+
+ def col(t,column):
+ return getattr(self.store.get_table(t).table.cols,column)
+
+ # index=False
wp = tm.makePanel()
- self.store.append('p5', wp)
- self.store.create_table_index('p5')
+ self.store.append('p5', wp, index=False)
+ self.store.create_table_index('p5', columns = ['major_axis'])
+ assert(col('p5','major_axis').is_indexed == True)
+ assert(col('p5','minor_axis').is_indexed == False)
- assert(self.store.handle.root.p5.table.cols.major_axis.is_indexed == True)
- assert(self.store.handle.root.p5.table.cols.minor_axis.is_indexed == False)
+ # index=True
+ self.store.append('p5i', wp, index=True)
+ assert(col('p5i','major_axis').is_indexed == True)
+ assert(col('p5i','minor_axis').is_indexed == True)
# default optlevels
- assert(self.store.handle.root.p5.table.cols.major_axis.index.optlevel == 6)
- assert(self.store.handle.root.p5.table.cols.major_axis.index.kind == 'medium')
+ self.store.get_table('p5').create_index()
+ assert(col('p5','major_axis').index.optlevel == 6)
+ assert(col('p5','minor_axis').index.kind == 'medium')
# let's change the indexing scheme
self.store.create_table_index('p5')
- assert(self.store.handle.root.p5.table.cols.major_axis.index.optlevel == 6)
- assert(self.store.handle.root.p5.table.cols.major_axis.index.kind == 'medium')
+ assert(col('p5','major_axis').index.optlevel == 6)
+ assert(col('p5','minor_axis').index.kind == 'medium')
self.store.create_table_index('p5', optlevel=9)
- assert(self.store.handle.root.p5.table.cols.major_axis.index.optlevel == 9)
- assert(self.store.handle.root.p5.table.cols.major_axis.index.kind == 'medium')
+ assert(col('p5','major_axis').index.optlevel == 9)
+ assert(col('p5','minor_axis').index.kind == 'medium')
self.store.create_table_index('p5', kind='full')
- assert(self.store.handle.root.p5.table.cols.major_axis.index.optlevel == 9)
- assert(self.store.handle.root.p5.table.cols.major_axis.index.kind == 'full')
+ assert(col('p5','major_axis').index.optlevel == 9)
+ assert(col('p5','minor_axis').index.kind == 'full')
self.store.create_table_index('p5', optlevel=1, kind='light')
- assert(self.store.handle.root.p5.table.cols.major_axis.index.optlevel == 1)
- assert(self.store.handle.root.p5.table.cols.major_axis.index.kind == 'light')
-
+ assert(col('p5','major_axis').index.optlevel == 1)
+ assert(col('p5','minor_axis').index.kind == 'light')
+
+ # data columns
df = tm.makeTimeDataFrame()
- self.store.append('f', df[:10])
- self.store.append('f', df[10:])
- self.store.create_table_index('f')
+ df['string'] = 'foo'
+ df['string2'] = 'bar'
+ self.store.append('f', df, data_columns=['string','string2'])
+ assert(col('f','index').is_indexed == True)
+ assert(col('f','string').is_indexed == True)
+ assert(col('f','string2').is_indexed == True)
+
+ # specify index=columns
+ self.store.append('f2', df, index=['string'], data_columns=['string','string2'])
+ assert(col('f2','index').is_indexed == False)
+ assert(col('f2','string').is_indexed == True)
+ assert(col('f2','string2').is_indexed == False)
# try to index a non-table
self.store.put('f2', df)
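The `check_col` assertions above hinge on how wide a fixed-width string column must be: the longest value it will hold, bumped up when a larger `min_itemsize` is requested. A plain-Python sketch of that sizing rule (not the pytables column machinery itself):

```python
def string_col_itemsize(values, min_itemsize=None):
    """Itemsize for a fixed-width string column: the longest value,
    widened to min_itemsize when one is requested."""
    size = max(len(v) for v in values)
    if min_itemsize is not None:
        size = max(size, min_itemsize)
    return size

# mirrors check_col('df_big', 'values_block_1', 15) and the explicit
# min_itemsize={'values': 50} case in the tests above
print(string_col_itemsize(['asdqwerty', 'dggnhebbsdfbdfb']))                   # 15
print(string_col_itemsize(['asdqwerty', 'dggnhebbsdfbdfb'], min_itemsize=50))  # 50
```

Appending a later frame whose strings exceed the established itemsize raises, which is what the `df_new` case above exercises.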
@@ -474,24 +621,86 @@ def test_create_table_index(self):
tables.__version__ = original
- def test_big_table(self):
- raise nose.SkipTest('no big table')
+ def test_big_table_frame(self):
+ raise nose.SkipTest('no big table frame')
# create and write a big table
- wp = Panel(np.random.randn(20, 1000, 1000), items= [ 'Item%s' % i for i in xrange(20) ],
- major_axis=date_range('1/1/2000', periods=1000), minor_axis = [ 'E%s' % i for i in xrange(1000) ])
+ df = DataFrame(np.random.randn(2000*100, 100), index = range(2000*100), columns = [ 'E%03d' % i for i in xrange(100) ])
+ for x in range(20):
+ df['String%03d' % x] = 'string%03d' % x
+
+ import time
+ x = time.time()
+ try:
+ store = HDFStore(self.scratchpath)
+ store.append('df',df)
+ rows = store.root.df.table.nrows
+ recons = store.select('df')
+ finally:
+ store.close()
+ os.remove(self.scratchpath)
+
+ print "\nbig_table frame [%s] -> %5.2f" % (rows,time.time()-x)
+
+
+ def test_big_table2_frame(self):
+ # this is a really big table: 2.5m rows x 300 float columns, 20 string columns
+ raise nose.SkipTest('no big table2 frame')
+
+ # create and write a big table
+ print "\nbig_table2 start"
+ import time
+ start_time = time.time()
+        df = DataFrame(np.random.randn(int(2.5*1000*1000), 300), index = range(int(2.5*1000*1000)), columns = [ 'E%03d' % i for i in xrange(300) ])
+ for x in range(20):
+ df['String%03d' % x] = 'string%03d' % x
+
+ print "\nbig_table2 frame (creation of df) [rows->%s] -> %5.2f" % (len(df.index),time.time()-start_time)
+ fn = 'big_table2.h5'
+
+ try:
+
+ def f(chunksize):
+ store = HDFStore(fn,mode = 'w')
+ store.append('df',df,chunksize=chunksize)
+ r = store.root.df.table.nrows
+ store.close()
+ return r
+
+ for c in [ 10000, 50000, 100000, 250000 ]:
+ start_time = time.time()
+ print "big_table2 frame [chunk->%s]" % c
+ rows = f(c)
+ print "big_table2 frame [rows->%s,chunk->%s] -> %5.2f" % (rows,c,time.time()-start_time)
+
+ finally:
+ os.remove(fn)
+
+ def test_big_table_panel(self):
+ raise nose.SkipTest('no big table panel')
+
+ # create and write a big table
+ wp = Panel(np.random.randn(20, 1000, 1000), items= [ 'Item%03d' % i for i in xrange(20) ],
+ major_axis=date_range('1/1/2000', periods=1000), minor_axis = [ 'E%03d' % i for i in xrange(1000) ])
wp.ix[:,100:200,300:400] = np.nan
+ for x in range(100):
+            wp['String%03d' % x] = 'string%03d' % x
+
+ import time
+ x = time.time()
try:
store = HDFStore(self.scratchpath)
- store._debug_memory = True
- store.append('wp',wp)
+ store.prof_append('wp',wp)
+ rows = store.root.wp.table.nrows
recons = store.select('wp')
finally:
store.close()
os.remove(self.scratchpath)
+ print "\nbig_table panel [%s] -> %5.2f" % (rows,time.time()-x)
+
def test_append_diff_item_order(self):
raise nose.SkipTest('append diff item order')
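The `chunksize` timing loop above appends a large frame in fixed-size slices; the slicing itself reduces to this (a sketch with a toy writer standing in for `store.append`):

```python
def append_in_chunks(rows, chunksize, write):
    """Feed `rows` to `write` in slices of at most `chunksize` rows."""
    for start in range(0, len(rows), chunksize):
        write(rows[start:start + chunksize])

written = []
append_in_chunks(list(range(10)), chunksize=4, write=written.append)
print(written)   # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Larger chunks mean fewer pytables writes per append at the cost of peak memory, which is what the `[10000, 50000, 100000, 250000]` sweep measures.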
@@ -503,6 +712,30 @@ def test_append_diff_item_order(self):
self.assertRaises(Exception, self.store.put, 'panel', wp2,
append=True)
+ def test_append_hierarchical(self):
+ index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
+ ['one', 'two', 'three']],
+ labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
+ [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
+ names=['foo', 'bar'])
+ df = DataFrame(np.random.randn(10, 3), index=index,
+ columns=['A', 'B', 'C'])
+
+ self.store.append('mi',df)
+ result = self.store.select('mi')
+ tm.assert_frame_equal(result, df)
+
+ def test_append_misc(self):
+
+ df = tm.makeDataFrame()
+ self.store.append('df',df,chunksize=1)
+ result = self.store.select('df')
+ tm.assert_frame_equal(result, df)
+
+ self.store.append('df1',df,expectedrows=10)
+ result = self.store.select('df1')
+ tm.assert_frame_equal(result, df)
+
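The hierarchical round-trip being tested works because `AppendableMultiFrameTable` stores the frame flat via `reset_index()` and recovers it with `set_index()` on the saved level names; that trick in isolation (a sketch using only the pandas index API, not HDFStore):

```python
import numpy as np
import pandas as pd

# A small MultiIndex frame, analogous to the ('foo', 'bar') index above.
index = pd.MultiIndex.from_tuples(
    [('foo', 'one'), ('foo', 'two'), ('bar', 'one')], names=['foo', 'bar'])
df = pd.DataFrame(np.arange(6).reshape(3, 2), index=index, columns=['A', 'B'])

flat = df.reset_index()                    # level names become ordinary columns
roundtrip = flat.set_index(['foo', 'bar'])  # what read() does with self.levels
assert roundtrip.equals(df)
```

Because the level names ride along as data columns, they are also usable in `where` terms on the stored table.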
def test_table_index_incompatible_dtypes(self):
df1 = DataFrame({'a': [1, 2, 3]})
df2 = DataFrame({'a': [4, 5, 6]},
@@ -527,51 +760,66 @@ def test_table_values_dtypes_roundtrip(self):
def test_table_mixed_dtypes(self):
# frame
- def _make_one_df():
- df = tm.makeDataFrame()
- df['obj1'] = 'foo'
- df['obj2'] = 'bar'
- df['bool1'] = df['A'] > 0
- df['bool2'] = df['B'] > 0
- df['bool3'] = True
- df['int1'] = 1
- df['int2'] = 2
- return df.consolidate()
-
- df1 = _make_one_df()
-
- self.store.append('df1_mixed', df1)
- tm.assert_frame_equal(self.store.select('df1_mixed'), df1)
+ df = tm.makeDataFrame()
+ df['obj1'] = 'foo'
+ df['obj2'] = 'bar'
+ df['bool1'] = df['A'] > 0
+ df['bool2'] = df['B'] > 0
+ df['bool3'] = True
+ df['int1'] = 1
+ df['int2'] = 2
+ df['timestamp1'] = Timestamp('20010102')
+ df['timestamp2'] = Timestamp('20010103')
+ df['datetime1'] = datetime.datetime(2001,1,2,0,0)
+ df['datetime2'] = datetime.datetime(2001,1,3,0,0)
+ df.ix[3:6,['obj1']] = np.nan
+ df = df.consolidate().convert_objects()
+
+ self.store.append('df1_mixed', df)
+ tm.assert_frame_equal(self.store.select('df1_mixed'), df)
# panel
- def _make_one_panel():
- wp = tm.makePanel()
- wp['obj1'] = 'foo'
- wp['obj2'] = 'bar'
- wp['bool1'] = wp['ItemA'] > 0
- wp['bool2'] = wp['ItemB'] > 0
- wp['int1'] = 1
- wp['int2'] = 2
- return wp.consolidate()
- p1 = _make_one_panel()
-
- self.store.append('p1_mixed', p1)
- tm.assert_panel_equal(self.store.select('p1_mixed'), p1)
+ wp = tm.makePanel()
+ wp['obj1'] = 'foo'
+ wp['obj2'] = 'bar'
+ wp['bool1'] = wp['ItemA'] > 0
+ wp['bool2'] = wp['ItemB'] > 0
+ wp['int1'] = 1
+ wp['int2'] = 2
+ wp = wp.consolidate()
+
+ self.store.append('p1_mixed', wp)
+ tm.assert_panel_equal(self.store.select('p1_mixed'), wp)
# ndim
- def _make_one_p4d():
- wp = tm.makePanel4D()
- wp['obj1'] = 'foo'
- wp['obj2'] = 'bar'
- wp['bool1'] = wp['l1'] > 0
- wp['bool2'] = wp['l2'] > 0
- wp['int1'] = 1
- wp['int2'] = 2
- return wp.consolidate()
-
- p4d = _make_one_p4d()
- self.store.append('p4d_mixed', p4d)
- tm.assert_panel4d_equal(self.store.select('p4d_mixed'), p4d)
+ wp = tm.makePanel4D()
+ wp['obj1'] = 'foo'
+ wp['obj2'] = 'bar'
+ wp['bool1'] = wp['l1'] > 0
+ wp['bool2'] = wp['l2'] > 0
+ wp['int1'] = 1
+ wp['int2'] = 2
+ wp = wp.consolidate()
+
+ self.store.append('p4d_mixed', wp)
+ tm.assert_panel4d_equal(self.store.select('p4d_mixed'), wp)
+
+ def test_unimplemented_dtypes_table_columns(self):
+ #### currently not supported dtypes ####
+ for n,f in [ ('unicode',u'\u03c3'), ('date',datetime.date(2001,1,2)) ]:
+ df = tm.makeDataFrame()
+ df[n] = f
+ self.assertRaises(NotImplementedError, self.store.append, 'df1_%s' % n, df)
+
+ # frame
+ df = tm.makeDataFrame()
+ df['obj1'] = 'foo'
+ df['obj2'] = 'bar'
+ df['datetime1'] = datetime.date(2001,1,2)
+ df = df.consolidate().convert_objects()
+
+        # this fails because we have a date in the object block
+ self.assertRaises(Exception, self.store.append, 'df_unimplemented', df)
def test_remove(self):
ts = tm.makeTimeSeries()
@@ -737,10 +985,10 @@ def test_terms(self):
('major_axis', '20121114'),
('major_axis', '>', '20121114'),
(('major_axis', ['20121114','20121114']),),
- ('major_axis', datetime(2012,11,14)),
- 'major_axis>20121114',
- 'major_axis>20121114',
- 'major_axis>20121114',
+ ('major_axis', datetime.datetime(2012,11,14)),
+ 'major_axis> 20121114',
+ 'major_axis >20121114',
+ 'major_axis > 20121114',
(('minor_axis', ['A','B']),),
(('minor_axis', ['A','B']),),
((('minor_axis', ['A','B']),),),
@@ -844,14 +1092,13 @@ def test_index_types(self):
ser = Series(values, [0, 'y'])
self._check_roundtrip(ser, func)
- ser = Series(values, [datetime.today(), 0])
+ ser = Series(values, [datetime.datetime.today(), 0])
self._check_roundtrip(ser, func)
ser = Series(values, ['y', 0])
self._check_roundtrip(ser, func)
- from datetime import date
- ser = Series(values, [date.today(), 'a'])
+ ser = Series(values, [datetime.date.today(), 'a'])
self._check_roundtrip(ser, func)
ser = Series(values, [1.23, 'b'])
@@ -863,7 +1110,7 @@ def test_index_types(self):
ser = Series(values, [1, 5])
self._check_roundtrip(ser, func)
- ser = Series(values, [datetime(2012, 1, 1), datetime(2012, 1, 2)])
+ ser = Series(values, [datetime.datetime(2012, 1, 1), datetime.datetime(2012, 1, 2)])
self._check_roundtrip(ser, func)
def test_timeseries_preepoch(self):
@@ -1104,6 +1351,33 @@ def test_select(self):
#self.assertRaises(Exception, self.store.select,
# 'wp2', ('column', ['A', 'D']))
+ # select with columns=
+ df = tm.makeTimeDataFrame()
+ self.store.remove('df')
+ self.store.append('df',df)
+ result = self.store.select('df', columns = ['A','B'])
+ expected = df.reindex(columns = ['A','B'])
+ tm.assert_frame_equal(expected, result)
+
+        # equivalently
+ result = self.store.select('df', [ ('columns', ['A','B']) ])
+ expected = df.reindex(columns = ['A','B'])
+ tm.assert_frame_equal(expected, result)
+
+ # with a data column
+ self.store.remove('df')
+ self.store.append('df',df, data_columns = ['A'])
+ result = self.store.select('df', [ 'A > 0' ], columns = ['A','B'])
+ expected = df[df.A > 0].reindex(columns = ['A','B'])
+ tm.assert_frame_equal(expected, result)
+
+ # with a data column, but different columns
+ self.store.remove('df')
+ self.store.append('df',df, data_columns = ['A'])
+ result = self.store.select('df', [ 'A > 0' ], columns = ['C','D'])
+ expected = df[df.A > 0].reindex(columns = ['C','D'])
+ tm.assert_frame_equal(expected, result)
+
def test_panel_select(self):
wp = tm.makePanel()
self.store.put('wp', wp, table=True)
@@ -1148,11 +1422,161 @@ def test_frame_select(self):
self.store.append('df_float', df)
self.store.select('df_float', [ Term("index<10.0"), Term("columns", "=", ["A"]) ])
+ # invalid terms
+ df = tm.makeTimeDataFrame()
+ self.store.append('df_time', df)
+ self.assertRaises(Exception, self.store.select, 'df_time', [ Term("index>0") ])
+
# can't select if not written as table
#self.store['frame'] = df
#self.assertRaises(Exception, self.store.select,
# 'frame', [crit1, crit2])
+ def test_unique(self):
+ df = tm.makeTimeDataFrame()
+
+
+ def check(x, y):
+ self.assert_((np.unique(x) == np.unique(y)).all() == True)
+
+ self.store.remove('df')
+ self.store.append('df', df)
+
+ # error
+ self.assertRaises(KeyError, self.store.unique, 'df','foo')
+
+ # valid
+ result = self.store.unique('df','index')
+ check(result.values,df.index.values)
+
+ # not a data indexable column
+ self.assertRaises(ValueError, self.store.unique, 'df','values_block_0')
+
+ # a data column
+ df2 = df.copy()
+ df2['string'] = 'foo'
+ self.store.append('df2',df2,data_columns = ['string'])
+ result = self.store.unique('df2','string')
+ check(result.values,df2['string'].unique())
+
+ # a data column with NaNs, result excludes the NaNs
+ df3 = df.copy()
+ df3['string'] = 'foo'
+ df3.ix[4:6,'string'] = np.nan
+ self.store.append('df3',df3,data_columns = ['string'])
+ result = self.store.unique('df3','string')
+ check(result.values,df3['string'].valid().unique())
+
+ def test_coordinates(self):
+ df = tm.makeTimeDataFrame()
+
+ self.store.remove('df')
+ self.store.append('df', df)
+
+ # all
+ c = self.store.select_as_coordinates('df')
+ assert((c.values == np.arange(len(df.index))).all() == True)
+
+ # get coordinates back & test vs frame
+ self.store.remove('df')
+
+ df = DataFrame(dict(A = range(5), B = range(5)))
+ self.store.append('df', df)
+ c = self.store.select_as_coordinates('df',[ 'index<3' ])
+ assert((c.values == np.arange(3)).all() == True)
+ result = self.store.select('df', where = c)
+ expected = df.ix[0:2,:]
+ tm.assert_frame_equal(result,expected)
+
+ c = self.store.select_as_coordinates('df', [ 'index>=3', 'index<=4' ])
+ assert((c.values == np.arange(2)+3).all() == True)
+ result = self.store.select('df', where = c)
+ expected = df.ix[3:4,:]
+ tm.assert_frame_equal(result,expected)
+
+ # multiple tables
+ self.store.remove('df1')
+ self.store.remove('df2')
+ df1 = tm.makeTimeDataFrame()
+ df2 = tm.makeTimeDataFrame().rename(columns = lambda x: "%s_2" % x)
+ self.store.append('df1',df1, data_columns = ['A','B'])
+ self.store.append('df2',df2)
+
+ c = self.store.select_as_coordinates('df1', [ 'A>0','B>0' ])
+ df1_result = self.store.select('df1',c)
+ df2_result = self.store.select('df2',c)
+ result = concat([ df1_result, df2_result ], axis=1)
+
+ expected = concat([ df1, df2 ], axis=1)
+ expected = expected[(expected.A > 0) & (expected.B > 0)]
+ tm.assert_frame_equal(result, expected)
+
+ def test_append_to_multiple(self):
+ df1 = tm.makeTimeDataFrame()
+ df2 = tm.makeTimeDataFrame().rename(columns = lambda x: "%s_2" % x)
+ df2['foo'] = 'bar'
+ df = concat([ df1, df2 ], axis=1)
+
+ # exceptions
+ self.assertRaises(Exception, self.store.append_to_multiple, { 'df1' : ['A','B'], 'df2' : None }, df, selector = 'df3')
+ self.assertRaises(Exception, self.store.append_to_multiple, { 'df1' : None, 'df2' : None }, df, selector = 'df3')
+ self.assertRaises(Exception, self.store.append_to_multiple, 'df1', df, 'df1')
+
+ # regular operation
+ self.store.append_to_multiple({ 'df1' : ['A','B'], 'df2' : None }, df, selector = 'df1')
+ result = self.store.select_as_multiple(['df1','df2'], where = [ 'A>0','B>0' ], selector = 'df1')
+ expected = df[(df.A > 0) & (df.B > 0)]
+ tm.assert_frame_equal(result, expected)
+
+
+ def test_select_as_multiple(self):
+ df1 = tm.makeTimeDataFrame()
+ df2 = tm.makeTimeDataFrame().rename(columns = lambda x: "%s_2" % x)
+ df2['foo'] = 'bar'
+ self.store.append('df1',df1, data_columns = ['A','B'])
+ self.store.append('df2',df2)
+
+ # exceptions
+ self.assertRaises(Exception, self.store.select_as_multiple, None, where = [ 'A>0','B>0' ], selector = 'df1')
+ self.assertRaises(Exception, self.store.select_as_multiple, [ None ], where = [ 'A>0','B>0' ], selector = 'df1')
+
+ # default select
+ result = self.store.select('df1', ['A>0','B>0'])
+ expected = self.store.select_as_multiple([ 'df1' ], where = [ 'A>0','B>0' ], selector = 'df1')
+ tm.assert_frame_equal(result, expected)
+ expected = self.store.select_as_multiple( 'df1' , where = [ 'A>0','B>0' ], selector = 'df1')
+ tm.assert_frame_equal(result, expected)
+
+ # multiple
+ result = self.store.select_as_multiple(['df1','df2'], where = [ 'A>0','B>0' ], selector = 'df1')
+ expected = concat([ df1, df2 ], axis=1)
+ expected = expected[(expected.A > 0) & (expected.B > 0)]
+ tm.assert_frame_equal(result, expected)
+
+ # multiple (diff selector)
+ result = self.store.select_as_multiple(['df1','df2'], where = [ Term('index', '>', df2.index[4]) ], selector = 'df2')
+ expected = concat([ df1, df2 ], axis=1)
+ expected = expected[5:]
+ tm.assert_frame_equal(result, expected)
+
+        # test exception for diff rows
+ self.store.append('df3',tm.makeTimeDataFrame(nper=50))
+ self.assertRaises(Exception, self.store.select_as_multiple, ['df1','df3'], where = [ 'A>0','B>0' ], selector = 'df1')
+
+ def test_start_stop(self):
+
+ df = DataFrame(dict(A = np.random.rand(20), B = np.random.rand(20)))
+ self.store.append('df', df)
+
+ result = self.store.select('df', [ Term("columns", "=", ["A"]) ], start=0, stop=5)
+ expected = df.ix[0:4,['A']]
+ tm.assert_frame_equal(result, expected)
+
+ # out of range
+ result = self.store.select('df', [ Term("columns", "=", ["A"]) ], start=30, stop=40)
+ assert(len(result) == 0)
+ assert(type(result) == DataFrame)
+
def test_select_filter_corner(self):
df = DataFrame(np.random.randn(50, 100))
df.index = ['%.3d' % c for c in df.index]
@@ -1230,14 +1654,25 @@ def test_legacy_table_read(self):
# force the frame
store.select('df2', typ = 'legacy_frame')
- # old version (this still throws an exception though)
+ self.assertRaises(Exception, store.select, 'wp1', Term('minor_axis','=','B'))
+
+ # old version warning
import warnings
warnings.filterwarnings('ignore', category=IncompatibilityWarning)
- self.assertRaises(Exception, store.select, 'wp1', Term('minor_axis','=','B'))
+ df2 = store.select('df2')
+ store.select('df2', Term('index', '>', df2.index[2]))
warnings.filterwarnings('always', category=IncompatibilityWarning)
store.close()
+ def test_legacy_0_10_read(self):
+ # legacy from 0.10
+ pth = curpath()
+ store = HDFStore(os.path.join(pth, 'legacy_0.10.h5'), 'r')
+ for k in store.keys():
+ store.select(k)
+ store.close()
+
def test_legacy_table_write(self):
# legacy table types
pth = curpath()
@@ -1252,7 +1687,7 @@ def test_legacy_table_write(self):
store.close()
def test_store_datetime_fractional_secs(self):
- dt = datetime(2012, 1, 2, 3, 4, 5, 123456)
+ dt = datetime.datetime(2012, 1, 2, 3, 4, 5, 123456)
series = Series([0], [dt])
self.store['a'] = series
self.assertEquals(self.store['a'].index[0], dt)
@@ -1307,13 +1742,13 @@ def test_store_datetime_mixed(self):
df['d'] = ts.index[:3]
self._check_roundtrip(df, tm.assert_frame_equal)
- def test_cant_write_multiindex_table(self):
- # for now, #1848
- df = DataFrame(np.random.randn(10, 4),
- index=[np.arange(5).repeat(2),
- np.tile(np.arange(2), 5)])
+ #def test_cant_write_multiindex_table(self):
+ # # for now, #1848
+ # df = DataFrame(np.random.randn(10, 4),
+ # index=[np.arange(5).repeat(2),
+ # np.tile(np.arange(2), 5)])
- self.assertRaises(Exception, self.store.put, 'foo', df, table=True)
+ # self.assertRaises(Exception, self.store.put, 'foo', df, table=True)
def curpath():
pth, _ = os.path.split(os.path.abspath(__file__))
diff --git a/pandas/lib.pyx b/pandas/lib.pyx
index d904d86f183c3..39911c88e5686 100644
--- a/pandas/lib.pyx
+++ b/pandas/lib.pyx
@@ -15,7 +15,9 @@ from cpython cimport (PyDict_New, PyDict_GetItem, PyDict_SetItem,
PyList_Check, PyFloat_Check,
PyString_Check,
PyTuple_SetItem,
- PyTuple_New)
+ PyTuple_New,
+ PyObject_SetAttrString)
+
cimport cpython
isnan = np.isnan
@@ -740,33 +742,61 @@ def clean_index_list(list obj):
return maybe_convert_objects(converted), 0
-from cpython cimport (PyDict_New, PyDict_GetItem, PyDict_SetItem,
- PyDict_Contains, PyDict_Keys,
- Py_INCREF, PyTuple_SET_ITEM,
- PyTuple_SetItem,
- PyTuple_New,
- PyObject_SetAttrString)
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def max_len_string_array(ndarray[object, ndim=1] arr):
+ """ return the maximum size of elements in a 1-dim string array """
+ cdef:
+ int i, m, l
+ length = arr.shape[0]
+
+ m = 0
+ for i from 0 <= i < length:
+ l = len(arr[i])
+
+ if l > m:
+ m = l
+
+ return m
@cython.boundscheck(False)
@cython.wraparound(False)
-def create_hdf_rows_2d(ndarray indexer0, ndarray[np.uint8_t, ndim=1] mask,
- list values):
+def array_replace_from_nan_rep(ndarray[object, ndim=1] arr, object nan_rep, object replace = None):
+ """ replace the values in the array with replacement if they are nan_rep; return the same array """
+
+ cdef int length = arr.shape[0]
+ cdef int i = 0
+ if replace is None:
+ replace = np.nan
+
+ for i from 0 <= i < length:
+ if arr[i] == nan_rep:
+ arr[i] = replace
+
+ return arr
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def create_hdf_rows_2d(ndarray indexer0,
+ ndarray[np.uint8_t, ndim=1] mask,
+ ndarray[np.uint8_t, ndim=1] searchable,
+ list values):
""" return a list of objects ready to be converted to rec-array format """
cdef:
int i, b, n_indexer0, n_blocks, tup_size
- ndarray v
list l
- object tup, val
+ object tup, val, v
n_indexer0 = indexer0.shape[0]
n_blocks = len(values)
tup_size = n_blocks+1
l = []
+
for i from 0 <= i < n_indexer0:
if not mask[i]:
-
+
tup = PyTuple_New(tup_size)
val = indexer0[i]
PyTuple_SET_ITEM(tup, 0, val)
@@ -774,7 +804,9 @@ def create_hdf_rows_2d(ndarray indexer0, ndarray[np.uint8_t, ndim=1] mask,
for b from 0 <= b < n_blocks:
- v = values[b][:, i]
+ v = values[b][:, i]
+ if searchable[b]:
+ v = v[0]
PyTuple_SET_ITEM(tup, b+1, v)
Py_INCREF(v)
@@ -785,14 +817,15 @@ def create_hdf_rows_2d(ndarray indexer0, ndarray[np.uint8_t, ndim=1] mask,
@cython.boundscheck(False)
@cython.wraparound(False)
def create_hdf_rows_3d(ndarray indexer0, ndarray indexer1,
- ndarray[np.uint8_t, ndim=2] mask, list values):
+ ndarray[np.uint8_t, ndim=2] mask,
+ ndarray[np.uint8_t, ndim=1] searchable,
+ list values):
""" return a list of objects ready to be converted to rec-array format """
cdef:
int i, j, b, n_indexer0, n_indexer1, n_blocks, tup_size
- ndarray v
list l
- object tup, val
+ object tup, val, v
n_indexer0 = indexer0.shape[0]
n_indexer1 = indexer1.shape[0]
@@ -818,6 +851,8 @@ def create_hdf_rows_3d(ndarray indexer0, ndarray indexer1,
for b from 0 <= b < n_blocks:
v = values[b][:, i, j]
+ if searchable[b]:
+ v = v[0]
PyTuple_SET_ITEM(tup, b+2, v)
Py_INCREF(v)
@@ -828,14 +863,15 @@ def create_hdf_rows_3d(ndarray indexer0, ndarray indexer1,
@cython.boundscheck(False)
@cython.wraparound(False)
def create_hdf_rows_4d(ndarray indexer0, ndarray indexer1, ndarray indexer2,
- ndarray[np.uint8_t, ndim=3] mask, list values):
+ ndarray[np.uint8_t, ndim=3] mask,
+ ndarray[np.uint8_t, ndim=1] searchable,
+ list values):
""" return a list of objects ready to be converted to rec-array format """
cdef:
int i, j, k, b, n_indexer0, n_indexer1, n_indexer2, n_blocks, tup_size
- ndarray v
list l
- object tup, val
+ object tup, val, v
n_indexer0 = indexer0.shape[0]
n_indexer1 = indexer1.shape[0]
@@ -868,6 +904,8 @@ def create_hdf_rows_4d(ndarray indexer0, ndarray indexer1, ndarray indexer2,
for b from 0 <= b < n_blocks:
v = values[b][:, i, j, k]
+ if searchable[b]:
+ v = v[0]
PyTuple_SET_ITEM(tup, b+3, v)
Py_INCREF(v)
diff --git a/vb_suite/hdfstore_bench.py b/vb_suite/hdfstore_bench.py
index d43d8b60a9cf0..23303f335af7e 100644
--- a/vb_suite/hdfstore_bench.py
+++ b/vb_suite/hdfstore_bench.py
@@ -220,3 +220,33 @@ def remove(f):
query_store_table = Benchmark("store.select('df12', [ ('index', '>', df.index[10000]), ('index', '<', df.index[15000]) ])", setup12, cleanup = "store.close()",
start_date=start_date)
+#----------------------------------------------------------------------
+# select from a panel table
+
+setup13 = common_setup + """
+p = Panel(randn(20, 1000, 1000), items= [ 'Item%03d' % i for i in xrange(20) ],
+ major_axis=date_range('1/1/2000', periods=1000), minor_axis = [ 'E%03d' % i for i in xrange(1000) ])
+
+remove(f)
+store = HDFStore(f)
+store.append('p1',p)
+"""
+
+read_store_table_panel = Benchmark("store.select('p1')", setup13, cleanup = "store.close()",
+ start_date=start_date)
+
+
+#----------------------------------------------------------------------
+# write to a panel table
+
+setup14 = common_setup + """
+p = Panel(randn(20, 1000, 1000), items= [ 'Item%03d' % i for i in xrange(20) ],
+ major_axis=date_range('1/1/2000', periods=1000), minor_axis = [ 'E%03d' % i for i in xrange(1000) ])
+
+remove(f)
+store = HDFStore(f)
+"""
+
+write_store_table_panel = Benchmark("store.append('p2',p)", setup14, cleanup = "store.close()",
+ start_date=start_date)
+
| - can construct searches on the actual columns of the data, by passing keyword `data_columns` to `append`
e.g. store.select('df_dc',[ Term('B>0'), Term('string=foo') ])
(where B and string are columns in the frame)
- added `nan_rep` for supporting string columns with nan's in them
- correctly determine dtypes of data columns that we cannot deal with (date,unicode), raise UnImplementedError
- added support for datetime64 in columns
- added method `unique` for fast retrieval of indexables or data columns
- added keyword `index=True` to automagically create indices on all indexables and data columns!
- performance enhancements on string columns
- added `chunksize` parameter to append to allow write chunking, significantly lower memory usage on writes
- added `expectedrows` parameter to append to allow specification of the TOTAL expected rows in a table (to optimize performance)
- added `start/stop` parameters to select to allow limiting of the selection space
- added multi-index dataframe support (closes GH #1277)
- added `append_to_multiple`, `select_as_multiple` and `select_as_coordinates` methods to support multiple-table creation & selection
- removed compression kw from put, replaced with complib (to make more consistent across library)
- Term parsing a little more robust
- more tests & docs for data columns
- added whatsnew 0.10.1 docs
| https://api.github.com/repos/pandas-dev/pandas/pulls/2561 | 2012-12-18T23:06:59Z | 2012-12-28T15:07:48Z | null | 2014-06-12T15:36:20Z |
Update default fillna method in the docstring | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index fbf5f0eecccc2..f6d8298aa51ff 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3233,7 +3233,7 @@ def fillna(self, value=None, method=None, axis=0, inplace=False,
Parameters
----------
- method : {'backfill', 'bfill', 'pad', 'ffill', None}, default 'pad'
+ method : {'backfill', 'bfill', 'pad', 'ffill', None}, default None
Method to use for filling holes in reindexed Series
pad / ffill: propagate last valid observation forward to next valid
backfill / bfill: use NEXT valid observation to fill gap
| https://api.github.com/repos/pandas-dev/pandas/pulls/2558 | 2012-12-18T00:20:23Z | 2012-12-18T00:39:56Z | 2012-12-18T00:39:55Z | 2012-12-18T00:40:05Z | |
Remove default method 'pad' from fillna docstring | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index fbf5f0eecccc2..5c082fd1c0fdb 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3233,7 +3233,7 @@ def fillna(self, value=None, method=None, axis=0, inplace=False,
Parameters
----------
- method : {'backfill', 'bfill', 'pad', 'ffill', None}, default 'pad'
+ method : {'backfill', 'bfill', 'pad', 'ffill', None}
Method to use for filling holes in reindexed Series
pad / ffill: propagate last valid observation forward to next valid
backfill / bfill: use NEXT valid observation to fill gap
| https://api.github.com/repos/pandas-dev/pandas/pulls/2557 | 2012-12-17T23:56:18Z | 2012-12-18T00:21:16Z | null | 2012-12-18T00:21:16Z | |
ENH: support for nanosecond time in offset and period | This is my attempt to add the so-far-missing nanosecond support for pd.DatetimeIndex (see issue #1812).
The Python modification should be relatively easy to review, but unfortunately I had to do a bigger refactoring of period.c in order to support subsecond times.
I'm still working on adding more tests, and I will try to include the tests at the C level as well (test_period.c, which can only be used standalone so far).
I'm looking forward to your remarks and I will be happy to modify the code to fit it into the project.
Cheers
Andreas
| https://api.github.com/repos/pandas-dev/pandas/pulls/2555 | 2012-12-17T20:13:15Z | 2013-03-14T11:37:05Z | 2013-03-14T11:37:04Z | 2014-06-21T02:43:58Z | |
New Pandas Cheat Sheet | diff --git a/doc/source/cheatsheet.txt b/doc/source/cheatsheet.txt
new file mode 100644
index 0000000000000..90fe804a9d552
--- /dev/null
+++ b/doc/source/cheatsheet.txt
@@ -0,0 +1,106 @@
+#
+# Pandas cheatsheet
+# - created during the PyData Workshop-Sprint 2012
+# - Hannah Chen, Henry Chow, Eric Cox, Robert Mauriello
+#
+
+Creating Pandas Objects S/D/P/G
+
+Series(data,index=index) S
+DataFrame(data,index=index,columns=column) D
+Panel(data,items=items,major_axis=index,minor_axis=column) P
+
+Deleting
+
+del df S/D/P # deletes pandas object
+del df[col] D # deletes column
+
+
+Indexing/Selection
+
+df[col] S/D/P # Select column
+df.xs(label) or df.ix[label] S/D/P # Select row by label
+df.ix[loc] S/D/P # Select row by location (int)
+df[5:10] S/D/P # Slice rows
+df[bool_vec] S/D/P # Select rows by boolean vector
+df.reindex(columns=[col1,col2,...]) S/D # Moves columns around
+
+Finding Data
+
+df[col].str.replace(find, replace) S/D # Finding string and replacing
+
+Viewing Data
+
+df.head(x) S/D # See the top x in the df, default 5
+df.tail(x) S/D # See the bottom x in the df, default 5
+df.index S/D # gets the index of the series/dataframe
+df.values S/D # gets the value of the series/dataframe
+s.value_counts() S # Gets unique items and makes summary table
+
+Indexing and selecting data
+
+series[label] # selects data element
+df[col] == df.col # selects series
+panel[item] # selects dataFrame
+
+df.get_value(row, col) #
+df.set_value(row, col, valueToSet) #
+
+Accessing column attribute
+ df.attribute1
+In-place transformation
+ df[['att1', 'att2']] = df[['att2', 'att1']]
+
+Accessing row with xs function #
+df.xs(row) #
+panel.major_xs(row), panel.minor_xs(row) #
+
+Slicing (returns subset of data)
+s[n] == s.get(n) # nth index
+s[a:b] # values from [a,b]
+s[::n] every n values (e.g. s[::2] -> every other value)
+s[::-1] backwards
+
+s.where(query) -> returns same shape as original data with NaN fills
+s.mask(condition) -> Nan for values that meet condition
+
+df.drop_duplicates([col1,col2,...]) # delete duplicates using col as identifers
+
+Getting/Setting with Advanced indexing (ix function)
+
+df.ix[row, col] -> value at (row, col) of df
+df.ix[label1:label2] -> series from label1 to label2
+
+Set Operations
+a | b, a & b, a - b
+
+
+Concatenating and joining objects
+
+concat(objs) # Concatenates a list or dict of homogeneously-typed objects
+
+s1.append(s2) # Append series s2 to s1
+df1.append(df2) # Append dataframe df2 to df1 - indexes must be disjoint but the columns do not need to be.
+
+merge(left, right, how='left', on=None, left_on=None, right_on=None, left_index=False, right_index=False, sort=True)
+
+left.join(right, on=keys) # left join of dataframe left and right using keys found in left (defaults to row labels)
+left.combine_first(right) # fill in missing values in left using right
+
+df.pivot(index='date', columns='variable', values='value') #
+
+
+
+date_range('1/1/2011', periods=72, freq='H') # define a date range
+ts.asfreq('45Min', method='pad') # changes frequency of ts to 45 minutes and fills gaps
+Series(randn(len(rng)), index=rng) # creates a series with index as a DateTimeIndex
+read_csv(path, index_col = ['Date', 'Time']) # reads a csv with date time data
+read_csv(path, parse_dates={'ts': ['Date', 'Time']}, index_col='ts') # parses dates and times and makes a DateTimeIndex
+ts.index # gets the DateTimeIndex for a time series DataFrame
+
+rs.ix['2011-11-01 08:00:00':'2011-11-01 10:00:00'] # returns data for nearest time matching that interval
+ts['1/31/2011'] # gets data for a specific entry
+ts[datetime(2011, 12, 25):] # gets data for a date time for all columns
+ts['10/31/2011':'12/31/2011'] # gets data for all data between two dates
+rs.between_time('08:00', '10:00') # extracts data for every day between start and end hours
+PeriodIndex([Period('2012-01'), Period('2012-02'), Period('2012-03')]) #defines a period index instead of DateTimeIndex
\ No newline at end of file
| Pandas cheatsheet
- created during the PyData Workshop-Sprint 2012
- Hannah Chen, Henry Chow, Eric Cox, Robert Mauriello
| https://api.github.com/repos/pandas-dev/pandas/pulls/2552 | 2012-12-17T00:56:32Z | 2013-07-29T05:13:49Z | null | 2017-03-02T10:36:16Z |
BUG: fixed DataFrame.to_records() error with datetime64, #1908 | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index fbf5f0eecccc2..a633bcf96c975 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1066,8 +1066,11 @@ def to_records(self, index=True):
y : recarray
"""
if index:
- arrays = [self.index.values] + [self[c].values
- for c in self.columns]
+ if (com.is_datetime64_dtype(self.index)):
+ arrays = [self.index.asobject.values] + [self[c].values for c in self.columns]
+ else:
+ arrays = [self.index.values] + [self[c].values for c in self.columns]
+
count = 0
index_names = self.index.names
if isinstance(self.index, MultiIndex):
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index b7d4e5908348f..3bbdc1b4e2ec6 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -2668,6 +2668,11 @@ def test_to_json_except(self):
df = DataFrame([1, 2, 3])
self.assertRaises(ValueError, df.to_json, orient="garbage")
+ def test_array_index_asobject(self):
+ df = DataFrame([["one", "two", "three"], ["four", "five", "six"]], index=pan.date_range("2012-01-01", "2012-01-02"))
+ self.assert_(df.to_records()['index'][0] == df.index[0])
+
+
def test_from_records_to_records(self):
# from numpy documentation
arr = np.zeros((2,),dtype=('i4,f4,a10'))
| https://api.github.com/repos/pandas-dev/pandas/pulls/2551 | 2012-12-16T22:30:37Z | 2012-12-18T00:34:24Z | null | 2012-12-18T00:38:42Z | |
BUG: Pytables minor changes 2 | diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index a507ca83c382d..175845fd38a2b 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -959,10 +959,10 @@ def infer(self, table):
new_self.get_attr()
return new_self
- def convert(self, sel):
+ def convert(self, values):
""" set the values from this selection """
- self.values = _maybe_convert(sel.values[self.cname], self.kind)
-
+ self.values = Index(_maybe_convert(values[self.cname], self.kind))
+
@property
def attrs(self):
return self.table._v_attrs
@@ -1099,9 +1099,9 @@ def validate_attr(self, append):
raise Exception("appended items dtype do not match existing items dtype"
" in table!")
- def convert(self, sel):
+ def convert(self, values):
""" set the data from this selection (and convert to the correct dtype if we can) """
- self.set_data(sel.values[self.cname])
+ self.set_data(values[self.cname])
# convert to the correct dtype
if self.dtype is not None:
@@ -1349,11 +1349,11 @@ def read_axes(self, where):
# create the selection
self.selection = Selection(self, where)
- self.selection.select()
+ values = self.selection.select()
# convert the data
for a in self.axes:
- a.convert(self.selection)
+ a.convert(values)
return True
@@ -1481,6 +1481,22 @@ def create_axes(self, axes, obj, validate = True, min_itemsize = None):
if validate:
self.validate(existing_table)
+ def process_axes(self, obj):
+ """ process axes filters """
+
+ def reindex(obj, axis, filt, ordered):
+ axis_name = obj._get_axis_name(axis)
+ ordd = ordered & filt
+ ordd = sorted(ordered.get_indexer(ordd))
+ return obj.reindex_axis(ordered.take(ordd), axis = obj._get_axis_number(axis_name), copy = False)
+
+ # apply the selection filters (but keep in the same order)
+ if self.selection.filter:
+ for axis, filt in self.selection.filter:
+ obj = reindex(obj, axis, filt, getattr(obj,obj._get_axis_name(axis)))
+
+ return obj
+
def create_description(self, compression = None, complevel = None):
""" create the description of the table from the axes & values """
@@ -1556,8 +1572,7 @@ def read(self, where=None):
if not self.read_axes(where): return None
- indicies = [ i.values for i in self.index_axes ]
- factors = [ Categorical.from_array(i) for i in indicies ]
+ factors = [ Categorical.from_array(a.values) for a in self.index_axes ]
levels = [ f.levels for f in factors ]
N = [ len(f.levels) for f in factors ]
labels = [ f.labels for f in factors ]
@@ -1597,7 +1612,8 @@ def read(self, where=None):
'appended')
# reconstruct
- long_index = MultiIndex.from_arrays(indicies)
+ long_index = MultiIndex.from_arrays([ i.values for i in self.index_axes ])
+
for c in self.values_axes:
lp = DataFrame(c.data, index=long_index, columns=c.values)
@@ -1627,12 +1643,8 @@ def read(self, where=None):
for axis,labels in self.non_index_axes:
wp = wp.reindex_axis(labels,axis=axis,copy=False)
- # apply the selection filters (but keep in the same order)
- if self.selection.filter:
- filter_axis_name = wp._get_axis_name(self.non_index_axes[0][0])
- ordered = getattr(wp,filter_axis_name)
- new_axis = sorted(ordered & self.selection.filter)
- wp = wp.reindex(**{ filter_axis_name : new_axis, 'copy' : False })
+ # apply the selection filters & axis orderings
+ wp = self.process_axes(wp)
return wp
@@ -1736,10 +1748,10 @@ def delete(self, where = None):
# create the selection
table = self.table
self.selection = Selection(self, where)
- self.selection.select_coords()
+ values = self.selection.select_coords()
# delete the rows in reverse order
- l = Series(self.selection.values).order()
+ l = Series(values).order()
ln = len(l)
if ln:
@@ -1792,7 +1804,7 @@ def read(self, where=None):
if not self.read_axes(where): return None
- index = Index(self.index_axes[0].values)
+ index = self.index_axes[0].values
frames = []
for a in self.values_axes:
columns = Index(a.values)
@@ -1815,16 +1827,8 @@ def read(self, where=None):
for axis,labels in self.non_index_axes:
df = df.reindex_axis(labels,axis=axis,copy=False)
- # apply the selection filters (but keep in the same order)
- filter_axis_name = df._get_axis_name(self.non_index_axes[0][0])
-
- ordered = getattr(df,filter_axis_name)
- if self.selection.filter:
- ordd = ordered & self.selection.filter
- ordd = sorted(ordered.get_indexer(ordd))
- df = df.reindex(**{ filter_axis_name : ordered.take(ordd), 'copy' : False })
- else:
- df = df.reindex(**{ filter_axis_name : ordered , 'copy' : False })
+ # apply the selection filters & axis orderings
+ df = self.process_axes(df)
return df
@@ -2185,11 +2189,11 @@ def eval(self):
# use a filter after reading
else:
- self.filter = set([ v[1] for v in values ])
+ self.filter = (self.field,Index([ v[1] for v in values ]))
else:
- self.filter = set([ v[1] for v in values ])
+ self.filter = (self.field,Index([ v[1] for v in values ]))
else:
@@ -2234,7 +2238,6 @@ class Selection(object):
def __init__(self, table, where=None):
self.table = table
self.where = where
- self.values = None
self.condition = None
self.filter = None
self.terms = self.generate(where)
@@ -2244,10 +2247,10 @@ def __init__(self, table, where=None):
conds = [ t.condition for t in self.terms if t.condition is not None ]
if len(conds):
self.condition = "(%s)" % ' & '.join(conds)
- self.filter = set()
+ self.filter = []
for t in self.terms:
if t.filter is not None:
- self.filter |= t.filter
+ self.filter.append(t.filter)
def generate(self, where):
""" where can be a : dict,list,tuple,string """
@@ -2268,15 +2271,15 @@ def select(self):
generate the selection
"""
if self.condition is not None:
- self.values = self.table.table.readWhere(self.condition)
+ return self.table.table.readWhere(self.condition)
else:
- self.values = self.table.table.read()
+ return self.table.table.read()
def select_coords(self):
"""
generate the selection
"""
- self.values = self.table.table.getWhereList(self.condition, sort = True)
+ return self.table.table.getWhereList(self.condition, sort = True)
def _get_index_factory(klass):
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index e751025444e16..2b0d1cda89392 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -1080,13 +1080,26 @@ def test_select(self):
wp = tm.makePanel()
# put/select ok
+ self.store.remove('wp')
self.store.put('wp', wp, table=True)
self.store.select('wp')
# non-table ok (where = None)
+ self.store.remove('wp')
self.store.put('wp2', wp, table=False)
self.store.select('wp2')
+ # selection on the non-indexable with a large number of columns
+ wp = Panel(np.random.randn(100, 100, 100), items = [ 'Item%03d' % i for i in xrange(100) ],
+ major_axis=date_range('1/1/2000', periods=100), minor_axis = [ 'E%03d' % i for i in xrange(100) ])
+
+ self.store.remove('wp')
+ self.store.append('wp', wp)
+ items = [ 'Item%03d' % i for i in xrange(80) ]
+ result = self.store.select('wp', Term('items', items))
+ expected = wp.reindex(items = items)
+ tm.assert_panel_equal(expected, result)
+
# selectin non-table with a where
#self.assertRaises(Exception, self.store.select,
# 'wp2', ('column', ['A', 'D']))
| fixes the ordering of returned data on an index axis when the selection (Term) has a lot of values
e.g. more than 61 fields specified in a single Term (weird but true)
| https://api.github.com/repos/pandas-dev/pandas/pulls/2545 | 2012-12-16T00:50:02Z | 2012-12-16T17:34:32Z | null | 2014-06-26T11:25:39Z |
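The reindex helper added in this patch keeps the table's own axis ordering while filtering; a minimal sketch of that idea with a plain pandas `Index` (the names `ordered` and `wanted` are illustrative, not from the patch):

```python
import pandas as pd

ordered = pd.Index(["c", "a", "b", "d"])  # axis order as stored in the table
wanted = ["d", "a"]                       # labels requested by a Term filter

# intersect, then sort the *positions* so the on-disk order survives
positions = sorted(ordered.get_indexer(ordered.intersection(wanted)))
result = ordered.take(positions)
print(list(result))  # ['a', 'd']
```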
Fix compilation on MSVC, signbit not in math.h | diff --git a/pandas/algos.pyx b/pandas/algos.pyx
index 62a38cd78c3b3..0d7006f08111b 100644
--- a/pandas/algos.pyx
+++ b/pandas/algos.pyx
@@ -40,7 +40,7 @@ cdef inline int int_max(int a, int b): return a if a >= b else b
cdef inline int int_min(int a, int b): return a if a <= b else b
-cdef extern from "math.h":
+cdef extern from "src/headers/math.h":
double sqrt(double x)
double fabs(double)
int signbit(double)
diff --git a/pandas/src/headers/math.h b/pandas/src/headers/math.h
new file mode 100644
index 0000000000000..8ccf11d07c3fe
--- /dev/null
+++ b/pandas/src/headers/math.h
@@ -0,0 +1,11 @@
+#ifndef _PANDAS_MATH_H_
+#define _PANDAS_MATH_H_
+
+#if defined(_MSC_VER)
+#include <math.h>
+__inline int signbit(double num) { return _copysign(1.0, num) < 0; }
+#else
+#include <math.h>
+#endif
+
+#endif
| Another MSVC hack, this time to deal with the fact that signbit is not in math.h.
By using `_copysign`, you can avoid potential issues with "negative zeros".
| https://api.github.com/repos/pandas-dev/pandas/pulls/2540 | 2012-12-15T03:36:15Z | 2012-12-15T21:09:53Z | null | 2014-06-27T11:49:27Z |
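The `_copysign` trick in the shim can be checked from Python, whose `math.copysign` wraps the same C routine; a minimal sketch of why it distinguishes negative zero where a plain comparison cannot:

```python
import math

# -0.0 compares equal to 0.0, so an ordinary comparison cannot detect its sign
assert -0.0 == 0.0

# copysign(1.0, x) returns -1.0 exactly when x carries the sign bit,
# which is the check the MSVC signbit() replacement performs
assert math.copysign(1.0, -0.0) < 0
assert not (math.copysign(1.0, 0.0) < 0)
print("negative zero detected via copysign")
```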
BUG: pytables minor changes | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 6d3d2996453a1..eac64eae1e5ab 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1140,7 +1140,7 @@ Queries are built up using a list of ``Terms`` (currently only **anding** of ter
store.append('wp',wp)
store
- store.select('wp',[ 'major_axis>20000102', ('minor_axis', '=', ['A','B']) ])
+ store.select('wp',[ Term('major_axis>20000102'), Term('minor_axis', '=', ['A','B']) ])
Indexing
~~~~~~~~
@@ -1161,6 +1161,21 @@ You can create an index for a table with ``create_table_index`` after data is al
Delete from a Table
~~~~~~~~~~~~~~~~~~~
+You can delete from a table selectively by specifying a ``where``. In deleting rows, it is important to understand the ``PyTables`` deletes rows by erasing the rows, then **moving** the following data. Thus deleting can potentially be a very expensive operation depending on the orientation of your data. This is especially true in higher dimensional objects (``Panel`` and ``Panel4D``). To get optimal deletion speed, it pays to have the dimension you are deleting be the first of the ``indexables``.
+
+Data is ordered (on the disk) in terms of the ``indexables``. Here's a simple use case. You store panel type data, with dates in the ``major_axis`` and ids in the ``minor_axis``. The data is then interleaved like this:
+
+ - date_1
+ - id_1
+ - id_2
+ - .
+ - id_n
+ - date_2
+ - id_1
+ - .
+ - id_n
+
+It should be clear that a delete operation on the ``major_axis`` will be fairly quick, as one chunk is removed, then the following data moved. On the other hand a delete operation on the ``minor_axis`` will be very expensive. In this case it would almost certainly be faster to rewrite the table using a ``where`` that selects all but the missing data.
.. ipython:: python
@@ -1205,7 +1220,6 @@ Performance
- ``AppendableTable`` which is a similiar table to past versions (this is the default).
- ``WORMTable`` (pending implementation) - is available to faciliate very fast writing of tables that are also queryable (but CANNOT support appends)
- - To delete a lot of data, it is sometimes better to erase the table and rewrite it. ``PyTables`` tends to increase the file size with deletions
- ``Tables`` offer better performance when compressed after writing them (as opposed to turning on compression at the very beginning)
use the pytables utilities ``ptrepack`` to rewrite the file (and also can change compression methods)
- Duplicate rows can be written, but are filtered out in selection (with the last items being selected; thus a table is unique on major, minor pairs)
@@ -1226,7 +1240,6 @@ These, by default, index the three axes ``items, major_axis, minor_axis``. On an
.. ipython:: python
- from pandas.io.pytables import Term
store.append('p4d2', p4d, axes = ['labels','major_axis','minor_axis'])
store
store.select('p4d2', [ Term('labels=l1'), Term('items=Item1'), Term('minor_axis=A_big_strings') ])
diff --git a/doc/source/v0.10.0.txt b/doc/source/v0.10.0.txt
index f92691f014587..5316458edd94c 100644
--- a/doc/source/v0.10.0.txt
+++ b/doc/source/v0.10.0.txt
@@ -255,7 +255,6 @@ Updated PyTables Support
.. ipython:: python
- from pandas.io.pytables import Term
wp = Panel(randn(2, 5, 4), items=['Item1', 'Item2'],
major_axis=date_range('1/1/2000', periods=5),
minor_axis=['A', 'B', 'C', 'D'])
diff --git a/pandas/__init__.py b/pandas/__init__.py
index dcecc1ccd408d..1d45727257eeb 100644
--- a/pandas/__init__.py
+++ b/pandas/__init__.py
@@ -32,7 +32,7 @@
from pandas.io.parsers import (read_csv, read_table, read_clipboard,
read_fwf, to_clipboard, ExcelFile,
ExcelWriter)
-from pandas.io.pytables import HDFStore
+from pandas.io.pytables import HDFStore, Term
from pandas.util.testing import debug
from pandas.tools.describe import value_range
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 91bd27ff510ef..a507ca83c382d 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -20,9 +20,9 @@
from pandas.sparse.array import BlockIndex, IntIndex
from pandas.tseries.api import PeriodIndex, DatetimeIndex
from pandas.core.common import adjoin
-from pandas.core.algorithms import match, unique
+from pandas.core.algorithms import match, unique, factorize
from pandas.core.strings import str_len
-from pandas.core.categorical import Factor
+from pandas.core.categorical import Categorical
from pandas.core.common import _asarray_tuplesafe, _try_sort
from pandas.core.internals import BlockManager, make_block, form_blocks
from pandas.core.reshape import block2d_to_block3d, block2d_to_blocknd, factor_indexer
@@ -1174,15 +1174,17 @@ def copy(self):
new_self = copy.copy(self)
return new_self
- def __eq__(self, other):
- """ return True if we are 'equal' to this other table (in all respects that matter) """
+ def validate(self, other):
+ """ validate against an existing table """
+ if other is None: return
+
+ if other.table_type != self.table_type:
+ raise TypeError("incompatible table_type with existing [%s - %s]" %
+ (other.table_type, self.table_type))
+
for c in ['index_axes','non_index_axes','values_axes']:
if getattr(self,c,None) != getattr(other,c,None):
- return False
- return True
-
- def __ne__(self, other):
- return not self.__eq__(other)
+            raise Exception("invalid combination of [%s] on appending data [%s] vs current table [%s]" % (c,getattr(self,c,None),getattr(other,c,None)))
@property
def nrows(self):
@@ -1263,17 +1265,6 @@ def validate_version(self, where = None):
if self.version is None or float(self.version) < 0.1:
warnings.warn("where criteria is being ignored as we this version is too old (or not-defined) [%s]" % self.version, IncompatibilityWarning)
- def validate(self):
- """ raise if we have an incompitable table type with the current """
- et = getattr(self.attrs,'table_type',None)
- if et is not None and et != self.table_type:
- raise TypeError("incompatible table_type with existing [%s - %s]" %
- (et, self.table_type))
- ic = getattr(self.attrs,'index_cols',None)
- if ic is not None and ic != self.index_cols():
- raise TypeError("incompatible index cols with existing [%s - %s]" %
- (ic, self.index_cols()))
-
@property
def indexables(self):
""" create/cache the indexables if they don't exist """
@@ -1390,7 +1381,7 @@ def create_axes(self, axes, obj, validate = True, min_itemsize = None):
"""
# map axes to numbers
- axes = set([ obj._get_axis_number(a) for a in axes ])
+ axes = [ obj._get_axis_number(a) for a in axes ]
# do we have an existing table (if so, use its axes)?
if self.infer_axes():
@@ -1404,26 +1395,42 @@ def create_axes(self, axes, obj, validate = True, min_itemsize = None):
raise Exception("currenctly only support ndim-1 indexers in an AppendableTable")
# create according to the new data
- self.index_axes = []
self.non_index_axes = []
# create axes to index and non_index
- j = 0
+ index_axes_map = dict()
for i, a in enumerate(obj.axes):
if i in axes:
name = obj._AXIS_NAMES[i]
- self.index_axes.append(_convert_index(a).set_name(name).set_axis(i).set_pos(j))
- j += 1
-
+ index_axes_map[i] = _convert_index(a).set_name(name).set_axis(i)
else:
- self.non_index_axes.append((i,list(a)))
+
+ # we might be able to change the axes on the appending data if necessary
+ append_axis = list(a)
+ if existing_table is not None:
+ indexer = len(self.non_index_axes)
+ exist_axis = existing_table.non_index_axes[indexer][1]
+ if append_axis != exist_axis:
+
+ # ahah! -> reindex
+ if sorted(append_axis) == sorted(exist_axis):
+ append_axis = exist_axis
+
+ self.non_index_axes.append((i,append_axis))
+
+ # set axis positions (based on the axes)
+ self.index_axes = [ index_axes_map[a].set_pos(j) for j, a in enumerate(axes) ]
+ j = len(self.index_axes)
# check for column conflicts
if validate:
for a in self.axes:
a.maybe_set_size(min_itemsize = min_itemsize)
+ # reindex by our non_index_axes
+ for a in self.non_index_axes:
+ obj = obj.reindex_axis(a[1], axis = a[0], copy = False)
blocks = self.get_data_blocks(obj)
@@ -1471,9 +1478,8 @@ def create_axes(self, axes, obj, validate = True, min_itemsize = None):
self.values_axes.append(dc)
# validate the axes if we have an existing table
- if existing_table is not None:
- if self != existing_table:
- raise Exception("try to write axes [%s] that are invalid to an existing table [%s]!" % (axes,self.group))
+ if validate:
+ self.validate(existing_table)
def create_description(self, compression = None, complevel = None):
""" create the description of the table from the axes & values """
@@ -1548,14 +1554,10 @@ def read(self, where=None):
""" we have n indexable columns, with an arbitrary number of data axes """
- _dm = create_debug_memory(self.parent)
- _dm('start')
-
if not self.read_axes(where): return None
- _dm('read_axes')
indicies = [ i.values for i in self.index_axes ]
- factors = [ Factor.from_array(i) for i in indicies ]
+ factors = [ Categorical.from_array(i) for i in indicies ]
levels = [ f.levels for f in factors ]
N = [ len(f.levels) for f in factors ]
labels = [ f.labels for f in factors ]
@@ -1577,9 +1579,7 @@ def read(self, where=None):
take_labels = [ l.take(sorter) for l in labels ]
items = Index(c.values)
- _dm('pre block')
block = block2d_to_blocknd(sorted_values, items, tuple(N), take_labels)
- _dm('block created done')
# create the object
mgr = BlockManager([block], [items] + levels)
@@ -1587,7 +1587,7 @@ def read(self, where=None):
# permute if needed
if self.is_transposed:
- obj = obj.transpose(*self.data_orientation)
+ obj = obj.transpose(*tuple(Series(self.data_orientation).argsort()))
objs.append(obj)
@@ -1617,16 +1617,12 @@ def read(self, where=None):
lp = DataFrame(new_values, index=new_index, columns=lp.columns)
objs.append(lp.to_panel())
- _dm('pre-concat')
-
# create the composite object
if len(objs) == 1:
wp = objs[0]
else:
wp = concat(objs, axis = 0, verify_integrity = True)
- _dm('post-concat')
-
# reorder by any non_index_axes
for axis,labels in self.non_index_axes:
wp = wp.reindex_axis(labels,axis=axis,copy=False)
@@ -1638,8 +1634,6 @@ def read(self, where=None):
new_axis = sorted(ordered & self.selection.filter)
wp = wp.reindex(**{ filter_axis_name : new_axis, 'copy' : False })
- _dm('done')
-
return wp
class LegacyFrameTable(LegacyTable):
@@ -1682,13 +1676,8 @@ def write(self, axes, obj, append=False, compression=None,
table = self.handle.createTable(self.group, **options)
else:
-
- # the table must already exist
table = self.table
- # validate the table
- self.validate()
-
# validate the axes and set the kinds
for a in self.axes:
a.validate_and_set(table, append)
@@ -1750,22 +1739,35 @@ def delete(self, where = None):
self.selection.select_coords()
# delete the rows in reverse order
- l = list(self.selection.values)
+ l = Series(self.selection.values).order()
ln = len(l)
if ln:
- # if we can do a consecutive removal - do it!
- if l[0]+ln-1 == l[-1]:
- table.removeRows(start = l[0], stop = l[-1]+1)
+ # construct groups of consecutive rows
+ diff = l.diff()
+ groups = list(diff[diff>1].index)
- # one by one
- else:
- l.reverse()
- for c in l:
- table.removeRows(c)
+ # 1 group
+ if not len(groups):
+ groups = [0]
+
+ # final element
+ if groups[-1] != ln:
+ groups.append(ln)
+
+ # initial element
+ if groups[0] != 0:
+ groups.insert(0,0)
- self.handle.flush()
+ # we must remove in reverse order!
+ pg = groups.pop()
+ for g in reversed(groups):
+ rows = l.take(range(g,pg))
+ table.removeRows(start = rows[rows.index[0]], stop = rows[rows.index[-1]]+1)
+ pg = g
+
+ self.handle.flush()
# return the number of rows removed
return ln
@@ -2285,22 +2287,3 @@ def f(values, freq=None, tz=None):
return f
return klass
-def create_debug_memory(parent):
- _debug_memory = getattr(parent,'_debug_memory',False)
- def get_memory(s):
- pass
-
- if not _debug_memory:
- pass
- else:
- try:
- import psutil, os
- def get_memory(s):
- p = psutil.Process(os.getpid())
- (rss,vms) = p.get_memory_info()
- mp = p.get_memory_percent()
- print "[%s] cur_mem->%.2f (MB),per_mem->%.2f" % (s,rss/1000000.0,mp)
- except:
- pass
-
- return get_memory
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index a047109e509a9..e751025444e16 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -271,6 +271,23 @@ def test_append(self):
self.store.append('p4d', p4d.ix[:,:,10:,:], axes=['items','major_axis','minor_axis'])
tm.assert_panel4d_equal(self.store['p4d'], p4d)
+        # test using a different number of items on each axis
+ p4d2 = p4d.copy()
+ p4d2['l4'] = p4d['l1']
+ p4d2['l5'] = p4d['l1']
+ self.store.remove('p4d2')
+ self.store.append('p4d2', p4d2, axes=['items','major_axis','minor_axis'])
+ tm.assert_panel4d_equal(self.store['p4d2'], p4d2)
+
+        # test using a different order of items on the non-index axes
+ self.store.remove('wp1')
+ wp_append1 = wp.ix[:,:10,:]
+ self.store.append('wp1', wp_append1)
+ wp_append2 = wp.ix[:,10:,:].reindex(items = wp.items[::-1])
+ self.store.append('wp1', wp_append2)
+ tm.assert_panel_equal(self.store['wp1'], wp)
+
+
def test_append_frame_column_oriented(self):
# column oriented
@@ -297,20 +314,45 @@ def test_ndim_indexables(self):
p4d = tm.makePanel4D()
+ def check_indexers(key, indexers):
+ for i,idx in enumerate(indexers):
+ self.assert_(getattr(getattr(self.store.root,key).table.description,idx)._v_pos == i)
+
# append then change (will take existing schema)
+ indexers = ['items','major_axis','minor_axis']
+
self.store.remove('p4d')
- self.store.append('p4d', p4d.ix[:,:,:10,:], axes=['items','major_axis','minor_axis'])
+ self.store.append('p4d', p4d.ix[:,:,:10,:], axes=indexers)
+ self.store.append('p4d', p4d.ix[:,:,10:,:])
+ tm.assert_panel4d_equal(self.store.select('p4d'),p4d)
+ check_indexers('p4d',indexers)
+
+        # same as above, but try to append with different axes
+ self.store.remove('p4d')
+ self.store.append('p4d', p4d.ix[:,:,:10,:], axes=indexers)
self.store.append('p4d', p4d.ix[:,:,10:,:], axes=['labels','items','major_axis'])
+ tm.assert_panel4d_equal(self.store.select('p4d'),p4d)
+ check_indexers('p4d',indexers)
# pass incorrect number of axes
self.store.remove('p4d')
self.assertRaises(Exception, self.store.append, 'p4d', p4d.ix[:,:,:10,:], axes=['major_axis','minor_axis'])
- # different than default indexables
+ # different than default indexables #1
+ indexers = ['labels','major_axis','minor_axis']
self.store.remove('p4d')
- self.store.append('p4d', p4d.ix[:,:,:10,:], axes=[0,2,3])
- self.store.append('p4d', p4d.ix[:,:,10:,:], axes=[0,2,3])
+ self.store.append('p4d', p4d.ix[:,:,:10,:], axes=indexers)
+ self.store.append('p4d', p4d.ix[:,:,10:,:])
+ tm.assert_panel4d_equal(self.store['p4d'], p4d)
+ check_indexers('p4d',indexers)
+
+ # different than default indexables #2
+ indexers = ['major_axis','labels','minor_axis']
+ self.store.remove('p4d')
+ self.store.append('p4d', p4d.ix[:,:,:10,:], axes=indexers)
+ self.store.append('p4d', p4d.ix[:,:,10:,:])
tm.assert_panel4d_equal(self.store['p4d'], p4d)
+ check_indexers('p4d',indexers)
# partial selection
result = self.store.select('p4d',['labels=l1'])
@@ -451,6 +493,8 @@ def test_big_table(self):
os.remove(self.scratchpath)
def test_append_diff_item_order(self):
+ raise nose.SkipTest('append diff item order')
+
wp = tm.makePanel()
wp1 = wp.ix[:, :10, :]
wp2 = wp.ix[['ItemC', 'ItemB', 'ItemA'], 10:, :]
@@ -597,6 +641,18 @@ def test_remove_where(self):
def test_remove_crit(self):
wp = tm.makePanel()
+
+ # group row removal
+ date4 = wp.major_axis.take([ 0,1,2,4,5,6,8,9,10 ])
+ crit4 = Term('major_axis',date4)
+ self.store.put('wp3', wp, table=True)
+ n = self.store.remove('wp3', where=[crit4])
+ assert(n == 36)
+ result = self.store.select('wp3')
+ expected = wp.reindex(major_axis = wp.major_axis-date4)
+ tm.assert_panel_equal(result, expected)
+
+ # upper half
self.store.put('wp', wp, table=True)
date = wp.major_axis[len(wp.major_axis) // 2]
@@ -604,7 +660,6 @@ def test_remove_crit(self):
crit2 = Term('minor_axis',['A', 'D'])
n = self.store.remove('wp', where=[crit1])
- # deleted number
assert(n == 56)
n = self.store.remove('wp', where=[crit2])
@@ -614,32 +669,36 @@ def test_remove_crit(self):
expected = wp.truncate(after=date).reindex(minor=['B', 'C'])
tm.assert_panel_equal(result, expected)
- # test non-consecutive row removal
- wp = tm.makePanel()
+ # individual row elements
self.store.put('wp2', wp, table=True)
date1 = wp.major_axis[1:3]
- date2 = wp.major_axis[5]
- date3 = [wp.major_axis[7],wp.major_axis[9]]
-
crit1 = Term('major_axis',date1)
- crit2 = Term('major_axis',date2)
- crit3 = Term('major_axis',date3)
-
self.store.remove('wp2', where=[crit1])
+ result = self.store.select('wp2')
+ expected = wp.reindex(major_axis=wp.major_axis-date1)
+ tm.assert_panel_equal(result, expected)
+
+ date2 = wp.major_axis[5]
+ crit2 = Term('major_axis',date2)
self.store.remove('wp2', where=[crit2])
- self.store.remove('wp2', where=[crit3])
result = self.store['wp2']
+ expected = wp.reindex(major_axis=wp.major_axis-date1-Index([date2]))
+ tm.assert_panel_equal(result, expected)
- ma = list(wp.major_axis)
- for d in date1:
- ma.remove(d)
- ma.remove(date2)
- for d in date3:
- ma.remove(d)
- expected = wp.reindex(major = ma)
+ date3 = [wp.major_axis[7],wp.major_axis[9]]
+ crit3 = Term('major_axis',date3)
+ self.store.remove('wp2', where=[crit3])
+ result = self.store['wp2']
+ expected = wp.reindex(major_axis=wp.major_axis-date1-Index([date2])-Index(date3))
tm.assert_panel_equal(result, expected)
+ # corners
+ self.store.put('wp4', wp, table=True)
+ n = self.store.remove('wp4', where=[Term('major_axis','>',wp.major_axis[-1])])
+ result = self.store.select('wp4')
+ tm.assert_panel_equal(result, wp)
+
def test_terms(self):
wp = tm.makePanel()
diff --git a/pandas/lib.pyx b/pandas/lib.pyx
index 6570ce7abb0ae..d904d86f183c3 100644
--- a/pandas/lib.pyx
+++ b/pandas/lib.pyx
@@ -754,7 +754,7 @@ def create_hdf_rows_2d(ndarray indexer0, ndarray[np.uint8_t, ndim=1] mask,
""" return a list of objects ready to be converted to rec-array format """
cdef:
- unsigned int i, b, n_indexer0, n_blocks, tup_size
+ int i, b, n_indexer0, n_blocks, tup_size
ndarray v
list l
object tup, val
@@ -789,7 +789,7 @@ def create_hdf_rows_3d(ndarray indexer0, ndarray indexer1,
""" return a list of objects ready to be converted to rec-array format """
cdef:
- unsigned int i, j, b, n_indexer0, n_indexer1, n_blocks, tup_size
+ int i, j, b, n_indexer0, n_indexer1, n_blocks, tup_size
ndarray v
list l
object tup, val
@@ -832,7 +832,7 @@ def create_hdf_rows_4d(ndarray indexer0, ndarray indexer1, ndarray indexer2,
""" return a list of objects ready to be converted to rec-array format """
cdef:
- unsigned int i, j, k, b, n_indexer0, n_indexer1, n_indexer2, n_blocks, tup_size
+ int i, j, k, b, n_indexer0, n_indexer1, n_indexer2, n_blocks, tup_size
ndarray v
list l
object tup, val
@@ -871,7 +871,7 @@ def create_hdf_rows_4d(ndarray indexer0, ndarray indexer1, ndarray indexer2,
PyTuple_SET_ITEM(tup, b+3, v)
Py_INCREF(v)
- l.append(tup)
+ l.append(tup)
return l
| - ENH: moved Term to the top-level pandas namespace (like HDFStore)
- BUG: writing a Panel4D in cython (lib.pyx)
- BUG: interpreting the axes parameter to orient data
- BUG/ENH: allow appending data whose non_index_axes are in a different order,
which is then reordered to match the on-disk data
- DOC: explanation of how to orient data to provide optimal append/delete operations
- BUG/TST: more complete coverage of ndim and non-default axes ordering
- removed debugging memory calls
| https://api.github.com/repos/pandas-dev/pandas/pulls/2536 | 2012-12-14T17:17:34Z | 2012-12-15T22:28:56Z | 2012-12-15T22:28:56Z | 2014-06-17T10:09:37Z |
ENH: df.select uses bool(crit(x)) rather then crit(x) | diff --git a/RELEASE.rst b/RELEASE.rst
index 93941ebf11450..0d4d6670c6591 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -170,6 +170,7 @@ pandas 0.10.0
- Optimize ``unstack`` memory usage by compressing indices (GH2278_)
- Fix HTML repr in IPython qtconsole if opening window is small (GH2275_)
- Escape more special characters in console output (GH2492_)
+ - df.select now invokes bool on the result of crit(x) (GH2487_)
**Bug fixes**
@@ -297,6 +298,7 @@ pandas 0.10.0
.. _GH2447: https://github.com/pydata/pandas/issues/2447
.. _GH2275: https://github.com/pydata/pandas/issues/2275
.. _GH2492: https://github.com/pydata/pandas/issues/2492
+.. _GH2487: https://github.com/pydata/pandas/issues/2487
.. _GH2273: https://github.com/pydata/pandas/issues/2273
.. _GH2266: https://github.com/pydata/pandas/issues/2266
.. _GH2038: https://github.com/pydata/pandas/issues/2038
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 5355db8eaabcc..9c3c85d2a0d5f 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -315,7 +315,7 @@ def select(self, crit, axis=0):
axis = self._get_axis(axis)
if len(axis) > 0:
- new_axis = axis[np.asarray([crit(label) for label in axis])]
+ new_axis = axis[np.asarray([bool(crit(label)) for label in axis])]
else:
new_axis = axis
| closes #2487
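The one-line change coerces each criterion result with `bool()`, so criteria that return truthy/falsy non-boolean objects (e.g. strings) build a proper boolean mask instead of an object array. A minimal sketch of the patched selection logic, outside of DataFrame:

```python
import numpy as np

def select_labels(labels, crit):
    """Mirror of the patched select logic: each crit(label) result is
    coerced with bool() before building the boolean mask."""
    mask = np.asarray([bool(crit(label)) for label in labels])
    return [label for label, keep in zip(labels, mask) if keep]

# a criterion returning a string (truthy when non-empty), not a bool
labels = ['apple', 'avocado', 'banana']
picked = select_labels(labels, lambda s: s.replace('banana', ''))
# 'banana' maps to the empty (falsy) string and is dropped
```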
| https://api.github.com/repos/pandas-dev/pandas/pulls/2532 | 2012-12-14T03:27:16Z | 2012-12-14T03:41:44Z | null | 2014-06-23T02:50:20Z |
Weighted mean | diff --git a/pandas/algos.pyx b/pandas/algos.pyx
index 664067c55222c..cc52f20b27e2d 100644
--- a/pandas/algos.pyx
+++ b/pandas/algos.pyx
@@ -64,6 +64,16 @@ tiebreakers = {
}
+ctypedef fused numeric_t:
+ float64_t
+ int64_t
+ int32_t
+
+ctypedef fused weight_t:
+ float64_t
+ int64_t
+ int32_t
+
# ctypedef fused pvalue_t:
# float64_t
# int64_t
@@ -613,7 +623,144 @@ def rank_2d_generic(object in_arr, axis=0, ties_method='average',
# result[i, j] = values[i, indexer[i, j]]
# return result
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def weighted_nanmean_1d(ndarray[numeric_t, ndim=1] arr,
+ ndarray[weight_t, ndim=1] weight,
+ bint skipna=True):
+ cdef:
+ Py_ssize_t i
+ numeric_t val
+ float64_t numer=0
+ weight_t wgt, denom=0
+
+ if len(arr) != len(weight):
+ raise ValueError('Data and weights must be same size')
+
+ if skipna:
+ for i in range(len(arr)):
+ val = arr[i]
+ wgt = weight[i]
+
+ if val == val and wgt == wgt:
+ numer += val * wgt
+ denom += wgt
+ else:
+ for i in range(len(arr)):
+ wgt = weight[i]
+ numer += arr[i] * wgt
+ denom += wgt
+
+ if denom == 0:
+ return np.nan
+
+ return numer / denom
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def weighted_nanmean_2d_1d_weights(ndarray[numeric_t, ndim=2] arr,
+ ndarray[weight_t, ndim=1] weight,
+ int axis=0,
+ bint skipna=True):
+
+ cdef:
+ Py_ssize_t i, len_x, len_y
+ numeric_t val
+ weight_t wgt
+ ndarray[double] numer
+ ndarray[weight_t] denom
+
+ len_x, len_y = (<object>arr).shape
+ axis_len = (len_x, len_y)[1 - axis]
+ other_len = (len_x, len_y)[axis]
+ if len(weight) != other_len:
+ raise ValueError('Weights must be same size as data along axis %d'
+ % axis)
+
+ numer = np.zeros(axis_len, dtype=float)
+ denom = np.zeros(axis_len, dtype=weight.dtype)
+
+ if arr.flags.f_contiguous: #outer loop columns
+ if axis == 0:
+ if skipna:
+ for j in range(len_y):
+ for i in range(len_x):
+ val = arr[i, j]
+ wgt = weight[i]
+
+ if val == val and wgt == wgt:
+ numer[j] += val * wgt
+ denom[j] += wgt
+ else:
+ for j in range(len_y):
+ for i in range(len_x):
+ wgt = weight[i]
+
+ numer[j] += arr[i, j] * wgt
+ denom[j] += wgt
+
+ else:
+ if skipna:
+ for j in range(len_y):
+ for i in range(len_x):
+ val = arr[i, j]
+ wgt = weight[j]
+
+ if val == val and wgt == wgt:
+ numer[i] += val * wgt
+ denom[i] += wgt
+ else:
+ for j in range(len_y):
+ for i in range(len_x):
+ wgt = weight[j]
+ val = arr[i, j]
+
+ numer[i] += val * wgt
+ denom[i] += wgt
+ else:
+ if axis == 0:
+ if skipna:
+ for i in range(len_x):
+ for j in range(len_y):
+ val = arr[i, j]
+ wgt = weight[i]
+
+ if val == val and wgt == wgt:
+ numer[j] += val * wgt
+ denom[j] += wgt
+ else:
+ for i in range(len_x):
+ for j in range(len_y):
+ wgt = weight[i]
+ numer[j] += arr[i, j] * wgt
+ denom[j] += wgt
+ else:
+ if skipna:
+ for i in range(len_x):
+ for j in range(len_y):
+ val = arr[i, j]
+ wgt = weight[j]
+
+ if val == val and wgt == wgt:
+ numer[i] += val * wgt
+ denom[i] += wgt
+
+ else:
+ for i in range(len_x):
+ for j in range(len_y):
+ wgt = weight[j]
+                        numer[i] += arr[i, j] * wgt
+ denom[i] += wgt
+
+
+ for k in range(axis_len):
+ wgt = denom[k]
+ if wgt == 0:
+ numer[k] = NaN
+ else:
+ numer[k] /= denom[k]
+ return numer
@cython.wraparound(False)
@cython.boundscheck(False)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 95b5f397ae1be..9a9246221d97d 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3710,7 +3710,7 @@ def combine_first(self, other):
return self.combine(other, combiner)
def update(self, other, join='left', overwrite=True, filter_func=None,
- raise_conflict=False):
+ raise_conflict=False):
"""
Modify DataFrame in place using non-NA values from passed
DataFrame. Aligns on indices
@@ -4679,15 +4679,41 @@ def sum(self, axis=0, numeric_only=None, skipna=True, level=None):
return self._reduce(nanops.nansum, axis=axis, skipna=skipna,
numeric_only=numeric_only)
- @Substitution(name='mean', shortname='mean', na_action=_doc_exclude_na,
+ @Substitution(name='mean (optionally weighted)', shortname='mean',
+ na_action=_doc_exclude_na,
extras='')
@Appender(_stat_doc)
- def mean(self, axis=0, skipna=True, level=None):
+ def mean(self, axis=0, skipna=True, level=None, weights=None):
if level is not None:
+ if weights is not None:
+ raise NotImplementedError()
return self._agg_by_level('mean', axis=axis, level=level,
skipna=skipna)
- return self._reduce(nanops.nanmean, axis=axis, skipna=skipna,
- numeric_only=None)
+
+ def helper(values, axis, skipna, weights=None):
+ if weights is None:
+ return nanops.nanmean(values, axis=axis, skipna=skipna)
+
+ if isinstance(weights, (Series, Index)):
+ weights = weights.values
+ elif not isinstance(weights, np.ndarray):
+ weights = np.asarray(weights)
+
+ if com.is_datetime64_dtype(weights):
+ weights = weights.view('i8')
+ elif not com.is_integer_dtype(weights):
+ weights = com.ensure_float(weights)
+
+ if com.is_datetime64_dtype(values):
+ values = values.view('i8')
+ elif not com.is_integer_dtype(values):
+ values = com.ensure_float(values)
+
+ return nanops.weighted_nanmean(values, weights, axis=axis,
+ skipna=skipna)
+
+ return self._reduce(helper, axis=axis, skipna=skipna,
+ numeric_only=None, weights=weights)
@Substitution(name='minimum', shortname='min', na_action=_doc_exclude_na,
extras='')
@@ -4801,35 +4827,34 @@ def _agg_by_level(self, name, axis=0, level=0, skipna=True, **kwds):
return grouped.aggregate(applyf)
def _reduce(self, op, axis=0, skipna=True, numeric_only=None,
- filter_type=None, **kwds):
+ filter_type=None, weights=None, **kwds):
f = lambda x: op(x, axis=axis, skipna=skipna, **kwds)
+
labels = self._get_agg_axis(axis)
+ if weights is not None:
+ weights = weights.reindex(self._get_axis(axis))
+ kwds['weights'] = weights
+
if numeric_only is None:
try:
values = self.values
result = f(values)
except Exception:
- if filter_type is None or filter_type == 'numeric':
- data = self._get_numeric_data()
- elif filter_type == 'bool':
- data = self._get_bool_data()
- else:
- raise NotImplementedError
+ data, labels = self._numeric_or_bool(filter_type, axis)
+ if weights is not None:
+ weights = weights.reindex(data._get_axis(axis))
+ kwds['weights'] = weights
+
result = f(data.values)
- labels = data._get_agg_axis(axis)
else:
+ data = self
if numeric_only:
- if filter_type is None or filter_type == 'numeric':
- data = self._get_numeric_data()
- elif filter_type == 'bool':
- data = self._get_bool_data()
- else:
- raise NotImplementedError
- values = data.values
- labels = data._get_agg_axis(axis)
- else:
- values = self.values
- result = f(values)
+ data, labels = self._numeric_or_bool(filter_type, axis)
+
+ if weights is not None:
+ weights = weights.reindex(data._get_axis(axis))
+ kwds['weights'] = weights
+ result = f(data.values)
if result.dtype == np.object_:
try:
@@ -4843,6 +4868,16 @@ def _reduce(self, op, axis=0, skipna=True, numeric_only=None,
return Series(result, index=labels)
+ def _numeric_or_bool(self, filter_type, axis):
+ if filter_type is None or filter_type == 'numeric':
+ data = self._get_numeric_data()
+ elif filter_type == 'bool':
+ data = self._get_bool_data()
+ else:
+ raise NotImplementedError
+ labels = data._get_agg_axis(axis)
+ return data, labels
+
def idxmin(self, axis=0, skipna=True):
"""
Return index of first occurrence of minimum over requested axis.
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index 9e11184cc4da7..a2f7a4ba4d126 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -117,6 +117,14 @@ def _nanmean(values, axis=None, skipna=True):
the_mean = the_sum / count if count > 0 else np.nan
return the_mean
+def weighted_nanmean(values, weights, axis=None, skipna=True):
+ if values.ndim == 1:
+ return algos.weighted_nanmean_1d(values, weights, skipna)
+ elif values.ndim == 2 and weights.ndim == 1:
+ return algos.weighted_nanmean_2d_1d_weights(values, weights, axis,
+ skipna)
+ else:
+ raise NotImplementedError()
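The skipna semantics of the new weighted mean can be sketched with plain NumPy. This is a minimal sketch of the 1-d case only (not the Cython implementation): pairs where either the value or the weight is NaN are dropped, and a zero total weight yields NaN.

```python
import numpy as np

def weighted_nanmean_1d(values, weights, skipna=True):
    """Weighted mean of `values`; with skipna=True, positions where
    either the value or the weight is NaN are excluded. Returns NaN
    when the remaining total weight is zero."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    if len(values) != len(weights):
        raise ValueError('Data and weights must be the same size')
    if skipna:
        mask = ~np.isnan(values) & ~np.isnan(weights)
        values, weights = values[mask], weights[mask]
    denom = weights.sum()
    if denom == 0:
        return np.nan
    return (values * weights).sum() / denom
```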
def _nanmedian(values, axis=None, skipna=True):
def get_median(x):
@@ -490,7 +498,6 @@ def f(x, y):
naneq = make_nancomp(operator.eq)
nanne = make_nancomp(operator.ne)
-
def unique1d(values):
"""
Hash table-based unique
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 18ccf4b481777..4b391a423a64a 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1299,10 +1299,43 @@ def duplicated(self, take_last=False):
return Series(duplicated, index=self.index, name=self.name)
sum = _make_stat_func(nanops.nansum, 'sum', 'sum')
- mean = _make_stat_func(nanops.nanmean, 'mean', 'mean')
median = _make_stat_func(nanops.nanmedian, 'median', 'median', extras='')
prod = _make_stat_func(nanops.nanprod, 'product', 'prod', extras='')
+ @Substitution(name='mean (optionally weighted)', shortname='mean',
+ na_action=_doc_exclude_na, extras='')
+ @Appender(_stat_doc)
+ def mean(self, axis=None, dtype=None, out=None, weights=None, skipna=True,
+ level=None):
+ if level is not None:
+
+ if weights is None:
+ return self._agg_by_level('mean', level=level, skipna=skipna)
+
+ #TODO weighted_mean_bin for performance
+ grouped = self.groupby(level=level)
+ if weights.index.nlevels < self.index.nlevels:
+ wgt_f = lambda x: weights.reindex(x.index, level=level)
+ else:
+ weights = weights.reindex(self.index)
+ wgt_f = lambda x: grouped.get_group(x.name, weights)
+
+ return grouped.aggregate(lambda x: x.mean(skipna=skipna,
+ weights=wgt_f(x)))
+
+
+ if weights is None:
+ return nanops.nanmean(self.values, skipna=skipna)
+
+ weights = _align_clean(self, weights, axis)
+ values = self.values
+ if com.is_datetime64_dtype(values):
+ values = values.view('i8')
+ elif not com.is_integer_dtype(values):
+ values = com.ensure_float(values)
+ return nanops.weighted_nanmean(values, weights, axis=axis,
+ skipna=skipna)
+
@Substitution(name='mean absolute deviation', shortname='mad',
na_action=_doc_exclude_na, extras='')
@Appender(_stat_doc)
@@ -3114,6 +3147,29 @@ def _get_fill_func(method):
fill_f = com.backfill_1d
return fill_f
+def _align_clean(data, weights, axis):
+ if isinstance(weights, Series):
+ if weights.index is data.index:
+ weights = weights.values
+ else:
+ weights = weights.reindex(data.index).values
+ elif isinstance(weights, Index):
+ weights = weights.values
+ elif len(weights) != len(data):
+ raise ValueError('ndarray weights must be same size as data')
+ elif isinstance(weights, (list, tuple)):
+ weights = com._asarray_tuplesafe(weights)
+
+ if weights.ndim > 1:
+ raise ValueError('Weights must be 1-dimensional')
+
+ if com.is_datetime64_dtype(weights):
+ weights = weights.view('i8')
+ elif not com.is_integer_dtype(weights):
+ weights = com.ensure_float(weights)
+
+ return weights
+
#----------------------------------------------------------------------
# Add plotting methods to Series
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index ba312f768bbee..227ac9f2faa53 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -6464,6 +6464,35 @@ def test_stat_operators_attempt_obj_array(self):
def test_mean(self):
self._check_stat_op('mean', np.mean)
+ def test_weighted_mean_1d_weights(self):
+ # axis = 0
+ weights = Series(np.random.randint(1, 100,
+ len(self.frame)).astype(float),
+ self.frame.index)
+ weights[np.random.randint(0, len(self.frame), 10)] = np.NaN
+
+ def wgt_mean(data, skipna=True):
+ # check the result is correct
+ wgts = weights.reindex(data.index)
+ if skipna:
+ mask = notnull(wgts) & notnull(data)
+ wgts = wgts[mask]
+ data = data[mask]
+ vals = np.asarray(wgts) * np.asarray(data)
+ return vals.sum() / np.asarray(wgts).sum()
+
+ self._check_stat_op('mean', wgt_mean, axis=0, weights=weights,
+ inject_skipna=True)
+
+ # axis = 1
+ weights = Series(np.random.randint(1, 100,
+ len(self.frame.columns)).astype(float),
+ self.frame.columns)
+ weights[3:4] = np.NaN
+
+ self._check_stat_op('mean', wgt_mean, axis=1, weights=weights,
+ inject_skipna=True)
+
def test_product(self):
self._check_stat_op('product', np.prod)
@@ -6589,7 +6618,7 @@ def alt(x):
assert_series_equal(df.kurt(), df.kurt(level=0).xs('bar'))
def _check_stat_op(self, name, alternative, frame=None, has_skipna=True,
- has_numeric_only=False):
+ has_numeric_only=False, axis=None, **kwargs):
if frame is None:
frame = self.frame
# set some NAs
@@ -6599,29 +6628,47 @@ def _check_stat_op(self, name, alternative, frame=None, has_skipna=True,
f = getattr(frame, name)
if has_skipna:
+ inject_arg = kwargs.pop('inject_skipna', False)
+
def skipna_wrapper(x):
nona = x.dropna().values
if len(nona) == 0:
return np.nan
- return alternative(nona)
+ if inject_arg:
+ return alternative(x, True)
+ else:
+ return alternative(nona)
def wrapper(x):
+ if inject_arg:
+ return alternative(x, False)
return alternative(x.values)
- result0 = f(axis=0, skipna=False)
- result1 = f(axis=1, skipna=False)
- assert_series_equal(result0, frame.apply(wrapper))
- assert_series_equal(result1, frame.apply(wrapper, axis=1),
- check_dtype=False) # HACK: win32
+ if axis is None:
+ result0 = f(axis=0, skipna=False, **kwargs)
+ result1 = f(axis=1, skipna=False, **kwargs)
+ assert_series_equal(result0, frame.apply(wrapper))
+ assert_series_equal(result1, frame.apply(wrapper, axis=1),
+ check_dtype=False) # HACK: win32
+ else:
+ result = f(axis=axis, skipna=False, **kwargs)
+ assert_series_equal(result, frame.apply(wrapper, axis=axis),
+ check_dtype=False)
+
else:
skipna_wrapper = alternative
wrapper = alternative
- result0 = f(axis=0)
- result1 = f(axis=1)
- assert_series_equal(result0, frame.apply(skipna_wrapper))
- assert_series_equal(result1, frame.apply(skipna_wrapper, axis=1),
- check_dtype=False)
+ if axis is None:
+ result0 = f(axis=0, **kwargs)
+ result1 = f(axis=1, **kwargs)
+ assert_series_equal(result0, frame.apply(skipna_wrapper))
+ assert_series_equal(result1, frame.apply(skipna_wrapper, axis=1),
+ check_dtype=False)
+ else:
+ result = f(axis=axis, **kwargs)
+ assert_series_equal(result, frame.apply(skipna_wrapper, axis=axis),
+ check_dtype=False)
# result = f(axis=1)
# comp = frame.apply(alternative, axis=1).reindex(result.index)
@@ -6630,22 +6677,39 @@ def wrapper(x):
self.assertRaises(Exception, f, axis=2)
# make sure works on mixed-type frame
- getattr(self.mixed_frame, name)(axis=0)
- getattr(self.mixed_frame, name)(axis=1)
+ if axis is None:
+ getattr(self.mixed_frame, name)(axis=0, **kwargs)
+ getattr(self.mixed_frame, name)(axis=1, **kwargs)
+ else:
+ getattr(self.mixed_frame, name)(axis=axis, **kwargs)
if has_numeric_only:
- getattr(self.mixed_frame, name)(axis=0, numeric_only=True)
- getattr(self.mixed_frame, name)(axis=1, numeric_only=True)
- getattr(self.frame, name)(axis=0, numeric_only=False)
- getattr(self.frame, name)(axis=1, numeric_only=False)
+ if axis is None:
+ getattr(self.mixed_frame, name)(axis=0, numeric_only=True,
+ **kwargs)
+ getattr(self.mixed_frame, name)(axis=1, numeric_only=True,
+ **kwargs)
+ getattr(self.frame, name)(axis=0, numeric_only=False,
+ **kwargs)
+ getattr(self.frame, name)(axis=1, numeric_only=False,
+ **kwargs)
+ else:
+ getattr(self.mixed_frame, name)(axis=axis, numeric_only=True,
+ **kwargs)
+ getattr(self.frame, name)(axis=axis, numeric_only=False,
+ **kwargs)
# all NA case
if has_skipna:
all_na = self.frame * np.NaN
- r0 = getattr(all_na, name)(axis=0)
- r1 = getattr(all_na, name)(axis=1)
- self.assert_(np.isnan(r0).all())
- self.assert_(np.isnan(r1).all())
+ if axis is None:
+ r0 = getattr(all_na, name)(axis=0, **kwargs)
+ r1 = getattr(all_na, name)(axis=1, **kwargs)
+ self.assert_(np.isnan(r0).all())
+ self.assert_(np.isnan(r1).all())
+ else:
+ rs = getattr(all_na, name)(axis=axis, **kwargs)
+ self.assert_(np.isnan(rs).all())
def test_sum_corner(self):
axis0 = self.empty.sum(0)
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 52f25e5c69042..5496fdf4bd2ab 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -1216,6 +1216,31 @@ def test_sum_inf(self):
def test_mean(self):
self._check_stat_op('mean', np.mean)
+ def test_weighted_mean(self):
+ weights = Series(np.random.randint(1, 100,
+ len(self.series)).astype(float),
+ self.series.index)
+ weights[np.random.randint(0, len(self.series), 10)] = np.NaN
+
+ def wgt_mean(data):
+ # check the result is correct
+ mask = notnull(weights).mul(notnull(data)).dropna().astype(bool)
+ vals = weights * data
+ wgts = weights.reindex(mask.index)
+ return vals.sum() / wgts[mask].sum()
+
+ self._check_stat_op('mean', wgt_mean, weights=weights)
+
+ def test_weighted_mean_dt64_weights(self):
+ # check date range as weights
+ self.series[5:15] = np.NaN
+ weights = bdate_range('1/1/2000', periods=len(self.series))
+ rs = self.series.mean(weights=weights)
+
+ weights = weights.asi8
+ xp = (weights * self.series).sum() / weights[notnull(self.series)].sum()
+ assert_almost_equal(rs, xp)
+
def test_median(self):
self._check_stat_op('median', np.median)
@@ -1316,7 +1341,7 @@ def test_npdiff(self):
r = np.diff(s)
assert_series_equal(Series([nan, 0, 0, 0, nan]), r)
- def _check_stat_op(self, name, alternate, check_objects=False):
+ def _check_stat_op(self, name, alternate, check_objects=False, **kwargs):
import pandas.core.nanops as nanops
def testit():
@@ -1326,15 +1351,15 @@ def testit():
self.series[5:15] = np.NaN
# skipna or no
- self.assert_(notnull(f(self.series)))
- self.assert_(isnull(f(self.series, skipna=False)))
+ self.assert_(notnull(f(self.series, **kwargs)))
+ self.assert_(isnull(f(self.series, skipna=False, **kwargs)))
# check the result is correct
nona = self.series.dropna()
- assert_almost_equal(f(nona), alternate(nona))
+ assert_almost_equal(f(nona, **kwargs), alternate(nona))
allna = self.series * nan
- self.assert_(np.isnan(f(allna)))
+ self.assert_(np.isnan(f(allna, **kwargs)))
# dtype=object with None, it works!
s = Series([1, 2, 3, None, 5])
@@ -1343,7 +1368,7 @@ def testit():
# check date range
if check_objects:
s = Series(bdate_range('1/1/2000', periods=10))
- res = f(s)
+ res = f(s, **kwargs)
exp = alternate(s)
self.assertEqual(res, exp)
| https://api.github.com/repos/pandas-dev/pandas/pulls/2531 | 2012-12-14T03:15:44Z | 2014-03-09T15:06:03Z | null | 2014-06-26T17:08:34Z | |
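The tests in the PR above check a NaN-aware weighted mean by masking out positions where either the weights or the data are missing. A minimal sketch of that masking logic in plain Python, using `None` as the missing-value marker instead of pandas' `NaN` handling (the `weighted_mean` helper is illustrative, not pandas API):

```python
def weighted_mean(values, weights):
    """NaN-aware weighted mean: skip positions where either the
    value or the weight is missing (None), mirroring the
    notnull(weights) & notnull(data) mask in the test."""
    num = 0.0
    den = 0.0
    for v, w in zip(values, weights):
        if v is None or w is None:
            continue  # drop this position entirely
        num += v * w
        den += w
    return num / den

# only the (1.0, 1.0) and (4.0, 1.0) pairs survive the mask
print(weighted_mean([1.0, 2.0, None, 4.0], [1.0, None, 3.0, 1.0]))  # 2.5
```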
DOC: add FAQ section on monkey-patching | diff --git a/doc/source/faq.rst b/doc/source/faq.rst
index 2a3620f8ae50c..16c7972bd3fb3 100644
--- a/doc/source/faq.rst
+++ b/doc/source/faq.rst
@@ -21,6 +21,47 @@ Frequently Asked Questions (FAQ)
import matplotlib.pyplot as plt
plt.close('all')
+.. _ref-monkey-patching:
+
+
+----------------------------------------------------
+
+Pandas is a powerful tool and already has a plethora of data manipulation
+operations implemented, most of them are very fast as well.
+It's very possible however that certain functionality that would make your
+life easier is missing. In that case you have several options:
+
+1) Open an issue on `Github <https://github.com/pydata/pandas/issues/>`_ , explain your need and the sort of functionality you would like to see implemented.
+2) Fork the repo, Implement the functionality yourself and open a PR
+ on Github.
+3) Write a method that performs the operation you are interested in and
+ Monkey-patch the pandas class as part of your IPython profile startup
+ or PYTHONSTARTUP file.
+
+ For example, here is an example of adding an ``just_foo_cols()``
+ method to the dataframe class:
+
+.. ipython:: python
+
+ import pandas as pd
+ def just_foo_cols(self):
+ """Get a list of column names containing the string 'foo'
+
+ """
+ return [x for x in self.columns if 'foo' in x]
+
+ pd.DataFrame.just_foo_cols = just_foo_cols # monkey-patch the DataFrame class
+ df = pd.DataFrame([range(4)],columns= ["A","foo","foozball","bar"])
+ df.just_foo_cols()
+ del pd.DataFrame.just_foo_cols # you can also remove the new method
+
+
+Monkey-patching is usually frowned upon because it makes your code
+less portable and can cause subtle bugs in some circumstances.
+Monkey-patching existing methods is usually a bad idea in that respect.
+When used with proper care, however, it's a very useful tool to have.
+
+
.. _ref-scikits-migration:
Migrating from scikits.timeseries to pandas >= 0.8.0
@@ -171,5 +212,3 @@ interval (``'start'`` or ``'end'``) convention:
data = Series(np.random.randn(50), index=rng)
resampled = data.resample('A', kind='timestamp', convention='end')
resampled.index
-
-
| Added the requisite caveats,
| https://api.github.com/repos/pandas-dev/pandas/pulls/2530 | 2012-12-14T02:58:38Z | 2012-12-14T03:45:44Z | null | 2014-06-26T11:13:16Z |
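The FAQ entry added by this PR demonstrates attaching a method to a class at runtime. The same pattern can be sketched without pandas installed; the `Table` class below is a stand-in for `DataFrame`:

```python
class Table:
    """Toy stand-in for a DataFrame: just holds column names."""
    def __init__(self, columns):
        self.columns = columns

def just_foo_cols(self):
    """Get a list of column names containing the string 'foo'."""
    return [x for x in self.columns if 'foo' in x]

# monkey-patch the method onto the class...
Table.just_foo_cols = just_foo_cols
t = Table(["A", "foo", "foozball", "bar"])
result = t.just_foo_cols()
print(result)  # ['foo', 'foozball']

# ...and remove it again when done
del Table.just_foo_cols
```

As the FAQ text notes, this keeps your code non-portable, so it belongs in a startup file rather than a shared library.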
TST: refactoring to speed up test suite | diff --git a/pandas/stats/tests/test_moments.py b/pandas/stats/tests/test_moments.py
index 788219be8bab6..10ebb6c4aa274 100644
--- a/pandas/stats/tests/test_moments.py
+++ b/pandas/stats/tests/test_moments.py
@@ -20,6 +20,8 @@
class TestMoments(unittest.TestCase):
+ _multiprocess_can_split_ = True
+
_nan_locs = np.arange(20, 40)
_inf_locs = np.array([])
diff --git a/pandas/stats/tests/test_ols.py b/pandas/stats/tests/test_ols.py
index bfad17a18ffce..60ec676439db6 100644
--- a/pandas/stats/tests/test_ols.py
+++ b/pandas/stats/tests/test_ols.py
@@ -10,6 +10,7 @@
import unittest
import nose
import numpy as np
+from numpy.testing.decorators import slow
from pandas import date_range, bdate_range
from pandas.core.panel import Panel
@@ -52,6 +53,8 @@ def _compare_moving_ols(model1, model2):
class TestOLS(BaseTest):
+ _multiprocess_can_split_ = True
+
# TODO: Add tests for OLS y predict
# TODO: Right now we just check for consistency between full-sample and
# rolling/expanding results of the panel OLS. We should also cross-check
@@ -69,12 +72,18 @@ def setUpClass(cls):
if not _have_statsmodels:
raise nose.SkipTest
- def testOLSWithDatasets(self):
+ def testOLSWithDatasets_ccard(self):
self.checkDataSet(sm.datasets.ccard.load(), skip_moving=True)
self.checkDataSet(sm.datasets.cpunish.load(), skip_moving=True)
self.checkDataSet(sm.datasets.longley.load(), skip_moving=True)
self.checkDataSet(sm.datasets.stackloss.load(), skip_moving=True)
+
+ @slow
+ def testOLSWithDatasets_copper(self):
self.checkDataSet(sm.datasets.copper.load())
+
+ @slow
+ def testOLSWithDatasets_scotland(self):
self.checkDataSet(sm.datasets.scotland.load())
# degenerate case fails on some platforms
@@ -233,6 +242,9 @@ def test_ols_object_dtype(self):
summary = repr(model)
class TestOLSMisc(unittest.TestCase):
+
+ _multiprocess_can_split_ = True
+
'''
For test coverage with faux data
'''
@@ -446,6 +458,8 @@ def test_columns_tuples_summary(self):
class TestPanelOLS(BaseTest):
+ _multiprocess_can_split_ = True
+
FIELDS = ['beta', 'df', 'df_model', 'df_resid', 'f_stat',
'p_value', 'r2', 'r2_adj', 'rmse', 'std_err',
't_stat', 'var_beta']
@@ -765,6 +779,8 @@ def _period_slice(panelModel, i):
class TestOLSFilter(unittest.TestCase):
+ _multiprocess_can_split_ = True
+
def setUp(self):
date_index = date_range(datetime(2009, 12, 11), periods=3,
freq=datetools.bday)
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index d252b8eb640b8..b60c1d4407277 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -1738,12 +1738,12 @@ def test_column_select_via_attr(self):
assert_frame_equal(result, expected)
def test_rank_apply(self):
- lev1 = np.array([rands(10) for _ in xrange(1000)], dtype=object)
+ lev1 = np.array([rands(10) for _ in xrange(100)], dtype=object)
lev2 = np.array([rands(10) for _ in xrange(130)], dtype=object)
- lab1 = np.random.randint(0, 1000, size=5000)
- lab2 = np.random.randint(0, 130, size=5000)
+ lab1 = np.random.randint(0, 100, size=500)
+ lab2 = np.random.randint(0, 130, size=500)
- df = DataFrame({'value' : np.random.randn(5000),
+ df = DataFrame({'value' : np.random.randn(500),
'key1' : lev1.take(lab1),
'key2' : lev2.take(lab2)})
diff --git a/pandas/tests/test_panel4d.py b/pandas/tests/test_panel4d.py
index 39d7d5fd08245..088e9a9fa137c 100644
--- a/pandas/tests/test_panel4d.py
+++ b/pandas/tests/test_panel4d.py
@@ -29,13 +29,6 @@ def add_nans(panel4d):
panel = panel4d[label]
tm.add_nans(panel)
-def _skip_if_no_scipy():
- try:
- import scipy.stats
- except ImportError:
- raise nose.SkipTest
-
-
class SafeForLongAndSparse(object):
_multiprocess_can_split_ = True
@@ -74,8 +67,11 @@ def test_max(self):
self._check_stat_op('max', np.max)
def test_skew(self):
- _skip_if_no_scipy()
- from scipy.stats import skew
+ try:
+ from scipy.stats import skew
+ except ImportError:
+ raise nose.SkipTest
+
def this_skew(x):
if len(x) < 3:
return np.nan
@@ -541,7 +537,8 @@ def test_set_value(self):
res3 = self.panel4d.set_value('l4', 'ItemE', 'foobar', 'baz', 5)
self.assert_(com.is_float_dtype(res3['l4'].values))
-class TestPanel4d(unittest.TestCase, CheckIndexing, SafeForSparse, SafeForLongAndSparse):
+class TestPanel4d(unittest.TestCase, CheckIndexing, SafeForSparse,
+ SafeForLongAndSparse):
_multiprocess_can_split_ = True
@@ -550,7 +547,7 @@ def assert_panel4d_equal(cls,x, y):
assert_panel4d_equal(x, y)
def setUp(self):
- self.panel4d = tm.makePanel4D()
+ self.panel4d = tm.makePanel4D(nper=8)
add_nans(self.panel4d)
def test_constructor(self):
@@ -900,7 +897,7 @@ def test_update(self):
[1.5, np.nan, 3.],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]]]])
-
+
other = Panel4D([[[[3.6, 2., np.nan]],
[[np.nan, np.nan, 7]]]])
@@ -914,7 +911,7 @@ def test_update(self):
[1.5, np.nan, 3.],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]]]])
-
+
assert_panel4d_equal(p4d, expected)
def test_filter(self):
@@ -1053,5 +1050,6 @@ def test_to_excel(self):
if __name__ == '__main__':
import nose
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure',
+ '--with-timer'],
exit=False)
diff --git a/pandas/tests/test_panelnd.py b/pandas/tests/test_panelnd.py
index 6cf0afe11c7e2..1debfd54aac3c 100644
--- a/pandas/tests/test_panelnd.py
+++ b/pandas/tests/test_panelnd.py
@@ -27,73 +27,83 @@ def test_4d_construction(self):
# create a 4D
Panel4D = panelnd.create_nd_panel_factory(
- klass_name = 'Panel4D',
- axis_orders = ['labels','items','major_axis','minor_axis'],
- axis_slices = { 'items' : 'items', 'major_axis' : 'major_axis', 'minor_axis' : 'minor_axis' },
+ klass_name = 'Panel4D',
+ axis_orders = ['labels','items','major_axis','minor_axis'],
+ axis_slices = { 'items' : 'items', 'major_axis' : 'major_axis',
+ 'minor_axis' : 'minor_axis' },
slicer = Panel,
axis_aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
stat_axis = 2)
-
+
p4d = Panel4D(dict(L1 = tm.makePanel(), L2 = tm.makePanel()))
def test_4d_construction_alt(self):
# create a 4D
Panel4D = panelnd.create_nd_panel_factory(
- klass_name = 'Panel4D',
- axis_orders = ['labels','items','major_axis','minor_axis'],
- axis_slices = { 'items' : 'items', 'major_axis' : 'major_axis', 'minor_axis' : 'minor_axis' },
+ klass_name = 'Panel4D',
+ axis_orders = ['labels','items','major_axis','minor_axis'],
+ axis_slices = { 'items' : 'items', 'major_axis' : 'major_axis',
+ 'minor_axis' : 'minor_axis' },
slicer = 'Panel',
axis_aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
stat_axis = 2)
-
+
p4d = Panel4D(dict(L1 = tm.makePanel(), L2 = tm.makePanel()))
def test_4d_construction_error(self):
# create a 4D
- self.assertRaises(Exception,
+ self.assertRaises(Exception,
panelnd.create_nd_panel_factory,
- klass_name = 'Panel4D',
- axis_orders = ['labels','items','major_axis','minor_axis'],
- axis_slices = { 'items' : 'items', 'major_axis' : 'major_axis', 'minor_axis' : 'minor_axis' },
+ klass_name = 'Panel4D',
+ axis_orders = ['labels', 'items', 'major_axis',
+ 'minor_axis'],
+ axis_slices = { 'items' : 'items',
+ 'major_axis' : 'major_axis',
+ 'minor_axis' : 'minor_axis' },
slicer = 'foo',
- axis_aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
+ axis_aliases = { 'major' : 'major_axis',
+ 'minor' : 'minor_axis' },
stat_axis = 2)
-
+
def test_5d_construction(self):
# create a 4D
Panel4D = panelnd.create_nd_panel_factory(
- klass_name = 'Panel4D',
- axis_orders = ['labels1','items','major_axis','minor_axis'],
- axis_slices = { 'items' : 'items', 'major_axis' : 'major_axis', 'minor_axis' : 'minor_axis' },
+ klass_name = 'Panel4D',
+ axis_orders = ['labels1','items','major_axis','minor_axis'],
+ axis_slices = { 'items' : 'items', 'major_axis' : 'major_axis',
+ 'minor_axis' : 'minor_axis' },
slicer = Panel,
axis_aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
stat_axis = 2)
-
+
p4d = Panel4D(dict(L1 = tm.makePanel(), L2 = tm.makePanel()))
# create a 5D
Panel5D = panelnd.create_nd_panel_factory(
- klass_name = 'Panel5D',
- axis_orders = [ 'cool1', 'labels1','items','major_axis','minor_axis'],
- axis_slices = { 'labels1' : 'labels1', 'items' : 'items', 'major_axis' : 'major_axis', 'minor_axis' : 'minor_axis' },
+ klass_name = 'Panel5D',
+ axis_orders = [ 'cool1', 'labels1', 'items', 'major_axis',
+ 'minor_axis'],
+ axis_slices = { 'labels1' : 'labels1', 'items' : 'items',
+ 'major_axis' : 'major_axis',
+ 'minor_axis' : 'minor_axis' },
slicer = Panel4D,
axis_aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
stat_axis = 2)
-
+
p5d = Panel5D(dict(C1 = p4d))
# slice back to 4d
results = p5d.ix['C1',:,:,0:3,:]
expected = p4d.ix[:,:,0:3,:]
assert_panel_equal(results['L1'], expected['L1'])
-
+
# test a transpose
#results = p5d.transpose(1,2,3,4,0)
- #expected =
+ #expected =
if __name__ == '__main__':
import nose
diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py
index 547a4ec842729..5df54537c6210 100644
--- a/pandas/tools/tests/test_merge.py
+++ b/pandas/tools/tests/test_merge.py
@@ -36,6 +36,8 @@ def get_test_data(ngroups=NGROUPS, n=N):
class TestMerge(unittest.TestCase):
+ _multiprocess_can_split_ = True
+
def setUp(self):
# aggregate multiple columns
self.df = DataFrame({'key1': get_test_data(),
@@ -923,6 +925,8 @@ def _join_by_hand(a, b, how='left'):
class TestConcatenate(unittest.TestCase):
+ _multiprocess_can_split_ = True
+
def setUp(self):
self.frame = DataFrame(tm.getSeriesData())
self.mixed_frame = self.frame.copy()
diff --git a/pandas/tools/tests/test_pivot.py b/pandas/tools/tests/test_pivot.py
index 423fae8eb3180..133f6757f0a8d 100644
--- a/pandas/tools/tests/test_pivot.py
+++ b/pandas/tools/tests/test_pivot.py
@@ -9,6 +9,8 @@
class TestPivotTable(unittest.TestCase):
+ _multiprocess_can_split_ = True
+
def setUp(self):
self.data = DataFrame({'A' : ['foo', 'foo', 'foo', 'foo',
'bar', 'bar', 'bar', 'bar',
@@ -322,5 +324,3 @@ def test_crosstab_pass_values(self):
import nose
nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
exit=False)
-
-
diff --git a/pandas/tseries/tests/test_resample.py b/pandas/tseries/tests/test_resample.py
index 0ccc5fb3c01f5..0722e4368ea6d 100644
--- a/pandas/tseries/tests/test_resample.py
+++ b/pandas/tseries/tests/test_resample.py
@@ -22,7 +22,6 @@
bday = BDay()
-
def _skip_if_no_pytz():
try:
import pytz
@@ -450,7 +449,7 @@ def test_monthly_resample_error(self):
def test_resample_anchored_intraday(self):
# #1471, #1458
- rng = date_range('1/1/2012', '4/1/2012', freq='10min')
+ rng = date_range('1/1/2012', '4/1/2012', freq='100min')
df = DataFrame(rng.month, index=rng)
result = df.resample('M')
@@ -463,7 +462,7 @@ def test_resample_anchored_intraday(self):
tm.assert_frame_equal(result, exp)
- rng = date_range('1/1/2012', '4/1/2013', freq='10min')
+ rng = date_range('1/1/2012', '4/1/2012', freq='100min')
df = DataFrame(rng.month, index=rng)
result = df.resample('Q')
@@ -590,7 +589,55 @@ def _simple_pts(start, end, freq='D'):
class TestResamplePeriodIndex(unittest.TestCase):
+
_multiprocess_can_split_ = True
+
+ def test_annual_upsample_D_s_f(self):
+ self._check_annual_upsample_cases('D', 'start', 'ffill')
+
+ def test_annual_upsample_D_e_f(self):
+ self._check_annual_upsample_cases('D', 'end', 'ffill')
+
+ def test_annual_upsample_D_s_b(self):
+ self._check_annual_upsample_cases('D', 'start', 'bfill')
+
+ def test_annual_upsample_D_e_b(self):
+ self._check_annual_upsample_cases('D', 'end', 'bfill')
+
+ def test_annual_upsample_B_s_f(self):
+ self._check_annual_upsample_cases('B', 'start', 'ffill')
+
+ def test_annual_upsample_B_e_f(self):
+ self._check_annual_upsample_cases('B', 'end', 'ffill')
+
+ def test_annual_upsample_B_s_b(self):
+ self._check_annual_upsample_cases('B', 'start', 'bfill')
+
+ def test_annual_upsample_B_e_b(self):
+ self._check_annual_upsample_cases('B', 'end', 'bfill')
+
+ def test_annual_upsample_M_s_f(self):
+ self._check_annual_upsample_cases('M', 'start', 'ffill')
+
+ def test_annual_upsample_M_e_f(self):
+ self._check_annual_upsample_cases('M', 'end', 'ffill')
+
+ def test_annual_upsample_M_s_b(self):
+ self._check_annual_upsample_cases('M', 'start', 'bfill')
+
+ def test_annual_upsample_M_e_b(self):
+ self._check_annual_upsample_cases('M', 'end', 'bfill')
+
+ def _check_annual_upsample_cases(self, targ, conv, meth, end='12/31/1991'):
+ for month in MONTHS:
+ ts = _simple_pts('1/1/1990', end, freq='A-%s' % month)
+
+ result = ts.resample(targ, fill_method=meth,
+ convention=conv)
+ expected = result.to_timestamp(targ, how=conv)
+ expected = expected.asfreq(targ, meth).to_period()
+ assert_series_equal(result, expected)
+
def test_basic_downsample(self):
ts = _simple_pts('1/1/1990', '6/30/1995', freq='M')
result = ts.resample('a-dec')
@@ -634,25 +681,12 @@ def test_upsample_with_limit(self):
assert_series_equal(result, expected)
def test_annual_upsample(self):
- targets = ['D', 'B', 'M']
-
- for month in MONTHS:
- ts = _simple_pts('1/1/1990', '12/31/1995', freq='A-%s' % month)
-
- for targ, conv, meth in product(targets, ['start', 'end'],
- ['ffill', 'bfill']):
- result = ts.resample(targ, fill_method=meth,
- convention=conv)
- expected = result.to_timestamp(targ, how=conv)
- expected = expected.asfreq(targ, meth).to_period()
- assert_series_equal(result, expected)
-
+ ts = _simple_pts('1/1/1990', '12/31/1995', freq='A-DEC')
df = DataFrame({'a' : ts})
rdf = df.resample('D', fill_method='ffill')
exp = df['a'].resample('D', fill_method='ffill')
assert_series_equal(rdf['a'], exp)
-
rng = period_range('2000', '2003', freq='A-DEC')
ts = Series([1, 2, 3, 4], index=rng)
@@ -979,7 +1013,7 @@ def test_apply_iteration(self):
result = grouped.apply(f)
self.assertTrue(result.index.equals(df.index))
-
if __name__ == '__main__':
- nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure',
+ '--with-timer'],
exit=False)
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index aeef7fc3594e1..45022353c1ccd 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -309,12 +309,12 @@ def makePeriodSeries(nper=None):
return Series(randn(nper), index=makePeriodIndex(nper))
-def getTimeSeriesData():
- return dict((c, makeTimeSeries()) for c in getCols(K))
+def getTimeSeriesData(nper=None):
+ return dict((c, makeTimeSeries(nper)) for c in getCols(K))
-def makeTimeDataFrame():
- data = getTimeSeriesData()
+def makeTimeDataFrame(nper=None):
+ data = getTimeSeriesData(nper)
return DataFrame(data)
@@ -327,13 +327,14 @@ def makePeriodFrame():
return DataFrame(data)
-def makePanel():
+def makePanel(nper=None):
cols = ['Item' + c for c in string.ascii_uppercase[:K - 1]]
- data = dict((c, makeTimeDataFrame()) for c in cols)
+ data = dict((c, makeTimeDataFrame(nper)) for c in cols)
return Panel.fromDict(data)
-def makePanel4D():
- return Panel4D(dict(l1 = makePanel(), l2 = makePanel(), l3 = makePanel()))
+def makePanel4D(nper=None):
+ return Panel4D(dict(l1 = makePanel(nper), l2 = makePanel(nper),
+ l3 = makePanel(nper)))
def makeCustomIndex(nentries, nlevels, prefix='#', names=False, ndupe_l=None,
idx_type=None):
| also marked a few tests as slow
| https://api.github.com/repos/pandas-dev/pandas/pulls/2529 | 2012-12-13T21:57:31Z | 2012-12-13T22:47:43Z | null | 2014-06-27T11:27:29Z |
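Two mechanisms recur in the diff above: the `_multiprocess_can_split_` class attribute that lets nose distribute a TestCase's methods across workers, and the `slow` decorator that lets long-running tests be excluded. A self-contained sketch of the same idea using only the standard library (the environment-variable gate here is an assumption standing in for `numpy.testing.decorators.slow` / nose's `--attr` filtering):

```python
import os
import unittest

def slow(test_func):
    """Skip a test unless RUN_SLOW_TESTS is set in the environment,
    approximating the @slow marker used in the PR."""
    return unittest.skipUnless(os.environ.get("RUN_SLOW_TESTS"),
                               "slow test")(test_func)

class TestExample(unittest.TestCase):
    # hint to nose's multiprocess plugin that these tests are
    # independent and may run in parallel
    _multiprocess_can_split_ = True

    def test_fast(self):
        self.assertEqual(1 + 1, 2)

    @slow
    def test_slow(self):
        self.assertEqual(sum(range(10 ** 6)), 499999500000)
```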
ENH: vbench support for HDFStore | diff --git a/vb_suite/hdfstore_bench.py b/vb_suite/hdfstore_bench.py
new file mode 100644
index 0000000000000..d43d8b60a9cf0
--- /dev/null
+++ b/vb_suite/hdfstore_bench.py
@@ -0,0 +1,222 @@
+from vbench.api import Benchmark
+from datetime import datetime
+
+start_date = datetime(2012,7,1)
+
+common_setup = """from pandas_vb_common import *
+import os
+
+f = '__test__.h5'
+def remove(f):
+ try:
+ os.remove(f)
+ except:
+ pass
+
+"""
+
+#----------------------------------------------------------------------
+# get from a store
+
+setup1 = common_setup + """
+index = [rands(10) for _ in xrange(100000)]
+df = DataFrame({'float1' : randn(100000),
+ 'float2' : randn(100000)},
+ index=index)
+remove(f)
+store = HDFStore(f)
+store.put('df1',df)
+"""
+
+read_store = Benchmark("store.get('df1')", setup1, cleanup = "store.close()",
+ start_date=start_date)
+
+
+#----------------------------------------------------------------------
+# write to a store
+
+setup2 = common_setup + """
+index = [rands(10) for _ in xrange(100000)]
+df = DataFrame({'float1' : randn(100000),
+ 'float2' : randn(100000)},
+ index=index)
+remove(f)
+store = HDFStore(f)
+"""
+
+write_store = Benchmark("store.put('df2',df)", setup2, cleanup = "store.close()",
+ start_date=start_date)
+
+#----------------------------------------------------------------------
+# get from a store (mixed)
+
+setup3 = common_setup + """
+index = [rands(10) for _ in xrange(100000)]
+df = DataFrame({'float1' : randn(100000),
+ 'float2' : randn(100000),
+ 'string1' : ['foo'] * 100000,
+ 'bool1' : [True] * 100000,
+ 'int1' : np.random.randint(0, 1000000, size=100000)},
+ index=index)
+remove(f)
+store = HDFStore(f)
+store.put('df3',df)
+"""
+
+read_store_mixed = Benchmark("store.get('df3')", setup3, cleanup = "store.close()",
+ start_date=start_date)
+
+
+#----------------------------------------------------------------------
+# write to a store (mixed)
+
+setup4 = common_setup + """
+index = [rands(10) for _ in xrange(100000)]
+df = DataFrame({'float1' : randn(100000),
+ 'float2' : randn(100000),
+ 'string1' : ['foo'] * 100000,
+ 'bool1' : [True] * 100000,
+ 'int1' : np.random.randint(0, 1000000, size=100000)},
+ index=index)
+remove(f)
+store = HDFStore(f)
+"""
+
+write_store_mixed = Benchmark("store.put('df4',df)", setup4, cleanup = "store.close()",
+ start_date=start_date)
+
+#----------------------------------------------------------------------
+# get from a table (mixed)
+
+setup5 = common_setup + """
+index = [rands(10) for _ in xrange(100000)]
+df = DataFrame({'float1' : randn(100000),
+ 'float2' : randn(100000),
+ 'string1' : ['foo'] * 100000,
+ 'bool1' : [True] * 100000,
+ 'int1' : np.random.randint(0, 1000000, size=100000)},
+ index=index)
+
+remove(f)
+store = HDFStore(f)
+store.append('df5',df)
+"""
+
+read_store_table_mixed = Benchmark("store.select('df5')", setup5, cleanup = "store.close()",
+ start_date=start_date)
+
+
+#----------------------------------------------------------------------
+# write to a table (mixed)
+
+setup6 = common_setup + """
+index = [rands(10) for _ in xrange(100000)]
+df = DataFrame({'float1' : randn(100000),
+ 'float2' : randn(100000),
+ 'string1' : ['foo'] * 100000,
+ 'bool1' : [True] * 100000,
+ 'int1' : np.random.randint(0, 100000, size=100000)},
+ index=index)
+remove(f)
+store = HDFStore(f)
+"""
+
+write_store_table_mixed = Benchmark("store.append('df6',df)", setup6, cleanup = "store.close()",
+ start_date=start_date)
+
+#----------------------------------------------------------------------
+# select from a table
+
+setup7 = common_setup + """
+index = [rands(10) for _ in xrange(100000)]
+df = DataFrame({'float1' : randn(100000),
+ 'float2' : randn(100000) },
+ index=index)
+
+remove(f)
+store = HDFStore(f)
+store.append('df7',df)
+"""
+
+read_store_table = Benchmark("store.select('df7')", setup7, cleanup = "store.close()",
+ start_date=start_date)
+
+
+#----------------------------------------------------------------------
+# write to a table
+
+setup8 = common_setup + """
+index = [rands(10) for _ in xrange(100000)]
+df = DataFrame({'float1' : randn(100000),
+ 'float2' : randn(100000) },
+ index=index)
+remove(f)
+store = HDFStore(f)
+"""
+
+write_store_table = Benchmark("store.append('df8',df)", setup8, cleanup = "store.close()",
+ start_date=start_date)
+
+#----------------------------------------------------------------------
+# get from a table (wide)
+
+setup9 = common_setup + """
+df = DataFrame(np.random.randn(100000,200))
+
+remove(f)
+store = HDFStore(f)
+store.append('df9',df)
+"""
+
+read_store_table_wide = Benchmark("store.select('df9')", setup9, cleanup = "store.close()",
+ start_date=start_date)
+
+
+#----------------------------------------------------------------------
+# write to a table (wide)
+
+setup10 = common_setup + """
+df = DataFrame(np.random.randn(100000,200))
+
+remove(f)
+store = HDFStore(f)
+"""
+
+write_store_table_wide = Benchmark("store.append('df10',df)", setup10, cleanup = "store.close()",
+ start_date=start_date)
+
+#----------------------------------------------------------------------
+# get from a table (wide) (indexed)
+
+setup11 = common_setup + """
+index = date_range('1/1/2000', periods = 100000)
+df = DataFrame(np.random.randn(100000,200), index = index)
+
+remove(f)
+store = HDFStore(f)
+store.append('df11',df)
+store.create_table_index('df11')
+"""
+
+query_store_table_wide = Benchmark("store.select('df11', [ ('index', '>', df.index[10000]), ('index', '<', df.index[15000]) ])", setup11, cleanup = "store.close()",
+ start_date=start_date)
+
+
+#----------------------------------------------------------------------
+# query from a table (indexed)
+
+setup12 = common_setup + """
+index = date_range('1/1/2000', periods = 100000)
+df = DataFrame({'float1' : randn(100000),
+ 'float2' : randn(100000) },
+ index=index)
+
+remove(f)
+store = HDFStore(f)
+store.append('df12',df)
+store.create_table_index('df12')
+"""
+
+query_store_table = Benchmark("store.select('df12', [ ('index', '>', df.index[10000]), ('index', '<', df.index[15000]) ])", setup12, cleanup = "store.close()",
+ start_date=start_date)
+
diff --git a/vb_suite/suite.py b/vb_suite/suite.py
index 4d38388548984..e44eda802dc71 100644
--- a/vb_suite/suite.py
+++ b/vb_suite/suite.py
@@ -12,6 +12,7 @@
'index_object',
'indexing',
'io_bench',
+ 'hdfstore_bench',
'join_merge',
'miscellaneous',
'panel_ctor',
| added benchmarks to compare (100,000) rows:
- read/write store
- read/write store mixed
- read/write table
- read/write table wide (200 columns)
- read/write table mixed
- query wide/table
| https://api.github.com/repos/pandas-dev/pandas/pulls/2528 | 2012-12-13T21:55:21Z | 2012-12-13T22:52:58Z | null | 2014-06-26T11:13:06Z |
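Each vbench `Benchmark` above pairs a setup string with a timed statement. Stripped of vbench's history tracking, the core measurement can be approximated with the standard library's `timeit` (the `run_benchmark` helper is a rough sketch, not vbench's actual API, and it omits the `cleanup` step the real benchmarks use to close the HDFStore):

```python
import timeit

def run_benchmark(stmt, setup="pass", repeat=3, number=10):
    """Execute `setup`, then time `number` runs of `stmt`,
    repeating `repeat` times and reporting the best repeat
    in seconds -- roughly what a vbench Benchmark measures."""
    return min(timeit.repeat(stmt, setup=setup,
                             repeat=repeat, number=number))

best = run_benchmark("sorted(data)",
                     setup="data = list(range(10000))[::-1]")
print("best time: %.6f s" % best)
```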
A typo for a DataFrame method | diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index cba175025848f..7d89ede62f196 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -444,7 +444,7 @@ maximum value for each column occurred:
tsdf = DataFrame(randn(1000, 3), columns=['A', 'B', 'C'],
index=date_range('1/1/2000', periods=1000))
- tsdf.apply(lambda x: x.index[x.dropna().argmax()])
+ tsdf.apply(lambda x: x.index[x.dropna().idxmax()])
You may also pass additional arguments and keyword arguments to the ``apply``
method. For instance, consider the following function you would like to apply:
| Hi there,
I was reading this useful and very good doc. I saw an 'argmax' instead of 'idxmax' in the 'Essential basic functionality' part.
See you.
Damien
| https://api.github.com/repos/pandas-dev/pandas/pulls/2522 | 2012-12-13T17:58:52Z | 2012-12-13T18:13:17Z | null | 2013-07-13T19:57:54Z |
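The distinction behind this one-word doc fix: `idxmax` returns the index *label* of the maximum, while `argmax` in numpy returns its integer position. The label-returning behavior can be sketched with a plain dict standing in for a labeled Series (the `idxmax` function below is an illustration, not the pandas implementation):

```python
def idxmax(labeled):
    """Return the key (label) whose value is largest -- the
    Series.idxmax idea, as opposed to an integer argmax."""
    return max(labeled, key=labeled.get)

series = {"2000-01-01": 0.5, "2000-01-02": 2.1, "2000-01-03": 1.3}
print(idxmax(series))  # '2000-01-02'
```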
Fixed DataFrame.any() docstring | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 3f555c800d1eb..b73d411354989 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4546,7 +4546,7 @@ def any(self, axis=0, bool_only=None, skipna=True, level=None):
def all(self, axis=0, bool_only=None, skipna=True, level=None):
"""
- Return whether any element is True over requested axis.
+ Return whether all elements are True over requested axis.
%(na_action)s
Parameters
| Used to be the same as DataFrame.all
| https://api.github.com/repos/pandas-dev/pandas/pulls/2519 | 2012-12-13T15:12:09Z | 2012-12-13T15:33:21Z | 2012-12-13T15:33:21Z | 2013-08-20T22:01:02Z |
BUG: df.from_dict should respect OrderedDict | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 95b5f397ae1be..f6893e1b6ac3c 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -30,6 +30,7 @@
from pandas.core.internals import BlockManager, make_block, form_blocks
from pandas.core.series import Series, _radd_compat, _dtype_from_scalar
from pandas.compat.scipy import scoreatpercentile as _quantile
+from pandas.util.compat import OrderedDict
from pandas.util import py3compat
from pandas.util.terminal import get_terminal_size
from pandas.util.decorators import deprecate, Appender, Substitution
@@ -468,7 +469,6 @@ def _init_dict(self, data, index, columns, dtype=None):
Segregate Series based on type and coerce into matrices.
Needs to handle a lot of exceptional cases.
"""
- from pandas.util.compat import OrderedDict
if dtype is not None:
dtype = np.dtype(dtype)
@@ -883,14 +883,15 @@ def from_dict(cls, data, orient='columns', dtype=None):
-------
DataFrame
"""
- from collections import defaultdict
+ from collections import OrderedDict
orient = orient.lower()
if orient == 'index':
# TODO: this should be seriously cythonized
- new_data = defaultdict(dict)
+ new_data = OrderedDict()
for index, s in data.iteritems():
for col, v in s.iteritems():
+ new_data[col]= new_data.get(col,OrderedDict())
new_data[col][index] = v
data = new_data
elif orient != 'columns': # pragma: no cover
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index ba312f768bbee..ce242758bf3bf 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -28,6 +28,7 @@
assert_series_equal,
assert_frame_equal)
from pandas.util import py3compat
+from pandas.util.compat import OrderedDict
import pandas.util.testing as tm
import pandas.lib as lib
@@ -1799,7 +1800,6 @@ def test_is_mixed_type(self):
self.assert_(self.mixed_frame._is_mixed_type)
def test_constructor_ordereddict(self):
- from pandas.util.compat import OrderedDict
import random
nitems = 100
nums = range(nitems)
@@ -2196,12 +2196,12 @@ def test_constructor_list_of_lists(self):
self.assert_(df['str'].dtype == np.object_)
def test_constructor_list_of_dicts(self):
- data = [{'a': 1.5, 'b': 3, 'c':4, 'd':6},
- {'a': 1.5, 'b': 3, 'd':6},
- {'a': 1.5, 'd':6},
- {},
- {'a': 1.5, 'b': 3, 'c':4},
- {'b': 3, 'c':4, 'd':6}]
+ data = [OrderedDict([['a', 1.5], ['b', 3], ['c',4], ['d',6]]),
+ OrderedDict([['a', 1.5], ['b', 3], ['d',6]]),
+ OrderedDict([['a', 1.5],[ 'd',6]]),
+ OrderedDict(),
+ OrderedDict([['a', 1.5], ['b', 3],[ 'c',4]]),
+ OrderedDict([['b', 3], ['c',4], ['d',6]])]
result = DataFrame(data)
expected = DataFrame.from_dict(dict(zip(range(len(data)), data)),
@@ -2213,9 +2213,9 @@ def test_constructor_list_of_dicts(self):
assert_frame_equal(result, expected)
def test_constructor_list_of_series(self):
- data = [{'a': 1.5, 'b': 3.0, 'c':4.0},
- {'a': 1.5, 'b': 3.0, 'c':6.0}]
- sdict = dict(zip(['x', 'y'], data))
+ data = [OrderedDict([['a', 1.5],[ 'b', 3.0],[ 'c',4.0]]),
+ OrderedDict([['a', 1.5], ['b', 3.0],[ 'c',6.0]])]
+ sdict = OrderedDict(zip(['x', 'y'], data))
idx = Index(['a', 'b', 'c'])
# all named
@@ -2230,21 +2230,21 @@ def test_constructor_list_of_series(self):
Series([1.5, 3, 6], idx)]
result = DataFrame(data2)
- sdict = dict(zip(['x', 'Unnamed 0'], data))
+ sdict = OrderedDict(zip(['x', 'Unnamed 0'], data))
expected = DataFrame.from_dict(sdict, orient='index')
assert_frame_equal(result.sort_index(), expected)
# none named
- data = [{'a': 1.5, 'b': 3, 'c':4, 'd':6},
- {'a': 1.5, 'b': 3, 'd':6},
- {'a': 1.5, 'd':6},
- {},
- {'a': 1.5, 'b': 3, 'c':4},
- {'b': 3, 'c':4, 'd':6}]
+ data = [OrderedDict([['a', 1.5], ['b', 3], ['c',4], ['d',6]]),
+ OrderedDict([['a', 1.5], ['b', 3], ['d',6]]),
+ OrderedDict([['a', 1.5],[ 'd',6]]),
+ OrderedDict(),
+ OrderedDict([['a', 1.5], ['b', 3],[ 'c',4]]),
+ OrderedDict([['b', 3], ['c',4], ['d',6]])]
data = [Series(d) for d in data]
result = DataFrame(data)
- sdict = dict(zip(range(len(data)), data))
+ sdict = OrderedDict(zip(range(len(data)), data))
expected = DataFrame.from_dict(sdict, orient='index')
assert_frame_equal(result, expected.reindex(result.index))
@@ -2255,9 +2255,10 @@ def test_constructor_list_of_series(self):
expected = DataFrame(index=[0])
assert_frame_equal(result, expected)
- data = [{'a': 1.5, 'b': 3.0, 'c':4.0},
- {'a': 1.5, 'b': 3.0, 'c':6.0}]
- sdict = dict(zip(range(len(data)), data))
+ data = [OrderedDict([['a', 1.5],[ 'b', 3.0],[ 'c',4.0]]),
+ OrderedDict([['a', 1.5], ['b', 3.0],[ 'c',6.0]])]
+ sdict = OrderedDict(zip(range(len(data)), data))
+
idx = Index(['a', 'b', 'c'])
data2 = [Series([1.5, 3, 4], idx, dtype='O'),
Series([1.5, 3, 6], idx)]
| related #2517
| https://api.github.com/repos/pandas-dev/pandas/pulls/2518 | 2012-12-13T10:43:30Z | 2012-12-13T22:59:36Z | null | 2014-06-19T10:31:02Z |
add option_context context manager | diff --git a/RELEASE.rst b/RELEASE.rst
index 26f3265029696..890425b729324 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -99,7 +99,7 @@ pandas 0.10.0
instead. This is a legacy hack and can lead to subtle bugs.
- inf/-inf are no longer considered as NA by isnull/notnull. To be clear, this
is legacy cruft from early pandas. This behavior can be globally re-enabled
- using pandas.core.common.use_inf_as_na (#2050, #1919)
+ using the new option ``mode.use_inf_as_null`` (#2050, #1919)
- ``pandas.merge`` will now default to ``sort=False``. For many use cases
sorting the join keys is not necessary, and doing it by default is wasteful
- ``names`` handling in file parsing: if explicit column `names` passed,
diff --git a/doc/source/missing_data.rst b/doc/source/missing_data.rst
index f514139a9170f..b8f3468f82098 100644
--- a/doc/source/missing_data.rst
+++ b/doc/source/missing_data.rst
@@ -61,7 +61,7 @@ arise and we wish to also consider that "missing" or "null".
Until recently, for legacy reasons ``inf`` and ``-inf`` were also
considered to be "null" in computations. This is no longer the case by
-default; use the :func: `~pandas.core.common.use_inf_as_null` function to recover it.
+default; use the ``mode.use_inf_as_null`` option to recover it.
.. _missing.isnull:
diff --git a/pandas/core/common.py b/pandas/core/common.py
index d77d413d39571..386343dc38704 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -94,8 +94,8 @@ def isnull_old(obj):
else:
return obj is None
-def use_inf_as_null(flag):
- '''
+def _use_inf_as_null(key):
+ '''Option change callback for null/inf behaviour
Choose which replacement for numpy.isnan / -numpy.isfinite is used.
Parameters
@@ -113,6 +113,7 @@ def use_inf_as_null(flag):
* http://stackoverflow.com/questions/4859217/
programmatically-creating-variables-in-python/4859312#4859312
'''
+ flag = get_option(key)
if flag == True:
globals()['isnull'] = isnull_old
else:
@@ -1179,7 +1180,7 @@ def in_interactive_session():
returns True if running under python/ipython interactive shell
"""
import __main__ as main
- return not hasattr(main, '__file__') or get_option('test.interactive')
+ return not hasattr(main, '__file__') or get_option('mode.sim_interactive')
def in_qtconsole():
"""
diff --git a/pandas/core/config.py b/pandas/core/config.py
index 9bd62c38fa8e5..87f0b205058cd 100644
--- a/pandas/core/config.py
+++ b/pandas/core/config.py
@@ -25,7 +25,9 @@
- all options in a certain sub - namespace can be reset at once.
- the user can set / get / reset or ask for the description of an option.
- a developer can register and mark an option as deprecated.
-
+- you can register a callback to be invoked when the option value
+ is set or reset. Changing the stored value is considered misuse, but
+ is not verboten.
Implementation
==============
@@ -54,7 +56,7 @@
import warnings
DeprecatedOption = namedtuple('DeprecatedOption', 'key msg rkey removal_ver')
-RegisteredOption = namedtuple('RegisteredOption', 'key defval doc validator')
+RegisteredOption = namedtuple('RegisteredOption', 'key defval doc validator cb')
_deprecated_options = {} # holds deprecated option metdata
_registered_options = {} # holds registered option metdata
@@ -65,37 +67,33 @@
##########################################
# User API
-def _get_option(pat):
+def _get_single_key(pat, silent):
keys = _select_options(pat)
if len(keys) == 0:
- _warn_if_deprecated(pat)
+ if not silent:
+ _warn_if_deprecated(pat)
raise KeyError('No such keys(s)')
if len(keys) > 1:
raise KeyError('Pattern matched multiple keys')
key = keys[0]
- _warn_if_deprecated(key)
+ if not silent:
+ _warn_if_deprecated(key)
key = _translate_key(key)
+ return key
+
+def _get_option(pat, silent=False):
+ key = _get_single_key(pat,silent)
+
# walk the nested dict
root, k = _get_root(key)
-
return root[k]
-def _set_option(pat, value):
-
- keys = _select_options(pat)
- if len(keys) == 0:
- _warn_if_deprecated(pat)
- raise KeyError('No such keys(s)')
- if len(keys) > 1:
- raise KeyError('Pattern matched multiple keys')
- key = keys[0]
-
- _warn_if_deprecated(key)
- key = _translate_key(key)
+def _set_option(pat, value, silent=False):
+ key = _get_single_key(pat,silent)
o = _get_registered_option(key)
if o and o.validator:
@@ -105,6 +103,9 @@ def _set_option(pat, value):
root, k = _get_root(key)
root[k] = value
+ if o.cb:
+ o.cb(key)
+
def _describe_option(pat='', _print_desc=True):
@@ -270,7 +271,29 @@ def __doc__(self):
######################################################
# Functions for use by pandas developers, in addition to User - api
-def register_option(key, defval, doc='', validator=None):
+class option_context(object):
+ def __init__(self,*args):
+ assert len(args) % 2 == 0 and len(args)>=2, \
+ "Need to invoke as option_context(pat,val,[(pat,val),..))."
+ ops = zip(args[::2],args[1::2])
+ undo=[]
+ for pat,val in ops:
+ undo.append((pat,_get_option(pat,silent=True)))
+
+ self.undo = undo
+
+ for pat,val in ops:
+ _set_option(pat,val,silent=True)
+
+ def __enter__(self):
+ pass
+
+ def __exit__(self, *args):
+ if self.undo:
+ for pat, val in self.undo:
+ _set_option(pat, val)
+
+def register_option(key, defval, doc='', validator=None, cb=None):
"""Register an option in the package-wide pandas config object
Parameters
@@ -280,6 +303,9 @@ def register_option(key, defval, doc='', validator=None):
doc - a string description of the option
validator - a function of a single argument, should raise `ValueError` if
called with a value which is not a legal value for the option.
+ cb - a function of a single argument "key", which is called
+ immediately after an option value is set/reset. key is
+ the full name of the option.
Returns
-------
@@ -321,7 +347,7 @@ def register_option(key, defval, doc='', validator=None):
# save the option metadata
_registered_options[key] = RegisteredOption(key=key, defval=defval,
- doc=doc, validator=validator)
+ doc=doc, validator=validator,cb=cb)
def deprecate_option(key, msg=None, rkey=None, removal_ver=None):
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index a854c012226cf..b2443e5b0b14e 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -139,10 +139,27 @@
cf.register_option('expand_frame_repr', True, pc_expand_repr_doc)
cf.register_option('line_width', 80, pc_line_width_doc)
-tc_interactive_doc="""
+tc_sim_interactive_doc="""
: boolean
Default False
Whether to simulate interactive mode for purposes of testing
"""
-with cf.config_prefix('test'):
- cf.register_option('interactive', False, tc_interactive_doc)
+with cf.config_prefix('mode'):
+ cf.register_option('sim_interactive', False, tc_sim_interactive_doc)
+
+use_inf_as_null_doc="""
+: boolean
+ True means treat None, NaN, INF, -INF as null (old way),
+ False means None and NaN are null, but INF, -INF are not null
+ (new way).
+"""
+
+# we don't want to start importing everything at the global context level
+# or we'll hit circular deps.
+def use_inf_as_null_cb(key):
+ from pandas.core.common import _use_inf_as_null
+ _use_inf_as_null(key)
+
+with cf.config_prefix('mode'):
+ cf.register_option('use_inf_as_null', False, use_inf_as_null_doc,
+ cb=use_inf_as_null_cb)
diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py
index e8be2a5cc9c4c..e86df7204ab5c 100644
--- a/pandas/tests/test_common.py
+++ b/pandas/tests/test_common.py
@@ -5,9 +5,10 @@
import unittest
from pandas import Series, DataFrame, date_range, DatetimeIndex
-from pandas.core.common import notnull, isnull, use_inf_as_null
+from pandas.core.common import notnull, isnull
import pandas.core.common as com
import pandas.util.testing as tm
+import pandas.core.config as cf
import numpy as np
@@ -29,20 +30,19 @@ def test_notnull():
assert not notnull(None)
assert not notnull(np.NaN)
- use_inf_as_null(False)
- assert notnull(np.inf)
- assert notnull(-np.inf)
+ with cf.option_context("mode.use_inf_as_null",False):
+ assert notnull(np.inf)
+ assert notnull(-np.inf)
- use_inf_as_null(True)
- assert not notnull(np.inf)
- assert not notnull(-np.inf)
+ with cf.option_context("mode.use_inf_as_null",True):
+ assert not notnull(np.inf)
+ assert not notnull(-np.inf)
- use_inf_as_null(False)
-
- float_series = Series(np.random.randn(5))
- obj_series = Series(np.random.randn(5), dtype=object)
- assert(isinstance(notnull(float_series), Series))
- assert(isinstance(notnull(obj_series), Series))
+ with cf.option_context("mode.use_inf_as_null",False):
+ float_series = Series(np.random.randn(5))
+ obj_series = Series(np.random.randn(5), dtype=object)
+ assert(isinstance(notnull(float_series), Series))
+ assert(isinstance(notnull(obj_series), Series))
def test_isnull():
assert not isnull(1.)
diff --git a/pandas/tests/test_config.py b/pandas/tests/test_config.py
index 814281dfc25e9..14d0a115c6c20 100644
--- a/pandas/tests/test_config.py
+++ b/pandas/tests/test_config.py
@@ -282,6 +282,46 @@ def test_config_prefix(self):
self.assertEqual(self.cf.get_option('a'), 1)
self.assertEqual(self.cf.get_option('b'), 2)
+ def test_callback(self):
+ k=[None]
+ v=[None]
+ def callback(key):
+ k.append(key)
+ v.append(self.cf.get_option(key))
+
+ self.cf.register_option('d.a', 'foo',cb=callback)
+ self.cf.register_option('d.b', 'foo',cb=callback)
+
+ del k[-1],v[-1]
+ self.cf.set_option("d.a","fooz")
+ self.assertEqual(k[-1],"d.a")
+ self.assertEqual(v[-1],"fooz")
+
+ del k[-1],v[-1]
+ self.cf.set_option("d.b","boo")
+ self.assertEqual(k[-1],"d.b")
+ self.assertEqual(v[-1],"boo")
+
+ del k[-1],v[-1]
+ self.cf.reset_option("d.b")
+ self.assertEqual(k[-1],"d.b")
+
+
+ def test_set_ContextManager(self):
+ def eq(val):
+ self.assertEqual(self.cf.get_option("a"),val)
+
+ self.cf.register_option('a',0)
+ eq(0)
+ with self.cf.option_context("a",15):
+ eq(15)
+ with self.cf.option_context("a",25):
+ eq(25)
+ eq(15)
+ eq(0)
+
+ self.cf.set_option("a",17)
+ eq(17)
# fmt.reset_printoptions and fmt.set_printoptions were altered
# to use core.config, test_format exercises those paths.
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 8b4866a643dde..0caef2cd58ef3 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -21,7 +21,7 @@
import pandas
import pandas as pd
from pandas.core.config import (set_option, get_option,
- reset_option)
+ option_context, reset_option)
_frame = DataFrame(tm.getSeriesData())
@@ -75,26 +75,26 @@ def test_repr_tuples(self):
def test_repr_truncation(self):
max_len = 20
- set_option("print.max_colwidth", max_len)
- df = DataFrame({'A': np.random.randn(10),
- 'B': [tm.rands(np.random.randint(max_len - 1,
- max_len + 1)) for i in range(10)]})
- r = repr(df)
- r = r[r.find('\n') + 1:]
-
- _strlen = fmt._strlen_func()
-
- for line, value in zip(r.split('\n'), df['B']):
- if _strlen(value) + 1 > max_len:
- self.assert_('...' in line)
- else:
- self.assert_('...' not in line)
+ with option_context("print.max_colwidth", max_len):
+ df = DataFrame({'A': np.random.randn(10),
+ 'B': [tm.rands(np.random.randint(max_len - 1,
+ max_len + 1)) for i in range(10)]})
+ r = repr(df)
+ r = r[r.find('\n') + 1:]
+
+ _strlen = fmt._strlen_func()
+
+ for line, value in zip(r.split('\n'), df['B']):
+ if _strlen(value) + 1 > max_len:
+ self.assert_('...' in line)
+ else:
+ self.assert_('...' not in line)
- set_option("print.max_colwidth", 999999)
- self.assert_('...' not in repr(df))
+ with option_context("print.max_colwidth", 999999):
+ self.assert_('...' not in repr(df))
- set_option("print.max_colwidth", max_len + 2)
- self.assert_('...' not in repr(df))
+ with option_context("print.max_colwidth", max_len + 2):
+ self.assert_('...' not in repr(df))
def test_repr_should_return_str (self):
"""
@@ -112,11 +112,10 @@ def test_repr_should_return_str (self):
self.assertTrue(type(df.__repr__() == str)) # both py2 / 3
def test_repr_no_backslash(self):
- pd.set_option('test.interactive', True)
- df = DataFrame(np.random.randn(10, 4))
+ with option_context('mode.sim_interactive', True):
+ df = DataFrame(np.random.randn(10, 4))
+ self.assertTrue('\\' not in repr(df))
- self.assertTrue('\\' not in repr(df))
- pd.reset_option('test.interactive')
def test_to_string_repr_unicode(self):
buf = StringIO()
@@ -409,132 +408,119 @@ def test_frame_info_encoding(self):
fmt.set_printoptions(max_rows=200)
def test_wide_repr(self):
- set_option('test.interactive', True)
- col = lambda l, k: [tm.rands(k) for _ in xrange(l)]
- df = DataFrame([col(20, 25) for _ in range(10)])
- set_option('print.expand_frame_repr', False)
- rep_str = repr(df)
- set_option('print.expand_frame_repr', True)
- wide_repr = repr(df)
- self.assert_(rep_str != wide_repr)
-
- set_option('print.line_width', 120)
- wider_repr = repr(df)
- self.assert_(len(wider_repr) < len(wide_repr))
+ with option_context('mode.sim_interactive', True):
+ col = lambda l, k: [tm.rands(k) for _ in xrange(l)]
+ df = DataFrame([col(20, 25) for _ in range(10)])
+ set_option('print.expand_frame_repr', False)
+ rep_str = repr(df)
+ set_option('print.expand_frame_repr', True)
+ wide_repr = repr(df)
+ self.assert_(rep_str != wide_repr)
+
+ with option_context('print.line_width', 120):
+ wider_repr = repr(df)
+ self.assert_(len(wider_repr) < len(wide_repr))
reset_option('print.expand_frame_repr')
- set_option('test.interactive', False)
- set_option('print.line_width', 80)
def test_wide_repr_wide_columns(self):
- set_option('test.interactive', True)
- df = DataFrame(randn(5, 3), columns=['a' * 90, 'b' * 90, 'c' * 90])
- rep_str = repr(df)
+ with option_context('mode.sim_interactive', True):
+ df = DataFrame(randn(5, 3), columns=['a' * 90, 'b' * 90, 'c' * 90])
+ rep_str = repr(df)
- self.assert_(len(rep_str.splitlines()) == 20)
- reset_option('test.interactive')
+ self.assert_(len(rep_str.splitlines()) == 20)
def test_wide_repr_named(self):
- set_option('test.interactive', True)
- col = lambda l, k: [tm.rands(k) for _ in xrange(l)]
- df = DataFrame([col(20, 25) for _ in range(10)])
- df.index.name = 'DataFrame Index'
- set_option('print.expand_frame_repr', False)
+ with option_context('mode.sim_interactive', True):
+ col = lambda l, k: [tm.rands(k) for _ in xrange(l)]
+ df = DataFrame([col(20, 25) for _ in range(10)])
+ df.index.name = 'DataFrame Index'
+ set_option('print.expand_frame_repr', False)
- rep_str = repr(df)
- set_option('print.expand_frame_repr', True)
- wide_repr = repr(df)
- self.assert_(rep_str != wide_repr)
+ rep_str = repr(df)
+ set_option('print.expand_frame_repr', True)
+ wide_repr = repr(df)
+ self.assert_(rep_str != wide_repr)
- set_option('print.line_width', 120)
- wider_repr = repr(df)
- self.assert_(len(wider_repr) < len(wide_repr))
+ with option_context('print.line_width', 120):
+ wider_repr = repr(df)
+ self.assert_(len(wider_repr) < len(wide_repr))
- for line in wide_repr.splitlines()[1::13]:
- self.assert_('DataFrame Index' in line)
+ for line in wide_repr.splitlines()[1::13]:
+ self.assert_('DataFrame Index' in line)
reset_option('print.expand_frame_repr')
- set_option('test.interactive', False)
- set_option('print.line_width', 80)
def test_wide_repr_multiindex(self):
- set_option('test.interactive', True)
- col = lambda l, k: [tm.rands(k) for _ in xrange(l)]
- midx = pandas.MultiIndex.from_arrays([np.array(col(10, 5)),
- np.array(col(10, 5))])
- df = DataFrame([col(20, 25) for _ in range(10)],
- index=midx)
- df.index.names = ['Level 0', 'Level 1']
- set_option('print.expand_frame_repr', False)
- rep_str = repr(df)
- set_option('print.expand_frame_repr', True)
- wide_repr = repr(df)
- self.assert_(rep_str != wide_repr)
-
- set_option('print.line_width', 120)
- wider_repr = repr(df)
- self.assert_(len(wider_repr) < len(wide_repr))
-
- for line in wide_repr.splitlines()[1::13]:
- self.assert_('Level 0 Level 1' in line)
+ with option_context('mode.sim_interactive', True):
+ col = lambda l, k: [tm.rands(k) for _ in xrange(l)]
+ midx = pandas.MultiIndex.from_arrays([np.array(col(10, 5)),
+ np.array(col(10, 5))])
+ df = DataFrame([col(20, 25) for _ in range(10)],
+ index=midx)
+ df.index.names = ['Level 0', 'Level 1']
+ set_option('print.expand_frame_repr', False)
+ rep_str = repr(df)
+ set_option('print.expand_frame_repr', True)
+ wide_repr = repr(df)
+ self.assert_(rep_str != wide_repr)
+
+ with option_context('print.line_width', 120):
+ wider_repr = repr(df)
+ self.assert_(len(wider_repr) < len(wide_repr))
+
+ for line in wide_repr.splitlines()[1::13]:
+ self.assert_('Level 0 Level 1' in line)
reset_option('print.expand_frame_repr')
- set_option('test.interactive', False)
- set_option('print.line_width', 80)
def test_wide_repr_multiindex_cols(self):
- set_option('test.interactive', True)
- col = lambda l, k: [tm.rands(k) for _ in xrange(l)]
- midx = pandas.MultiIndex.from_arrays([np.array(col(10, 5)),
- np.array(col(10, 5))])
- mcols = pandas.MultiIndex.from_arrays([np.array(col(20, 3)),
- np.array(col(20, 3))])
- df = DataFrame([col(20, 25) for _ in range(10)],
- index=midx, columns=mcols)
- df.index.names = ['Level 0', 'Level 1']
- set_option('print.expand_frame_repr', False)
- rep_str = repr(df)
- set_option('print.expand_frame_repr', True)
- wide_repr = repr(df)
- self.assert_(rep_str != wide_repr)
-
- set_option('print.line_width', 120)
- wider_repr = repr(df)
- self.assert_(len(wider_repr) < len(wide_repr))
-
- self.assert_(len(wide_repr.splitlines()) == 14 * 10 - 1)
+ with option_context('mode.sim_interactive', True):
+ col = lambda l, k: [tm.rands(k) for _ in xrange(l)]
+ midx = pandas.MultiIndex.from_arrays([np.array(col(10, 5)),
+ np.array(col(10, 5))])
+ mcols = pandas.MultiIndex.from_arrays([np.array(col(20, 3)),
+ np.array(col(20, 3))])
+ df = DataFrame([col(20, 25) for _ in range(10)],
+ index=midx, columns=mcols)
+ df.index.names = ['Level 0', 'Level 1']
+ set_option('print.expand_frame_repr', False)
+ rep_str = repr(df)
+ set_option('print.expand_frame_repr', True)
+ wide_repr = repr(df)
+ self.assert_(rep_str != wide_repr)
+
+ with option_context('print.line_width', 120):
+ wider_repr = repr(df)
+ self.assert_(len(wider_repr) < len(wide_repr))
+ self.assert_(len(wide_repr.splitlines()) == 14 * 10 - 1)
reset_option('print.expand_frame_repr')
- set_option('test.interactive', False)
- set_option('print.line_width', 80)
def test_wide_repr_unicode(self):
- set_option('test.interactive', True)
- col = lambda l, k: [tm.randu(k) for _ in xrange(l)]
- df = DataFrame([col(20, 25) for _ in range(10)])
- set_option('print.expand_frame_repr', False)
- rep_str = repr(df)
- set_option('print.expand_frame_repr', True)
- wide_repr = repr(df)
- self.assert_(rep_str != wide_repr)
-
- set_option('print.line_width', 120)
- wider_repr = repr(df)
- self.assert_(len(wider_repr) < len(wide_repr))
+ with option_context('mode.sim_interactive', True):
+ col = lambda l, k: [tm.randu(k) for _ in xrange(l)]
+ df = DataFrame([col(20, 25) for _ in range(10)])
+ set_option('print.expand_frame_repr', False)
+ rep_str = repr(df)
+ set_option('print.expand_frame_repr', True)
+ wide_repr = repr(df)
+ self.assert_(rep_str != wide_repr)
+
+ with option_context('print.line_width', 120):
+ wider_repr = repr(df)
+ self.assert_(len(wider_repr) < len(wide_repr))
reset_option('print.expand_frame_repr')
- set_option('test.interactive', False)
- set_option('print.line_width', 80)
- def test_wide_repr_wide_long_columns(self):
- set_option('test.interactive', True)
- df = DataFrame({'a': ['a'*30, 'b'*30], 'b': ['c'*70, 'd'*80]})
+ def test_wide_repr_wide_long_columns(self):
+ with option_context('mode.sim_interactive', True):
+ df = DataFrame({'a': ['a'*30, 'b'*30], 'b': ['c'*70, 'd'*80]})
- result = repr(df)
- self.assertTrue('ccccc' in result)
- self.assertTrue('ddddd' in result)
- set_option('test.interactive', False)
+ result = repr(df)
+ self.assertTrue('ccccc' in result)
+ self.assertTrue('ddddd' in result)
def test_to_string(self):
from pandas import read_table
| #2507 (partial)
Best to merge this soonish to save constant rebasing and conflicts.
Rebased on top of #2504.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2516 | 2012-12-13T08:15:27Z | 2012-12-13T14:50:39Z | 2012-12-13T14:50:39Z | 2014-07-11T02:59:46Z |
BUG: fix type spec bug in roll_window | diff --git a/ci/install.sh b/ci/install.sh
index 139382960b81a..089c79aeb55cd 100755
--- a/ci/install.sh
+++ b/ci/install.sh
@@ -39,6 +39,7 @@ if [ x"$FULL_DEPS" == x"true" ]; then
pip install $PIP_ARGS --use-mirrors openpyxl pytz matplotlib;
pip install $PIP_ARGS --use-mirrors xlrd xlwt;
+ pip install $PIP_ARGS 'http://downloads.sourceforge.net/project/pytseries/scikits.timeseries/0.91.3/scikits.timeseries-0.91.3.tar.gz?r='
fi
if [ x"$VBENCH" == x"true" ]; then
diff --git a/pandas/algos.pyx b/pandas/algos.pyx
index 0edc0c167ba10..664067c55222c 100644
--- a/pandas/algos.pyx
+++ b/pandas/algos.pyx
@@ -1712,15 +1712,14 @@ def roll_window(ndarray[float64_t, ndim=1, cast=True] input,
Assume len(weights) << len(input)
"""
cdef:
- ndarray[double_t] output, tot_wgt
- ndarray[int64_t] counts
+ ndarray[double_t] output, tot_wgt, counts
Py_ssize_t in_i, win_i, win_n, win_k, in_n, in_k
float64_t val_in, val_win, c, w
in_n = len(input)
win_n = len(weights)
output = np.zeros(in_n, dtype=float)
- counts = np.zeros(in_n, dtype=int)
+ counts = np.zeros(in_n, dtype=float)
if avg:
tot_wgt = np.zeros(in_n, dtype=float)
diff --git a/pandas/stats/moments.py b/pandas/stats/moments.py
index 4fea4c4b24180..7f47a9843026b 100644
--- a/pandas/stats/moments.py
+++ b/pandas/stats/moments.py
@@ -299,8 +299,8 @@ def _center_window(rs, window, axis):
na_indexer = [slice(None)] * rs.ndim
na_indexer[axis] = slice(-offset, None)
- rs[rs_indexer] = rs[lead_indexer]
- rs[na_indexer] = np.nan
+ rs[tuple(rs_indexer)] = rs[tuple(lead_indexer)]
+ rs[tuple(na_indexer)] = np.nan
return rs
def _process_data_structure(arg, kill_inf=True):
| https://api.github.com/repos/pandas-dev/pandas/pulls/2508 | 2012-12-12T16:42:33Z | 2012-12-12T16:44:47Z | null | 2014-06-30T15:08:57Z | |
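The `moments.py` hunk above wraps the list-of-slices indexers in `tuple(...)` before use. A short sketch of why (the array shape here is arbitrary, for illustration): NumPy treats a *tuple* of slices as a multidimensional index, whereas indexing with a *list* of slices is ambiguous and deprecated in newer NumPy releases.

```python
import numpy as np

rs = np.arange(12.0).reshape(3, 4)

# Build an indexer one axis at a time, as _center_window does.
indexer = [slice(None)] * rs.ndim
indexer[0] = slice(-1, None)      # select the last row along axis 0

# Converting to a tuple makes this the supported multidimensional index,
# equivalent to rs[-1:, :] = np.nan.
rs[tuple(indexer)] = np.nan

assert np.isnan(rs[2]).all()
assert not np.isnan(rs[0]).any()
```

Without the `tuple(...)` conversion, `rs[indexer]` is interpreted as fancy indexing (or raises a deprecation warning/error, depending on the NumPy version), which is the bug the two changed lines fix.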
vivify issue links in RELEASE.rst | diff --git a/RELEASE.rst b/RELEASE.rst
index 26f3265029696..680eaec23cf1d 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -31,7 +31,7 @@ pandas 0.10.0
- Brand new high-performance delimited file parsing engine written in C and
Cython. 50% or better performance in many standard use cases with a
- fraction as much memory usage. (#407, #821)
+ fraction as much memory usage. (GH407_, GH821_)
- Many new file parser (read_csv, read_table) features:
- Support for on-the-fly gzip or bz2 decompression (`compression` option)
@@ -39,7 +39,7 @@ pandas 0.10.0
(`as_recarray=True`)
- `dtype` option: explicit column dtypes
- `usecols` option: specify list of columns to be read from a file. Good
- for reading very wide files with many irrelevant columns (#1216 #926, #2465)
+ for reading very wide files with many irrelevant columns (GH1216_ GH926_, GH2465_)
- Enhanced unicode decoding support via `encoding` option
- `skipinitialspace` dialect option
- Can specify strings to be recognized as True (`true_values`) or False
@@ -51,55 +51,55 @@ pandas 0.10.0
options)
- Substantially improved performance in the parsing of integers with
thousands markers and lines with comments
- - Easy of European (and other) decimal formats (`decimal` option) (#584, #2466)
- - Custom line terminators (e.g. lineterminator='~') (#2457)
- - Handling of no trailing commas in CSV files (#2333)
- - Ability to handle fractional seconds in date_converters (#2209)
- - read_csv allow scalar arg to na_values (#1944)
- - Explicit column dtype specification in read_* functions (#1858)
- - Easier CSV dialect specification (#1743)
- - Improve parser performance when handling special characters (#1204)
-
- - Google Analytics API integration with easy oauth2 workflow (#2283)
- - Add error handling to Series.str.encode/decode (#2276)
- - Add ``where`` and ``mask`` to Series (#2337)
- - Grouped histogram via `by` keyword in Series/DataFrame.hist (#2186)
+ - Easy of European (and other) decimal formats (`decimal` option) (GH584_, GH2466_)
+ - Custom line terminators (e.g. lineterminator='~') (GH2457_)
+ - Handling of no trailing commas in CSV files (GH2333_)
+ - Ability to handle fractional seconds in date_converters (GH2209_)
+ - read_csv allow scalar arg to na_values (GH1944_)
+ - Explicit column dtype specification in read_* functions (GH1858_)
+ - Easier CSV dialect specification (GH1743_)
+ - Improve parser performance when handling special characters (GH1204_)
+
+ - Google Analytics API integration with easy oauth2 workflow (GH2283_)
+ - Add error handling to Series.str.encode/decode (GH2276_)
+ - Add ``where`` and ``mask`` to Series (GH2337_)
+ - Grouped histogram via `by` keyword in Series/DataFrame.hist (GH2186_)
- Support optional ``min_periods`` keyword in ``corr`` and ``cov``
- for both Series and DataFrame (#2002)
- - Add ``duplicated`` and ``drop_duplicates`` functions to Series (#1923)
+ for both Series and DataFrame (GH2002_)
+ - Add ``duplicated`` and ``drop_duplicates`` functions to Series (GH1923_)
- Add docs for ``HDFStore table`` format
- - 'density' property in `SparseSeries` (#2384)
+ - 'density' property in `SparseSeries` (GH2384_)
- Add ``ffill`` and ``bfill`` convenience functions for forward- and
- backfilling time series data (#2284)
+ backfilling time series data (GH2284_)
- New option configuration system and functions `set_option`, `get_option`,
`describe_option`, and `reset_option`. Deprecate `set_printoptions` and
- `reset_printoptions` (#2393)
+ `reset_printoptions` (GH2393_)
- Wide DataFrames can be viewed more easily in the console with new
`expand_frame_repr` and `line_width` configuration options. This is on by
- default now (#2436)
- - Scikits.timeseries-like moving window functions via ``rolling_window`` (#1270)
+ default now (GH2436_)
+ - Scikits.timeseries-like moving window functions via ``rolling_window`` (GH1270_)
**Experimental Features**
- Add support for Panel4D, a named 4 Dimensional stucture
- Add support for ndpanel factory functions, to create custom,
domain-specific N-Dimensional containers
- - 'density' property in `SparseSeries` (#2384)
+ - 'density' property in `SparseSeries` (GH2384_)
**API Changes**
- The default binning/labeling behavior for ``resample`` has been changed to
`closed='left', label='left'` for daily and lower frequencies. This had
been a large source of confusion for users. See "what's new" page for more
- on this. (#2410)
+ on this. (GH2410_)
- Methods with ``inplace`` option now return None instead of the calling
- (modified) object (#1893)
+ (modified) object (GH1893_)
- The special case DataFrame - TimeSeries doing column-by-column broadcasting
has been deprecated. Users should explicitly do e.g. df.sub(ts, axis=0)
instead. This is a legacy hack and can lead to subtle bugs.
- inf/-inf are no longer considered as NA by isnull/notnull. To be clear, this
is legacy cruft from early pandas. This behavior can be globally re-enabled
- using pandas.core.common.use_inf_as_na (#2050, #1919)
+ using the new option ``mode.use_inf_as_null`` (GH2050_, GH1919_)
- ``pandas.merge`` will now default to ``sort=False``. For many use cases
sorting the join keys is not necessary, and doing it by default is wasteful
- ``names`` handling in file parsing: if explicit column `names` passed,
@@ -108,129 +108,243 @@ pandas 0.10.0
passing ``names``
- Default column names for header-less parsed files (yielded by read_csv,
etc.) are now the integers 0, 1, .... A new argument `prefix` has been
- added; to get the v0.9.x behavior specify ``prefix='X'`` (#2034). This API
+ added; to get the v0.9.x behavior specify ``prefix='X'`` (GH2034_). This API
change was made to make the default column names more consistent with the
DataFrame constructor's default column names when none are specified.
- DataFrame selection using a boolean frame now preserves input shape
- If function passed to Series.apply yields a Series, result will be a
- DataFrame (#2316)
+ DataFrame (GH2316_)
- Values like YES/NO/yes/no will not be considered as boolean by default any
longer in the file parsers. This can be customized using the new
- ``true_values`` and ``false_values`` options (#2360)
+ ``true_values`` and ``false_values`` options (GH2360_)
- `obj.fillna()` is no longer valid; make `method='pad'` no longer the
default option, to be more explicit about what kind of filling to
- perform. Add `ffill/bfill` convenience functions per above (#2284)
+ perform. Add `ffill/bfill` convenience functions per above (GH2284_)
- `HDFStore.keys()` now returns an absolute path-name for each key
- - `to_string()` now always returns a unicode string. (#2224)
+ - `to_string()` now always returns a unicode string. (GH2224_)
- File parsers will not handle NA sentinel values arising from passed
converter functions
**Improvements to existing features**
- - Add ``nrows`` option to DataFrame.from_records for iterators (#1794)
+ - Add ``nrows`` option to DataFrame.from_records for iterators (GH1794_)
- Unstack/reshape algorithm rewrite to avoid high memory use in cases where
the number of observed key-tuples is much smaller than the total possible
- number that could occur (#2278). Also improves performance in most cases.
- - Support duplicate columns in DataFrame.from_records (#2179)
- - Add ``normalize`` option to Series/DataFrame.asfreq (#2137)
+ number that could occur (GH2278_). Also improves performance in most cases.
+ - Support duplicate columns in DataFrame.from_records (GH2179_)
+ - Add ``normalize`` option to Series/DataFrame.asfreq (GH2137_)
- SparseSeries and SparseDataFrame construction from empty and scalar
- values now no longer create dense ndarrays unnecessarily (#2322)
- - ``HDFStore`` now supports hierarchial keys (#2397)
- - Support multiple query selection formats for ``HDFStore tables`` (#1996)
+ values now no longer create dense ndarrays unnecessarily (GH2322_)
+ - ``HDFStore`` now supports hierarchial keys (GH2397_)
+ - Support multiple query selection formats for ``HDFStore tables`` (GH1996_)
- Support ``del store['df']`` syntax to delete HDFStores
- Add multi-dtype support for ``HDFStore tables``
- ``min_itemsize`` parameter can be specified in ``HDFStore table`` creation
- - Indexing support in ``HDFStore tables`` (#698)
- - Add `line_terminator` option to DataFrame.to_csv (#2383)
+ - Indexing support in ``HDFStore tables`` (GH698_)
+ - Add `line_terminator` option to DataFrame.to_csv (GH2383_)
- added implementation of str(x)/unicode(x)/bytes(x) to major pandas data
- structures, which should do the right thing on both py2.x and py3.x. (#2224)
+ structures, which should do the right thing on both py2.x and py3.x. (GH2224_)
- Reduce groupby.apply overhead substantially by low-level manipulation of
- internal NumPy arrays in DataFrames (#535)
+ internal NumPy arrays in DataFrames (GH535_)
- Implement ``value_vars`` in ``melt`` and add ``melt`` to pandas namespace
- (#2412)
+ (GH2412_)
- Added boolean comparison operators to Panel
- - Enable ``Series.str.strip/lstrip/rstrip`` methods to take an argument (#2411)
+ - Enable ``Series.str.strip/lstrip/rstrip`` methods to take an argument (GH2411_)
- The DataFrame ctor now respects column ordering when given
- an OrderedDict (#2455)
- - Assigning DatetimeIndex to Series changes the class to TimeSeries (#2139)
- - Improve performance of .value_counts method on non-integer data (#2480)
- - ``get_level_values`` method for MultiIndex return Index instead of ndarray (#2449)
- - ``convert_to_r_dataframe`` conversion for datetime values (#2351)
- - Allow ``DataFrame.to_csv`` to represent inf and nan differently (#2026)
- - Add ``min_i`` argument to ``nancorr`` to specify minimum required observations (#2002)
- - Add ``inplace`` option to ``sortlevel`` / ``sort`` functions on DataFrame (#1873)
- - Enable DataFrame to accept scalar constructor values like Series (#1856)
- - DataFrame.from_records now takes optional ``size`` parameter (#1794)
- - include iris dataset (#1709)
- - No datetime64 DataFrame column conversion of datetime.datetime with tzinfo (#1581)
- - Micro-optimizations in DataFrame for tracking state of internal consolidation (#217)
- - Format parameter in DataFrame.to_csv (#1525)
- - Partial string slicing for ``DatetimeIndex`` for daily and higher frequencies (#2306)
- - Implement ``col_space`` parameter in ``to_html`` and ``to_string`` in DataFrame (#1000)
- - Override ``Series.tolist`` and box datetime64 types (#2447)
- - Optimize ``unstack`` memory usage by compressing indices (#2278)
- - Fix HTML repr in IPython qtconsole if opening window is small (#2275)
- - Escape more special characters in console output (#2492)
+ an OrderedDict (GH2455_)
+ - Assigning DatetimeIndex to Series changes the class to TimeSeries (GH2139_)
+ - Improve performance of .value_counts method on non-integer data (GH2480_)
+ - ``get_level_values`` method for MultiIndex returns Index instead of ndarray (GH2449_)
+ - ``convert_to_r_dataframe`` conversion for datetime values (GH2351_)
+ - Allow ``DataFrame.to_csv`` to represent inf and nan differently (GH2026_)
+ - Add ``min_i`` argument to ``nancorr`` to specify minimum required observations (GH2002_)
+ - Add ``inplace`` option to ``sortlevel`` / ``sort`` functions on DataFrame (GH1873_)
+ - Enable DataFrame to accept scalar constructor values like Series (GH1856_)
+ - DataFrame.from_records now takes optional ``size`` parameter (GH1794_)
+ - include iris dataset (GH1709_)
+ - No datetime64 DataFrame column conversion of datetime.datetime with tzinfo (GH1581_)
+ - Micro-optimizations in DataFrame for tracking state of internal consolidation (GH217_)
+ - Format parameter in DataFrame.to_csv (GH1525_)
+ - Partial string slicing for ``DatetimeIndex`` for daily and higher frequencies (GH2306_)
+ - Implement ``col_space`` parameter in ``to_html`` and ``to_string`` in DataFrame (GH1000_)
+ - Override ``Series.tolist`` and box datetime64 types (GH2447_)
+ - Optimize ``unstack`` memory usage by compressing indices (GH2278_)
+ - Fix HTML repr in IPython qtconsole if opening window is small (GH2275_)
+ - Escape more special characters in console output (GH2492_)
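
A brief sketch of two of the improvements listed above, the `Series.str.strip` argument (GH2411_) and `melt` in the pandas namespace with ``value_vars`` (GH2412_). Data and column names are illustrative only; behavior is shown as in current pandas:

```python
import pandas as pd

# Series.str.strip can take an argument: strip specific characters, not just whitespace
s = pd.Series(["xx_a_xx", "xx_b"])
stripped = s.str.strip("x")
print(stripped.tolist())  # ['_a_', '_b']

# melt is available in the pandas namespace and accepts value_vars
df = pd.DataFrame({"id": [1, 2], "A": [10, 20], "B": [30, 40]})
long_df = pd.melt(df, id_vars=["id"], value_vars=["A"])
print(long_df["value"].tolist())  # [10, 20]
```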
**Bug fixes**
- - Fix major performance regression in DataFrame.iteritems (#2273)
- - Fixes bug when negative period passed to Series/DataFrame.diff (#2266)
- - Escape tabs in console output to avoid alignment issues (#2038)
+ - Fix major performance regression in DataFrame.iteritems (GH2273_)
+ - Fixes bug when negative period passed to Series/DataFrame.diff (GH2266_)
+ - Escape tabs in console output to avoid alignment issues (GH2038_)
- Properly box datetime64 values when retrieving cross-section from
- mixed-dtype DataFrame (#2272)
- - Fix concatenation bug leading to #2057, #2257
- - Fix regression in Index console formatting (#2319)
- - Box Period data when assigning PeriodIndex to frame column (#2243, #2281)
- - Raise exception on calling reset_index on Series with inplace=True (#2277)
+ mixed-dtype DataFrame (GH2272_)
+ - Fix concatenation bug leading to GH2057_, GH2257_
+ - Fix regression in Index console formatting (GH2319_)
+ - Box Period data when assigning PeriodIndex to frame column (GH2243_, GH2281_)
+ - Raise exception on calling reset_index on Series with inplace=True (GH2277_)
- Enable setting multiple columns in DataFrame with hierarchical columns
- (#2295)
- - Respect dtype=object in DataFrame constructor (#2291)
- - Fix DatetimeIndex.join bug with tz-aware indexes and how='outer' (#2317)
- - pop(...) and del works with DataFrame with duplicate columns (#2349)
+ (GH2295_)
+ - Respect dtype=object in DataFrame constructor (GH2291_)
+ - Fix DatetimeIndex.join bug with tz-aware indexes and how='outer' (GH2317_)
+ - pop(...) and del work with DataFrame with duplicate columns (GH2349_)
- Treat empty strings as NA in date parsing (rather than let dateutil do
- something weird) (#2263)
- - Prevent uint64 -> int64 overflows (#2355)
- - Enable joins between MultiIndex and regular Index (#2024)
+ something weird) (GH2263_)
+ - Prevent uint64 -> int64 overflows (GH2355_)
+ - Enable joins between MultiIndex and regular Index (GH2024_)
- Fix time zone metadata issue when unioning non-overlapping DatetimeIndex
- objects (#2367)
- - Raise/handle int64 overflows in parsers (#2247)
+ objects (GH2367_)
+ - Raise/handle int64 overflows in parsers (GH2247_)
- Deleting of consecutive rows in ``HDFStore tables`` is much faster than before
- Appending on a HDFStore would fail if the table was not first created via ``put``
- - Use `col_space` argument as minimum column width in DataFrame.to_html (#2328)
- - Fix tz-aware DatetimeIndex.to_period (#2232)
- - Fix DataFrame row indexing case with MultiIndex (#2314)
- - Fix to_excel exporting issues with Timestamp objects in index (#2294)
- - Fixes assigning scalars and array to hierarchical column chunk (#1803)
- - Fixed a UnicdeDecodeError with series tidy_repr (#2225)
- - Fixed issued with duplicate keys in an index (#2347, #2380)
- - Fixed issues re: Hash randomization, default on starting w/ py3.3 (#2331)
- - Fixed issue with missing attributes after loading a pickled dataframe (#2431)
- - Fix Timestamp formatting with tzoffset time zone in dateutil 2.1 (#2443)
- - Fix GroupBy.apply issue when using BinGrouper to do ts binning (#2300)
+ - Use `col_space` argument as minimum column width in DataFrame.to_html (GH2328_)
+ - Fix tz-aware DatetimeIndex.to_period (GH2232_)
+ - Fix DataFrame row indexing case with MultiIndex (GH2314_)
+ - Fix to_excel exporting issues with Timestamp objects in index (GH2294_)
+ - Fixes assigning scalars and array to hierarchical column chunk (GH1803_)
+ - Fixed a UnicodeDecodeError with series tidy_repr (GH2225_)
+ - Fixed issues with duplicate keys in an index (GH2347_, GH2380_)
+ - Fixed issues with hash randomization, enabled by default starting with Python 3.3 (GH2331_)
+ - Fixed issue with missing attributes after loading a pickled dataframe (GH2431_)
+ - Fix Timestamp formatting with tzoffset time zone in dateutil 2.1 (GH2443_)
+ - Fix GroupBy.apply issue when using BinGrouper to do ts binning (GH2300_)
- Fix issues resulting from datetime.datetime columns being converted to
- datetime64 when calling DataFrame.apply. (#2374)
- - Raise exception when calling to_panel on non uniquely-indexed frame (#2441)
- - Improved detection of console encoding on IPython zmq frontends (#2458)
- - Preserve time zone when .append-ing two time series (#2260)
+ datetime64 when calling DataFrame.apply. (GH2374_)
+ - Raise exception when calling to_panel on non uniquely-indexed frame (GH2441_)
+ - Improved detection of console encoding on IPython zmq frontends (GH2458_)
+ - Preserve time zone when .append-ing two time series (GH2260_)
- Box timestamps when calling reset_index on time-zone-aware index rather
- than creating a tz-less datetime64 column (#2262)
- - Enable searching non-string columns in DataFrame.filter(like=...) (#2467)
- - Fixed issue with losing nanosecond precision upon conversion to DatetimeIndex(#2252)
- - Handle timezones in Datetime.normalize (#2338)
+ than creating a tz-less datetime64 column (GH2262_)
+ - Enable searching non-string columns in DataFrame.filter(like=...) (GH2467_)
+ - Fixed issue with losing nanosecond precision upon conversion to DatetimeIndex (GH2252_)
+ - Handle timezones in Datetime.normalize (GH2338_)
- Fix test case where dtype specification with endianness causes
- failures on big endian machines (#2318)
- - Fix plotting bug where upsampling causes data to appear shifted in time (#2448)
- - Fix ``read_csv`` failure for UTF-16 with BOM and skiprows(#2298)
- - read_csv with names arg not implicitly setting header=None(#2459)
- - Unrecognized compression mode causes segfault in read_csv(#2474)
- - In read_csv, header=0 and passed names should discard first row(#2269)
- - Correctly route to stdout/stderr in read_table (#2071)
- - Fix exception when Timestamp.to_datetime is called on a Timestamp with tzoffset (#2471)
- - Fixed unintentional conversion of datetime64 to long in groupby.first() (#2133)
- - Union of empty DataFrames now return empty with concatenated index (#2307)
+ failures on big endian machines (GH2318_)
+ - Fix plotting bug where upsampling causes data to appear shifted in time (GH2448_)
+ - Fix ``read_csv`` failure for UTF-16 with BOM and skiprows (GH2298_)
+ - read_csv with names arg not implicitly setting header=None (GH2459_)
+ - Unrecognized compression mode causes segfault in read_csv (GH2474_)
+ - In read_csv, header=0 and passed names should discard first row (GH2269_)
+ - Correctly route to stdout/stderr in read_table (GH2071_)
+ - Fix exception when Timestamp.to_datetime is called on a Timestamp with tzoffset (GH2471_)
+ - Fixed unintentional conversion of datetime64 to long in groupby.first() (GH2133_)
+ - Union of empty DataFrames now returns empty with concatenated index (GH2307_)
- DataFrame.sort_index raises more helpful exception if sorting by column
- with duplicates (#2488)
+ with duplicates (GH2488_)
+
+.. _GH407: https://github.com/pydata/pandas/issues/407
+.. _GH821: https://github.com/pydata/pandas/issues/821
+.. _GH1216: https://github.com/pydata/pandas/issues/1216
+.. _GH926: https://github.com/pydata/pandas/issues/926
+.. _GH2465: https://github.com/pydata/pandas/issues/2465
+.. _GH584: https://github.com/pydata/pandas/issues/584
+.. _GH2466: https://github.com/pydata/pandas/issues/2466
+.. _GH2457: https://github.com/pydata/pandas/issues/2457
+.. _GH2333: https://github.com/pydata/pandas/issues/2333
+.. _GH2209: https://github.com/pydata/pandas/issues/2209
+.. _GH1944: https://github.com/pydata/pandas/issues/1944
+.. _GH1858: https://github.com/pydata/pandas/issues/1858
+.. _GH1743: https://github.com/pydata/pandas/issues/1743
+.. _GH1204: https://github.com/pydata/pandas/issues/1204
+.. _GH2283: https://github.com/pydata/pandas/issues/2283
+.. _GH2276: https://github.com/pydata/pandas/issues/2276
+.. _GH2337: https://github.com/pydata/pandas/issues/2337
+.. _GH2186: https://github.com/pydata/pandas/issues/2186
+.. _GH2002: https://github.com/pydata/pandas/issues/2002
+.. _GH1923: https://github.com/pydata/pandas/issues/1923
+.. _GH2384: https://github.com/pydata/pandas/issues/2384
+.. _GH2284: https://github.com/pydata/pandas/issues/2284
+.. _GH2393: https://github.com/pydata/pandas/issues/2393
+.. _GH2436: https://github.com/pydata/pandas/issues/2436
+.. _GH1270: https://github.com/pydata/pandas/issues/1270
+.. _GH2410: https://github.com/pydata/pandas/issues/2410
+.. _GH1893: https://github.com/pydata/pandas/issues/1893
+.. _GH2050: https://github.com/pydata/pandas/issues/2050
+.. _GH1919: https://github.com/pydata/pandas/issues/1919
+.. _GH2034: https://github.com/pydata/pandas/issues/2034
+.. _GH2316: https://github.com/pydata/pandas/issues/2316
+.. _GH2360: https://github.com/pydata/pandas/issues/2360
+.. _GH2224: https://github.com/pydata/pandas/issues/2224
+.. _GH1794: https://github.com/pydata/pandas/issues/1794
+.. _GH2278: https://github.com/pydata/pandas/issues/2278
+.. _GH2179: https://github.com/pydata/pandas/issues/2179
+.. _GH2137: https://github.com/pydata/pandas/issues/2137
+.. _GH2322: https://github.com/pydata/pandas/issues/2322
+.. _GH2397: https://github.com/pydata/pandas/issues/2397
+.. _GH1996: https://github.com/pydata/pandas/issues/1996
+.. _GH698: https://github.com/pydata/pandas/issues/698
+.. _GH2383: https://github.com/pydata/pandas/issues/2383
+.. _GH535: https://github.com/pydata/pandas/issues/535
+.. _GH2412: https://github.com/pydata/pandas/issues/2412
+.. _GH2411: https://github.com/pydata/pandas/issues/2411
+.. _GH2455: https://github.com/pydata/pandas/issues/2455
+.. _GH2139: https://github.com/pydata/pandas/issues/2139
+.. _GH2480: https://github.com/pydata/pandas/issues/2480
+.. _GH2449: https://github.com/pydata/pandas/issues/2449
+.. _GH2351: https://github.com/pydata/pandas/issues/2351
+.. _GH2026: https://github.com/pydata/pandas/issues/2026
+.. _GH1873: https://github.com/pydata/pandas/issues/1873
+.. _GH1856: https://github.com/pydata/pandas/issues/1856
+.. _GH1709: https://github.com/pydata/pandas/issues/1709
+.. _GH1581: https://github.com/pydata/pandas/issues/1581
+.. _GH217: https://github.com/pydata/pandas/issues/217
+.. _GH1525: https://github.com/pydata/pandas/issues/1525
+.. _GH2306: https://github.com/pydata/pandas/issues/2306
+.. _GH1000: https://github.com/pydata/pandas/issues/1000
+.. _GH2447: https://github.com/pydata/pandas/issues/2447
+.. _GH2275: https://github.com/pydata/pandas/issues/2275
+.. _GH2492: https://github.com/pydata/pandas/issues/2492
+.. _GH2273: https://github.com/pydata/pandas/issues/2273
+.. _GH2266: https://github.com/pydata/pandas/issues/2266
+.. _GH2038: https://github.com/pydata/pandas/issues/2038
+.. _GH2272: https://github.com/pydata/pandas/issues/2272
+.. _GH2057: https://github.com/pydata/pandas/issues/2057
+.. _GH2257: https://github.com/pydata/pandas/issues/2257
+.. _GH2319: https://github.com/pydata/pandas/issues/2319
+.. _GH2243: https://github.com/pydata/pandas/issues/2243
+.. _GH2281: https://github.com/pydata/pandas/issues/2281
+.. _GH2277: https://github.com/pydata/pandas/issues/2277
+.. _GH2295: https://github.com/pydata/pandas/issues/2295
+.. _GH2291: https://github.com/pydata/pandas/issues/2291
+.. _GH2317: https://github.com/pydata/pandas/issues/2317
+.. _GH2349: https://github.com/pydata/pandas/issues/2349
+.. _GH2263: https://github.com/pydata/pandas/issues/2263
+.. _GH2355: https://github.com/pydata/pandas/issues/2355
+.. _GH2024: https://github.com/pydata/pandas/issues/2024
+.. _GH2367: https://github.com/pydata/pandas/issues/2367
+.. _GH2247: https://github.com/pydata/pandas/issues/2247
+.. _GH2328: https://github.com/pydata/pandas/issues/2328
+.. _GH2232: https://github.com/pydata/pandas/issues/2232
+.. _GH2314: https://github.com/pydata/pandas/issues/2314
+.. _GH2294: https://github.com/pydata/pandas/issues/2294
+.. _GH1803: https://github.com/pydata/pandas/issues/1803
+.. _GH2225: https://github.com/pydata/pandas/issues/2225
+.. _GH2347: https://github.com/pydata/pandas/issues/2347
+.. _GH2380: https://github.com/pydata/pandas/issues/2380
+.. _GH2331: https://github.com/pydata/pandas/issues/2331
+.. _GH2431: https://github.com/pydata/pandas/issues/2431
+.. _GH2443: https://github.com/pydata/pandas/issues/2443
+.. _GH2300: https://github.com/pydata/pandas/issues/2300
+.. _GH2374: https://github.com/pydata/pandas/issues/2374
+.. _GH2441: https://github.com/pydata/pandas/issues/2441
+.. _GH2458: https://github.com/pydata/pandas/issues/2458
+.. _GH2260: https://github.com/pydata/pandas/issues/2260
+.. _GH2262: https://github.com/pydata/pandas/issues/2262
+.. _GH2467: https://github.com/pydata/pandas/issues/2467
+.. _GH2252: https://github.com/pydata/pandas/issues/2252
+.. _GH2338: https://github.com/pydata/pandas/issues/2338
+.. _GH2318: https://github.com/pydata/pandas/issues/2318
+.. _GH2448: https://github.com/pydata/pandas/issues/2448
+.. _GH2298: https://github.com/pydata/pandas/issues/2298
+.. _GH2459: https://github.com/pydata/pandas/issues/2459
+.. _GH2474: https://github.com/pydata/pandas/issues/2474
+.. _GH2269: https://github.com/pydata/pandas/issues/2269
+.. _GH2071: https://github.com/pydata/pandas/issues/2071
+.. _GH2471: https://github.com/pydata/pandas/issues/2471
+.. _GH2133: https://github.com/pydata/pandas/issues/2133
+.. _GH2307: https://github.com/pydata/pandas/issues/2307
+.. _GH2488: https://github.com/pydata/pandas/issues/2488
+
pandas 0.9.1
============
@@ -239,88 +353,159 @@ pandas 0.9.1
**New features**
- - Can specify multiple sort orders in DataFrame/Series.sort/sort_index (#928)
- - New `top` and `bottom` options for handling NAs in rank (#1508, #2159)
- - Add `where` and `mask` functions to DataFrame (#2109, #2151)
- - Add `at_time` and `between_time` functions to DataFrame (#2149)
- - Add flexible `pow` and `rpow` methods to DataFrame (#2190)
+ - Can specify multiple sort orders in DataFrame/Series.sort/sort_index (GH928_)
+ - New `top` and `bottom` options for handling NAs in rank (GH1508_, GH2159_)
+ - Add `where` and `mask` functions to DataFrame (GH2109_, GH2151_)
+ - Add `at_time` and `between_time` functions to DataFrame (GH2149_)
+ - Add flexible `pow` and `rpow` methods to DataFrame (GH2190_)
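
The ``where`` / ``mask`` pair (GH2109_, GH2151_) and the flexible ``pow`` method (GH2190_) from the list above can be sketched as follows. The changelog entries add these to DataFrame; they are shown here on a Series for brevity, with made-up data:

```python
import pandas as pd

s = pd.Series([-1, 2, -3])

# where keeps values satisfying the condition and replaces the rest
print(s.where(s > 0, 0).tolist())  # [0, 2, 0]

# mask is the inverse: replace where the condition holds
print(s.mask(s > 0, 0).tolist())  # [-1, 0, -3]

# flexible pow method
print(s.pow(2).tolist())  # [1, 4, 9]
```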
**API Changes**
- Upsampling period index "spans" intervals. Example: annual periods
upsampled to monthly will span all months in each year
- Period.end_time will yield timestamp at last nanosecond in the interval
- (#2124, #2125, #1764)
+ (GH2124_, GH2125_, GH1764_)
- File parsers no longer coerce to float or bool for columns that have custom
- converters specified (#2184)
+ converters specified (GH2184_)
**Improvements to existing features**
- - Time rule inference for week-of-month (e.g. WOM-2FRI) rules (#2140)
+ - Time rule inference for week-of-month (e.g. WOM-2FRI) rules (GH2140_)
- Improve performance of datetime + business day offset with large number of
offset periods
- Improve HTML display of DataFrame objects with hierarchical columns
- - Enable referencing of Excel columns by their column names (#1936)
- - DataFrame.dot can accept ndarrays (#2042)
- - Support negative periods in Panel.shift (#2164)
- - Make .drop(...) work with non-unique indexes (#2101)
- - Improve performance of Series/DataFrame.diff (re: #2087)
- - Support unary ~ (__invert__) in DataFrame (#2110)
- - Turn off pandas-style tick locators and formatters (#2205)
- - DataFrame[DataFrame] uses DataFrame.where to compute masked frame (#2230)
+ - Enable referencing of Excel columns by their column names (GH1936_)
+ - DataFrame.dot can accept ndarrays (GH2042_)
+ - Support negative periods in Panel.shift (GH2164_)
+ - Make .drop(...) work with non-unique indexes (GH2101_)
+ - Improve performance of Series/DataFrame.diff (re: GH2087_)
+ - Support unary ~ (__invert__) in DataFrame (GH2110_)
+ - Turn off pandas-style tick locators and formatters (GH2205_)
+ - DataFrame[DataFrame] uses DataFrame.where to compute masked frame (GH2230_)
**Bug fixes**
- - Fix some duplicate-column DataFrame constructor issues (#2079)
- - Fix bar plot color cycle issues (#2082)
- - Fix off-center grid for stacked bar plots (#2157)
- - Fix plotting bug if inferred frequency is offset with N > 1 (#2126)
- - Implement comparisons on date offsets with fixed delta (#2078)
- - Handle inf/-inf correctly in read_* parser functions (#2041)
+ - Fix some duplicate-column DataFrame constructor issues (GH2079_)
+ - Fix bar plot color cycle issues (GH2082_)
+ - Fix off-center grid for stacked bar plots (GH2157_)
+ - Fix plotting bug if inferred frequency is offset with N > 1 (GH2126_)
+ - Implement comparisons on date offsets with fixed delta (GH2078_)
+ - Handle inf/-inf correctly in read_* parser functions (GH2041_)
- Fix matplotlib unicode interaction bug
- Make WLS r-squared match statsmodels 0.5.0 fixed value
- Fix zero-trimming DataFrame formatting bug
- - Correctly compute/box datetime64 min/max values from Series.min/max (#2083)
- - Fix unstacking edge case with unrepresented groups (#2100)
- - Fix Series.str failures when using pipe pattern '|' (#2119)
- - Fix pretty-printing of dict entries in Series, DataFrame (#2144)
- - Cast other datetime64 values to nanoseconds in DataFrame ctor (#2095)
- - Alias Timestamp.astimezone to tz_convert, so will yield Timestamp (#2060)
- - Fix timedelta64 formatting from Series (#2165, #2146)
- - Handle None values gracefully in dict passed to Panel constructor (#2075)
- - Box datetime64 values as Timestamp objects in Series/DataFrame.iget (#2148)
- - Fix Timestamp indexing bug in DatetimeIndex.insert (#2155)
- - Use index name(s) (if any) in DataFrame.to_records (#2161)
- - Don't lose index names in Panel.to_frame/DataFrame.to_panel (#2163)
- - Work around length-0 boolean indexing NumPy bug (#2096)
- - Fix partial integer indexing bug in DataFrame.xs (#2107)
- - Fix variety of cut/qcut string-bin formatting bugs (#1978, #1979)
- - Raise Exception when xs view not possible of MultiIndex'd DataFrame (#2117)
- - Fix groupby(...).first() issue with datetime64 (#2133)
- - Better floating point error robustness in some rolling_* functions (#2114)
- - Fix ewma NA handling in the middle of Series (#2128)
- - Fix numerical precision issues in diff with integer data (#2087)
- - Fix bug in MultiIndex.__getitem__ with NA values (#2008)
- - Fix DataFrame.from_records dict-arg bug when passing columns (#2179)
- - Fix Series and DataFrame.diff for integer dtypes (#2087, #2174)
- - Fix bug when taking intersection of DatetimeIndex with empty index (#2129)
- - Pass through timezone information when calling DataFrame.align (#2127)
- - Properly sort when joining on datetime64 values (#2196)
- - Fix indexing bug in which False/True were being coerced to 0/1 (#2199)
- - Many unicode formatting fixes (#2201)
+ - Correctly compute/box datetime64 min/max values from Series.min/max (GH2083_)
+ - Fix unstacking edge case with unrepresented groups (GH2100_)
+ - Fix Series.str failures when using pipe pattern '|' (GH2119_)
+ - Fix pretty-printing of dict entries in Series, DataFrame (GH2144_)
+ - Cast other datetime64 values to nanoseconds in DataFrame ctor (GH2095_)
+ - Alias Timestamp.astimezone to tz_convert, so it will yield a Timestamp (GH2060_)
+ - Fix timedelta64 formatting from Series (GH2165_, GH2146_)
+ - Handle None values gracefully in dict passed to Panel constructor (GH2075_)
+ - Box datetime64 values as Timestamp objects in Series/DataFrame.iget (GH2148_)
+ - Fix Timestamp indexing bug in DatetimeIndex.insert (GH2155_)
+ - Use index name(s) (if any) in DataFrame.to_records (GH2161_)
+ - Don't lose index names in Panel.to_frame/DataFrame.to_panel (GH2163_)
+ - Work around length-0 boolean indexing NumPy bug (GH2096_)
+ - Fix partial integer indexing bug in DataFrame.xs (GH2107_)
+ - Fix variety of cut/qcut string-bin formatting bugs (GH1978_, GH1979_)
+ - Raise Exception when xs view not possible of MultiIndex'd DataFrame (GH2117_)
+ - Fix groupby(...).first() issue with datetime64 (GH2133_)
+ - Better floating point error robustness in some rolling_* functions (GH2114_)
+ - Fix ewma NA handling in the middle of Series (GH2128_)
+ - Fix numerical precision issues in diff with integer data (GH2087_)
+ - Fix bug in MultiIndex.__getitem__ with NA values (GH2008_)
+ - Fix DataFrame.from_records dict-arg bug when passing columns (GH2179_)
+ - Fix Series and DataFrame.diff for integer dtypes (GH2087_, GH2174_)
+ - Fix bug when taking intersection of DatetimeIndex with empty index (GH2129_)
+ - Pass through timezone information when calling DataFrame.align (GH2127_)
+ - Properly sort when joining on datetime64 values (GH2196_)
+ - Fix indexing bug in which False/True were being coerced to 0/1 (GH2199_)
+ - Many unicode formatting fixes (GH2201_)
- Fix improper MultiIndex conversion issue when assigning
- e.g. DataFrame.index (#2200)
- - Fix conversion of mixed-type DataFrame to ndarray with dup columns (#2236)
- - Fix duplicate columns issue (#2218, #2219)
- - Fix SparseSeries.__pow__ issue with NA input (#2220)
- - Fix icol with integer sequence failure (#2228)
- - Fixed resampling tz-aware time series issue (#2245)
- - SparseDataFrame.icol was not returning SparseSeries (#2227, #2229)
- - Enable ExcelWriter to handle PeriodIndex (#2240)
- - Fix issue constructing DataFrame from empty Series with name (#2234)
- - Use console-width detection in interactive sessions only (#1610)
- - Fix parallel_coordinates legend bug with mpl 1.2.0 (#2237)
- - Make tz_localize work in corner case of empty Series (#2248)
+ e.g. DataFrame.index (GH2200_)
+ - Fix conversion of mixed-type DataFrame to ndarray with dup columns (GH2236_)
+ - Fix duplicate columns issue (GH2218_, GH2219_)
+ - Fix SparseSeries.__pow__ issue with NA input (GH2220_)
+ - Fix icol with integer sequence failure (GH2228_)
+ - Fixed resampling tz-aware time series issue (GH2245_)
+ - SparseDataFrame.icol was not returning SparseSeries (GH2227_, GH2229_)
+ - Enable ExcelWriter to handle PeriodIndex (GH2240_)
+ - Fix issue constructing DataFrame from empty Series with name (GH2234_)
+ - Use console-width detection in interactive sessions only (GH1610_)
+ - Fix parallel_coordinates legend bug with mpl 1.2.0 (GH2237_)
+ - Make tz_localize work in corner case of empty Series (GH2248_)
+
+.. _GH928: https://github.com/pydata/pandas/issues/928
+.. _GH1508: https://github.com/pydata/pandas/issues/1508
+.. _GH2159: https://github.com/pydata/pandas/issues/2159
+.. _GH2109: https://github.com/pydata/pandas/issues/2109
+.. _GH2151: https://github.com/pydata/pandas/issues/2151
+.. _GH2149: https://github.com/pydata/pandas/issues/2149
+.. _GH2190: https://github.com/pydata/pandas/issues/2190
+.. _GH2124: https://github.com/pydata/pandas/issues/2124
+.. _GH2125: https://github.com/pydata/pandas/issues/2125
+.. _GH1764: https://github.com/pydata/pandas/issues/1764
+.. _GH2184: https://github.com/pydata/pandas/issues/2184
+.. _GH2140: https://github.com/pydata/pandas/issues/2140
+.. _GH1936: https://github.com/pydata/pandas/issues/1936
+.. _GH2042: https://github.com/pydata/pandas/issues/2042
+.. _GH2164: https://github.com/pydata/pandas/issues/2164
+.. _GH2101: https://github.com/pydata/pandas/issues/2101
+.. _GH2087: https://github.com/pydata/pandas/issues/2087
+.. _GH2110: https://github.com/pydata/pandas/issues/2110
+.. _GH2205: https://github.com/pydata/pandas/issues/2205
+.. _GH2230: https://github.com/pydata/pandas/issues/2230
+.. _GH2079: https://github.com/pydata/pandas/issues/2079
+.. _GH2082: https://github.com/pydata/pandas/issues/2082
+.. _GH2157: https://github.com/pydata/pandas/issues/2157
+.. _GH2126: https://github.com/pydata/pandas/issues/2126
+.. _GH2078: https://github.com/pydata/pandas/issues/2078
+.. _GH2041: https://github.com/pydata/pandas/issues/2041
+.. _GH2083: https://github.com/pydata/pandas/issues/2083
+.. _GH2100: https://github.com/pydata/pandas/issues/2100
+.. _GH2119: https://github.com/pydata/pandas/issues/2119
+.. _GH2144: https://github.com/pydata/pandas/issues/2144
+.. _GH2095: https://github.com/pydata/pandas/issues/2095
+.. _GH2060: https://github.com/pydata/pandas/issues/2060
+.. _GH2165: https://github.com/pydata/pandas/issues/2165
+.. _GH2146: https://github.com/pydata/pandas/issues/2146
+.. _GH2075: https://github.com/pydata/pandas/issues/2075
+.. _GH2148: https://github.com/pydata/pandas/issues/2148
+.. _GH2155: https://github.com/pydata/pandas/issues/2155
+.. _GH2161: https://github.com/pydata/pandas/issues/2161
+.. _GH2163: https://github.com/pydata/pandas/issues/2163
+.. _GH2096: https://github.com/pydata/pandas/issues/2096
+.. _GH2107: https://github.com/pydata/pandas/issues/2107
+.. _GH1978: https://github.com/pydata/pandas/issues/1978
+.. _GH1979: https://github.com/pydata/pandas/issues/1979
+.. _GH2117: https://github.com/pydata/pandas/issues/2117
+.. _GH2114: https://github.com/pydata/pandas/issues/2114
+.. _GH2128: https://github.com/pydata/pandas/issues/2128
+.. _GH2008: https://github.com/pydata/pandas/issues/2008
+.. _GH2174: https://github.com/pydata/pandas/issues/2174
+.. _GH2129: https://github.com/pydata/pandas/issues/2129
+.. _GH2127: https://github.com/pydata/pandas/issues/2127
+.. _GH2196: https://github.com/pydata/pandas/issues/2196
+.. _GH2199: https://github.com/pydata/pandas/issues/2199
+.. _GH2201: https://github.com/pydata/pandas/issues/2201
+.. _GH2200: https://github.com/pydata/pandas/issues/2200
+.. _GH2236: https://github.com/pydata/pandas/issues/2236
+.. _GH2218: https://github.com/pydata/pandas/issues/2218
+.. _GH2219: https://github.com/pydata/pandas/issues/2219
+.. _GH2220: https://github.com/pydata/pandas/issues/2220
+.. _GH2228: https://github.com/pydata/pandas/issues/2228
+.. _GH2245: https://github.com/pydata/pandas/issues/2245
+.. _GH2227: https://github.com/pydata/pandas/issues/2227
+.. _GH2229: https://github.com/pydata/pandas/issues/2229
+.. _GH2240: https://github.com/pydata/pandas/issues/2240
+.. _GH2234: https://github.com/pydata/pandas/issues/2234
+.. _GH1610: https://github.com/pydata/pandas/issues/1610
+.. _GH2237: https://github.com/pydata/pandas/issues/2237
+.. _GH2248: https://github.com/pydata/pandas/issues/2248
+
pandas 0.9.0
============
@@ -329,224 +514,383 @@ pandas 0.9.0
**New features**
- - Add ``str.encode`` and ``str.decode`` to Series (#1706)
- - Add `to_latex` method to DataFrame (#1735)
- - Add convenient expanding window equivalents of all rolling_* ops (#1785)
+ - Add ``str.encode`` and ``str.decode`` to Series (GH1706_)
+ - Add `to_latex` method to DataFrame (GH1735_)
+ - Add convenient expanding window equivalents of all rolling_* ops (GH1785_)
- Add Options class to pandas.io.data for fetching options data from Yahoo!
- Finance (#1748, #1739)
+ Finance (GH1748_, GH1739_)
- Recognize and convert more boolean values in file parsing (Yes, No, TRUE,
- FALSE, variants thereof) (#1691, #1295)
- - Add Panel.update method, analogous to DataFrame.update (#1999, #1988)
+ FALSE, variants thereof) (GH1691_, GH1295_)
+ - Add Panel.update method, analogous to DataFrame.update (GH1999_, GH1988_)
**Improvements to existing features**
- - Proper handling of NA values in merge operations (#1990)
- - Add ``flags`` option for ``re.compile`` in some Series.str methods (#1659)
- - Parsing of UTC date strings in read_* functions (#1693)
- - Handle generator input to Series (#1679)
- - Add `na_action='ignore'` to Series.map to quietly propagate NAs (#1661)
- - Add args/kwds options to Series.apply (#1829)
- - Add inplace option to Series/DataFrame.reset_index (#1797)
+ - Proper handling of NA values in merge operations (GH1990_)
+ - Add ``flags`` option for ``re.compile`` in some Series.str methods (GH1659_)
+ - Parsing of UTC date strings in read_* functions (GH1693_)
+ - Handle generator input to Series (GH1679_)
+ - Add `na_action='ignore'` to Series.map to quietly propagate NAs (GH1661_)
+ - Add args/kwds options to Series.apply (GH1829_)
+ - Add inplace option to Series/DataFrame.reset_index (GH1797_)
- Add ``level`` parameter to ``Series.reset_index``
- - Add quoting option for DataFrame.to_csv (#1902)
- - Indicate long column value truncation in DataFrame output with ... (#1854)
- - DataFrame.dot will not do data alignment, and also work with Series (#1915)
+ - Add quoting option for DataFrame.to_csv (GH1902_)
+ - Indicate long column value truncation in DataFrame output with ... (GH1854_)
+ - DataFrame.dot will not do data alignment, and will also work with Series (GH1915_)
- Add ``na`` option for missing data handling in some vectorized string
- methods (#1689)
+ methods (GH1689_)
- If index_label=False in DataFrame.to_csv, do not print fields/commas in the
- text output. Results in easier importing into R (#1583)
+ text output. Results in easier importing into R (GH1583_)
- Can pass tuple/list of axes to DataFrame.dropna to simplify repeated calls
- (dropping both columns and rows) (#924)
+ (dropping both columns and rows) (GH924_)
- Improve DataFrame.to_html output for hierarchically-indexed rows (do not
- repeat levels) (#1929)
- - TimeSeries.between_time can now select times across midnight (#1871)
- - Enable `skip_footer` parameter in `ExcelFile.parse` (#1843)
+ repeat levels) (GH1929_)
+ - TimeSeries.between_time can now select times across midnight (GH1871_)
+ - Enable `skip_footer` parameter in `ExcelFile.parse` (GH1843_)
**API Changes**
- Change default header names in read_* functions to more Pythonic X0, X1,
- etc. instead of X.1, X.2. (#2000)
+ etc. instead of X.1, X.2. (GH2000_)
- Deprecated ``day_of_year`` API removed from PeriodIndex, use ``dayofyear``
- (#1723)
+ (GH1723_)
- Don't modify NumPy suppress printoption at import time
- The internal HDF5 data arrangement for DataFrames has been
- transposed. Legacy files will still be readable by HDFStore (#1834, #1824)
+ transposed. Legacy files will still be readable by HDFStore (GH1834_, GH1824_)
- Legacy cruft removed: pandas.stats.misc.quantileTS
- - Use ISO8601 format for Period repr: monthly, daily, and on down (#1776)
+ - Use ISO8601 format for Period repr: monthly, daily, and on down (GH1776_)
- Empty DataFrame columns are now created as object dtype. This will prevent
a class of TypeErrors that was occurring in code where the dtype of a
column would depend on the presence of data or not (e.g. a SQL query having
- results) (#1783)
+ results) (GH1783_)
- Setting parts of DataFrame/Panel using ix now aligns input Series/DataFrame
- (#1630)
+ (GH1630_)
- `first` and `last` methods in `GroupBy` no longer drop non-numeric columns
- (#1809)
+ (GH1809_)
- Resolved inconsistencies in specifying custom NA values in text parser.
`na_values` of type dict no longer override default NAs unless
- `keep_default_na` is set to false explicitly (#1657)
+ `keep_default_na` is set to false explicitly (GH1657_)
- Enable `skipfooter` parameter in text parsers as an alias for `skip_footer`
**Bug fixes**
- Perform arithmetic column-by-column in mixed-type DataFrame to avoid type
- upcasting issues. Caused downstream DataFrame.diff bug (#1896)
+ upcasting issues. Caused downstream DataFrame.diff bug (GH1896_)
- Fix matplotlib auto-color assignment when no custom spectrum passed. Also
- respect passed color keyword argument (#1711)
- - Fix resampling logical error with closed='left' (#1726)
- - Fix critical DatetimeIndex.union bugs (#1730, #1719, #1745, #1702, #1753)
- - Fix critical DatetimeIndex.intersection bug with unanchored offsets (#1708)
- - Fix MM-YYYY time series indexing case (#1672)
+ respect passed color keyword argument (GH1711_)
+ - Fix resampling logical error with closed='left' (GH1726_)
+ - Fix critical DatetimeIndex.union bugs (GH1730_, GH1719_, GH1745_, GH1702_, GH1753_)
+ - Fix critical DatetimeIndex.intersection bug with unanchored offsets (GH1708_)
+ - Fix MM-YYYY time series indexing case (GH1672_)
- Fix case where Categorical group key was not being passed into index in
- GroupBy result (#1701)
- - Handle Ellipsis in Series.__getitem__/__setitem__ (#1721)
+ GroupBy result (GH1701_)
+ - Handle Ellipsis in Series.__getitem__/__setitem__ (GH1721_)
- Fix some bugs with handling datetime64 scalars of other units in NumPy 1.6
- and 1.7 (#1717)
- - Fix performance issue in MultiIndex.format (#1746)
- - Fixed GroupBy bugs interacting with DatetimeIndex asof / map methods (#1677)
- - Handle factors with NAs in pandas.rpy (#1615)
- - Fix statsmodels import in pandas.stats.var (#1734)
- - Fix DataFrame repr/info summary with non-unique columns (#1700)
- - Fix Series.iget_value for non-unique indexes (#1694)
- - Don't lose tzinfo when passing DatetimeIndex as DataFrame column (#1682)
+ and 1.7 (GH1717_)
+ - Fix performance issue in MultiIndex.format (GH1746_)
+ - Fixed GroupBy bugs interacting with DatetimeIndex asof / map methods (GH1677_)
+ - Handle factors with NAs in pandas.rpy (GH1615_)
+ - Fix statsmodels import in pandas.stats.var (GH1734_)
+ - Fix DataFrame repr/info summary with non-unique columns (GH1700_)
+ - Fix Series.iget_value for non-unique indexes (GH1694_)
+ - Don't lose tzinfo when passing DatetimeIndex as DataFrame column (GH1682_)
- Fix tz conversion with time zones that haven't had any DST transitions since
- first date in the array (#1673)
- - Fix field access with UTC->local conversion on unsorted arrays (#1756)
- - Fix isnull handling of array-like (list) inputs (#1755)
- - Fix regression in handling of Series in Series constructor (#1671)
- - Fix comparison of Int64Index with DatetimeIndex (#1681)
- - Fix min_periods handling in new rolling_max/min at array start (#1695)
+ first date in the array (GH1673_)
+ - Fix field access with UTC->local conversion on unsorted arrays (GH1756_)
+ - Fix isnull handling of array-like (list) inputs (GH1755_)
+ - Fix regression in handling of Series in Series constructor (GH1671_)
+ - Fix comparison of Int64Index with DatetimeIndex (GH1681_)
+ - Fix min_periods handling in new rolling_max/min at array start (GH1695_)
- Fix errors with how='median' and generic NumPy resampling in some cases
- caused by SeriesBinGrouper (#1648, #1688)
- - When grouping by level, exclude unobserved levels (#1697)
- - Don't lose tzinfo in DatetimeIndex when shifting by different offset (#1683)
- - Hack to support storing data with a zero-length axis in HDFStore (#1707)
- - Fix DatetimeIndex tz-aware range generation issue (#1674)
- - Fix method='time' interpolation with intraday data (#1698)
- - Don't plot all-NA DataFrame columns as zeros (#1696)
- - Fix bug in scatter_plot with by option (#1716)
- - Fix performance problem in infer_freq with lots of non-unique stamps (#1686)
- - Fix handling of PeriodIndex as argument to create MultiIndex (#1705)
- - Fix re: unicode MultiIndex level names in Series/DataFrame repr (#1736)
- - Handle PeriodIndex in to_datetime instance method (#1703)
- - Support StaticTzInfo in DatetimeIndex infrastructure (#1692)
- - Allow MultiIndex setops with length-0 other type indexes (#1727)
- - Fix handling of DatetimeIndex in DataFrame.to_records (#1720)
- - Fix handling of general objects in isnull on which bool(...) fails (#1749)
- - Fix .ix indexing with MultiIndex ambiguity (#1678)
- - Fix .ix setting logic error with non-unique MultiIndex (#1750)
+ caused by SeriesBinGrouper (GH1648_, GH1688_)
+ - When grouping by level, exclude unobserved levels (GH1697_)
+ - Don't lose tzinfo in DatetimeIndex when shifting by different offset (GH1683_)
+ - Hack to support storing data with a zero-length axis in HDFStore (GH1707_)
+ - Fix DatetimeIndex tz-aware range generation issue (GH1674_)
+ - Fix method='time' interpolation with intraday data (GH1698_)
+ - Don't plot all-NA DataFrame columns as zeros (GH1696_)
+ - Fix bug in scatter_plot with by option (GH1716_)
+ - Fix performance problem in infer_freq with lots of non-unique stamps (GH1686_)
+ - Fix handling of PeriodIndex as argument to create MultiIndex (GH1705_)
+ - Fix re: unicode MultiIndex level names in Series/DataFrame repr (GH1736_)
+ - Handle PeriodIndex in to_datetime instance method (GH1703_)
+ - Support StaticTzInfo in DatetimeIndex infrastructure (GH1692_)
+ - Allow MultiIndex setops with length-0 other type indexes (GH1727_)
+ - Fix handling of DatetimeIndex in DataFrame.to_records (GH1720_)
+ - Fix handling of general objects in isnull on which bool(...) fails (GH1749_)
+ - Fix .ix indexing with MultiIndex ambiguity (GH1678_)
+ - Fix .ix setting logic error with non-unique MultiIndex (GH1750_)
- Basic indexing now works on MultiIndex with > 1000000 elements, regression
- from earlier version of pandas (#1757)
- - Handle non-float64 dtypes in fast DataFrame.corr/cov code paths (#1761)
- - Fix DatetimeIndex.isin to function properly (#1763)
+ from earlier version of pandas (GH1757_)
+ - Handle non-float64 dtypes in fast DataFrame.corr/cov code paths (GH1761_)
+ - Fix DatetimeIndex.isin to function properly (GH1763_)
- Fix conversion of array of tz-aware datetime.datetime to DatetimeIndex with
- right time zone (#1777)
- - Fix DST issues with generating ancxhored date ranges (#1778)
- - Fix issue calling sort on result of Series.unique (#1807)
+ right time zone (GH1777_)
+ - Fix DST issues with generating anchored date ranges (GH1778_)
+ - Fix issue calling sort on result of Series.unique (GH1807_)
- Fix numerical issue leading to square root of negative number in
- rolling_std (#1840)
- - Let Series.str.split accept no arguments (like str.split) (#1859)
- - Allow user to have dateutil 2.1 installed on a Python 2 system (#1851)
- - Catch ImportError less aggressively in pandas/__init__.py (#1845)
- - Fix pip source installation bug when installing from GitHub (#1805)
- - Fix error when window size > array size in rolling_apply (#1850)
+ rolling_std (GH1840_)
+ - Let Series.str.split accept no arguments (like str.split) (GH1859_)
+ - Allow user to have dateutil 2.1 installed on a Python 2 system (GH1851_)
+ - Catch ImportError less aggressively in pandas/__init__.py (GH1845_)
+ - Fix pip source installation bug when installing from GitHub (GH1805_)
+ - Fix error when window size > array size in rolling_apply (GH1850_)
- Fix pip source installation issues via SSH from GitHub
- - Fix OLS.summary when column is a tuple (#1837)
+ - Fix OLS.summary when column is a tuple (GH1837_)
- Fix bug in __doc__ patching when -OO passed to interpreter
- (#1792 #1741 #1774)
- - Fix unicode console encoding issue in IPython notebook (#1782, #1768)
- - Fix unicode formatting issue with Series.name (#1782)
- - Fix bug in DataFrame.duplicated with datetime64 columns (#1833)
+ (GH1792_, GH1741_, GH1774_)
+ - Fix unicode console encoding issue in IPython notebook (GH1782_, GH1768_)
+ - Fix unicode formatting issue with Series.name (GH1782_)
+ - Fix bug in DataFrame.duplicated with datetime64 columns (GH1833_)
- Fix bug in Panel internals resulting in error when doing fillna after
- truncate not changing size of panel (#1823)
+ truncate not changing size of panel (GH1823_)
- Prevent segfault due to MultiIndex not being supported in HDFStore table
- format (#1848)
- - Fix UnboundLocalError in Panel.__setitem__ and add better error (#1826)
+ format (GH1848_)
+ - Fix UnboundLocalError in Panel.__setitem__ and add better error (GH1826_)
- Fix to_csv issues with list of string entries. Isnull works on list of
- strings now too (#1791)
+ strings now too (GH1791_)
- Fix Timestamp comparisons with datetime values outside the nanosecond range
(1677-2262)
- Revert to prior behavior of normalize_date with datetime.date objects
(return datetime)
- Fix broken interaction between np.nansum and Series.any/all
- - Fix bug with multiple column date parsers (#1866)
+ - Fix bug with multiple column date parsers (GH1866_)
- DatetimeIndex.union(Int64Index) was broken
- - Make plot x vs y interface consistent with integer indexing (#1842)
- - set_index inplace modified data even if unique check fails (#1831)
- - Only use Q-OCT/NOV/DEC in quarterly frequency inference (#1789)
- - Upcast to dtype=object when unstacking boolean DataFrame (#1820)
- - Fix float64/float32 merging bug (#1849)
- - Fixes to Period.start_time for non-daily frequencies (#1857)
- - Fix failure when converter used on index_col in read_csv (#1835)
- - Implement PeriodIndex.append so that pandas.concat works correctly (#1815)
+ - Make plot x vs y interface consistent with integer indexing (GH1842_)
+ - set_index with inplace=True modified data even if uniqueness check failed (GH1831_)
+ - Only use Q-OCT/NOV/DEC in quarterly frequency inference (GH1789_)
+ - Upcast to dtype=object when unstacking boolean DataFrame (GH1820_)
+ - Fix float64/float32 merging bug (GH1849_)
+ - Fixes to Period.start_time for non-daily frequencies (GH1857_)
+ - Fix failure when converter used on index_col in read_csv (GH1835_)
+ - Implement PeriodIndex.append so that pandas.concat works correctly (GH1815_)
- Avoid Cython out-of-bounds access causing segfault sometimes in pad_2d,
backfill_2d
- Fix resampling error with intraday times and anchored target time (like
- AS-DEC) (#1772)
- - Fix .ix indexing bugs with mixed-integer indexes (#1799)
- - Respect passed color keyword argument in Series.plot (#1890)
+ AS-DEC) (GH1772_)
+ - Fix .ix indexing bugs with mixed-integer indexes (GH1799_)
+ - Respect passed color keyword argument in Series.plot (GH1890_)
- Fix rolling_min/max when the window is larger than the size of the input
- array. Check other malformed inputs (#1899, #1897)
+ array. Check other malformed inputs (GH1899_, GH1897_)
- Rolling variance / standard deviation with only a single observation in
- window (#1884)
- - Fix unicode sheet name failure in to_excel (#1828)
- - Override DatetimeIndex.min/max to return Timestamp objects (#1895)
- - Fix column name formatting issue in length-truncated column (#1906)
+ window (GH1884_)
+ - Fix unicode sheet name failure in to_excel (GH1828_)
+ - Override DatetimeIndex.min/max to return Timestamp objects (GH1895_)
+ - Fix column name formatting issue in length-truncated column (GH1906_)
- Fix broken handling of copying Index metadata to new instances created by
view(...) calls inside the NumPy infrastructure
- Support datetime.date again in DateOffset.rollback/rollforward
- - Raise Exception if set passed to Series constructor (#1913)
- - Add TypeError when appending HDFStore table w/ wrong index type (#1881)
- - Don't raise exception on empty inputs in EW functions (e.g. ewma) (#1900)
- - Make asof work correctly with PeriodIndex (#1883)
+ - Raise Exception if set passed to Series constructor (GH1913_)
+ - Add TypeError when appending HDFStore table with wrong index type (GH1881_)
+ - Don't raise exception on empty inputs in EW functions (e.g. ewma) (GH1900_)
+ - Make asof work correctly with PeriodIndex (GH1883_)
- Fix extlinks in doc build
- - Fill boolean DataFrame with NaN when calling shift (#1814)
+ - Fill boolean DataFrame with NaN when calling shift (GH1814_)
- Fix setuptools bug causing pip not to Cythonize .pyx files sometimes
- - Fix negative integer indexing regression in .ix from 0.7.x (#1888)
+ - Fix negative integer indexing regression in .ix from 0.7.x (GH1888_)
- Fix error while retrieving timezone and utc offset from subclasses of
- datetime.tzinfo without .zone and ._utcoffset attributes (#1922)
- - Fix DataFrame formatting of small, non-zero FP numbers (#1911)
- - Various fixes by upcasting of date -> datetime (#1395)
+ datetime.tzinfo without .zone and ._utcoffset attributes (GH1922_)
+ - Fix DataFrame formatting of small, non-zero FP numbers (GH1911_)
+ - Various fixes via upcasting of date -> datetime (GH1395_)
- Raise better exception when passing multiple functions with the same name,
such as lambdas, to GroupBy.aggregate
- - Fix DataFrame.apply with axis=1 on a non-unique index (#1878)
- - Proper handling of Index subclasses in pandas.unique (#1759)
- - Set index names in DataFrame.from_records (#1744)
+ - Fix DataFrame.apply with axis=1 on a non-unique index (GH1878_)
+ - Proper handling of Index subclasses in pandas.unique (GH1759_)
+ - Set index names in DataFrame.from_records (GH1744_)
- Fix time series indexing error with duplicates, under and over hash table
- size cutoff (#1821)
+ size cutoff (GH1821_)
- Handle list keys in addition to tuples in DataFrame.xs when
- partial-indexing a hierarchically-indexed DataFrame (#1796)
+ partial-indexing a hierarchically-indexed DataFrame (GH1796_)
- Support multiple column selection in DataFrame.__getitem__ with duplicate
- columns (#1943)
+ columns (GH1943_)
- Fix time zone localization bug causing improper fields (e.g. hours) in time
- zones that have not had a UTC transition in a long time (#1946)
+ zones that have not had a UTC transition in a long time (GH1946_)
- - Fix errors when parsing and working with with fixed offset timezones
- (#1922, #1928)
+ - Fix errors when parsing and working with fixed offset timezones
+ (GH1922_, GH1928_)
- Fix text parser bug when handling UTC datetime objects generated by
- dateutil (#1693)
+ dateutil (GH1693_)
- Fix plotting bug when 'B' is the inferred frequency but index actually
- contains weekends (#1668, #1669)
- - Fix plot styling bugs (#1666, #1665, #1658)
- - Fix plotting bug with index/columns with unicode (#1685)
+ contains weekends (GH1668_, GH1669_)
+ - Fix plot styling bugs (GH1666_, GH1665_, GH1658_)
+ - Fix plotting bug with index/columns with unicode (GH1685_)
- Fix DataFrame constructor bug when passed Series with datetime64 dtype
- in a dict (#1680)
+ in a dict (GH1680_)
- Fixed regression in generating DatetimeIndex using timezone aware
- datetime.datetime (#1676)
+ datetime.datetime (GH1676_)
- Fix DataFrame bug when printing concatenated DataFrames with duplicated
- columns (#1675)
+ columns (GH1675_)
- Fixed bug when plotting time series with multiple intraday frequencies
- (#1732)
+ (GH1732_)
- Fix bug in DataFrame.duplicated to enable iterables other than list-types
- as input argument (#1773)
- - Fix resample bug when passed list of lambdas as `how` argument (#1808)
- - Repr fix for MultiIndex level with all NAs (#1971)
- - Fix PeriodIndex slicing bug when slice start/end are out-of-bounds (#1977)
- - Fix read_table bug when parsing unicode (#1975)
+ as input argument (GH1773_)
+ - Fix resample bug when passed list of lambdas as `how` argument (GH1808_)
+ - Repr fix for MultiIndex level with all NAs (GH1971_)
+ - Fix PeriodIndex slicing bug when slice start/end are out-of-bounds (GH1977_)
+ - Fix read_table bug when parsing unicode (GH1975_)
- Fix BlockManager.iget bug when dealing with non-unique MultiIndex as columns
- (#1970)
- - Fix reset_index bug if both drop and level are specified (#1957)
- - Work around unsafe NumPy object->int casting with Cython function (#1987)
- - Fix datetime64 formatting bug in DataFrame.to_csv (#1993)
- - Default start date in pandas.io.data to 1/1/2000 as the docs say (#2011)
+ (GH1970_)
+ - Fix reset_index bug if both drop and level are specified (GH1957_)
+ - Work around unsafe NumPy object->int casting with Cython function (GH1987_)
+ - Fix datetime64 formatting bug in DataFrame.to_csv (GH1993_)
+ - Default start date in pandas.io.data to 1/1/2000 as the docs say (GH2011_)
+
+
+.. _GH1706: https://github.com/pydata/pandas/issues/1706
+.. _GH1735: https://github.com/pydata/pandas/issues/1735
+.. _GH1785: https://github.com/pydata/pandas/issues/1785
+.. _GH1748: https://github.com/pydata/pandas/issues/1748
+.. _GH1739: https://github.com/pydata/pandas/issues/1739
+.. _GH1691: https://github.com/pydata/pandas/issues/1691
+.. _GH1295: https://github.com/pydata/pandas/issues/1295
+.. _GH1999: https://github.com/pydata/pandas/issues/1999
+.. _GH1988: https://github.com/pydata/pandas/issues/1988
+.. _GH1990: https://github.com/pydata/pandas/issues/1990
+.. _GH1659: https://github.com/pydata/pandas/issues/1659
+.. _GH1693: https://github.com/pydata/pandas/issues/1693
+.. _GH1679: https://github.com/pydata/pandas/issues/1679
+.. _GH1661: https://github.com/pydata/pandas/issues/1661
+.. _GH1829: https://github.com/pydata/pandas/issues/1829
+.. _GH1797: https://github.com/pydata/pandas/issues/1797
+.. _GH1902: https://github.com/pydata/pandas/issues/1902
+.. _GH1854: https://github.com/pydata/pandas/issues/1854
+.. _GH1915: https://github.com/pydata/pandas/issues/1915
+.. _GH1689: https://github.com/pydata/pandas/issues/1689
+.. _GH1583: https://github.com/pydata/pandas/issues/1583
+.. _GH924: https://github.com/pydata/pandas/issues/924
+.. _GH1929: https://github.com/pydata/pandas/issues/1929
+.. _GH1871: https://github.com/pydata/pandas/issues/1871
+.. _GH1843: https://github.com/pydata/pandas/issues/1843
+.. _GH2000: https://github.com/pydata/pandas/issues/2000
+.. _GH1723: https://github.com/pydata/pandas/issues/1723
+.. _GH1834: https://github.com/pydata/pandas/issues/1834
+.. _GH1824: https://github.com/pydata/pandas/issues/1824
+.. _GH1776: https://github.com/pydata/pandas/issues/1776
+.. _GH1783: https://github.com/pydata/pandas/issues/1783
+.. _GH1630: https://github.com/pydata/pandas/issues/1630
+.. _GH1809: https://github.com/pydata/pandas/issues/1809
+.. _GH1657: https://github.com/pydata/pandas/issues/1657
+.. _GH1896: https://github.com/pydata/pandas/issues/1896
+.. _GH1711: https://github.com/pydata/pandas/issues/1711
+.. _GH1726: https://github.com/pydata/pandas/issues/1726
+.. _GH1730: https://github.com/pydata/pandas/issues/1730
+.. _GH1719: https://github.com/pydata/pandas/issues/1719
+.. _GH1745: https://github.com/pydata/pandas/issues/1745
+.. _GH1702: https://github.com/pydata/pandas/issues/1702
+.. _GH1753: https://github.com/pydata/pandas/issues/1753
+.. _GH1708: https://github.com/pydata/pandas/issues/1708
+.. _GH1672: https://github.com/pydata/pandas/issues/1672
+.. _GH1701: https://github.com/pydata/pandas/issues/1701
+.. _GH1721: https://github.com/pydata/pandas/issues/1721
+.. _GH1717: https://github.com/pydata/pandas/issues/1717
+.. _GH1746: https://github.com/pydata/pandas/issues/1746
+.. _GH1677: https://github.com/pydata/pandas/issues/1677
+.. _GH1615: https://github.com/pydata/pandas/issues/1615
+.. _GH1734: https://github.com/pydata/pandas/issues/1734
+.. _GH1700: https://github.com/pydata/pandas/issues/1700
+.. _GH1694: https://github.com/pydata/pandas/issues/1694
+.. _GH1682: https://github.com/pydata/pandas/issues/1682
+.. _GH1673: https://github.com/pydata/pandas/issues/1673
+.. _GH1756: https://github.com/pydata/pandas/issues/1756
+.. _GH1755: https://github.com/pydata/pandas/issues/1755
+.. _GH1671: https://github.com/pydata/pandas/issues/1671
+.. _GH1681: https://github.com/pydata/pandas/issues/1681
+.. _GH1695: https://github.com/pydata/pandas/issues/1695
+.. _GH1648: https://github.com/pydata/pandas/issues/1648
+.. _GH1688: https://github.com/pydata/pandas/issues/1688
+.. _GH1697: https://github.com/pydata/pandas/issues/1697
+.. _GH1683: https://github.com/pydata/pandas/issues/1683
+.. _GH1707: https://github.com/pydata/pandas/issues/1707
+.. _GH1674: https://github.com/pydata/pandas/issues/1674
+.. _GH1698: https://github.com/pydata/pandas/issues/1698
+.. _GH1696: https://github.com/pydata/pandas/issues/1696
+.. _GH1716: https://github.com/pydata/pandas/issues/1716
+.. _GH1686: https://github.com/pydata/pandas/issues/1686
+.. _GH1705: https://github.com/pydata/pandas/issues/1705
+.. _GH1736: https://github.com/pydata/pandas/issues/1736
+.. _GH1703: https://github.com/pydata/pandas/issues/1703
+.. _GH1692: https://github.com/pydata/pandas/issues/1692
+.. _GH1727: https://github.com/pydata/pandas/issues/1727
+.. _GH1720: https://github.com/pydata/pandas/issues/1720
+.. _GH1749: https://github.com/pydata/pandas/issues/1749
+.. _GH1678: https://github.com/pydata/pandas/issues/1678
+.. _GH1750: https://github.com/pydata/pandas/issues/1750
+.. _GH1757: https://github.com/pydata/pandas/issues/1757
+.. _GH1761: https://github.com/pydata/pandas/issues/1761
+.. _GH1763: https://github.com/pydata/pandas/issues/1763
+.. _GH1777: https://github.com/pydata/pandas/issues/1777
+.. _GH1778: https://github.com/pydata/pandas/issues/1778
+.. _GH1807: https://github.com/pydata/pandas/issues/1807
+.. _GH1840: https://github.com/pydata/pandas/issues/1840
+.. _GH1859: https://github.com/pydata/pandas/issues/1859
+.. _GH1851: https://github.com/pydata/pandas/issues/1851
+.. _GH1845: https://github.com/pydata/pandas/issues/1845
+.. _GH1805: https://github.com/pydata/pandas/issues/1805
+.. _GH1850: https://github.com/pydata/pandas/issues/1850
+.. _GH1837: https://github.com/pydata/pandas/issues/1837
+.. _GH1792: https://github.com/pydata/pandas/issues/1792
+.. _GH1741: https://github.com/pydata/pandas/issues/1741
+.. _GH1774: https://github.com/pydata/pandas/issues/1774
+.. _GH1782: https://github.com/pydata/pandas/issues/1782
+.. _GH1768: https://github.com/pydata/pandas/issues/1768
+.. _GH1833: https://github.com/pydata/pandas/issues/1833
+.. _GH1823: https://github.com/pydata/pandas/issues/1823
+.. _GH1848: https://github.com/pydata/pandas/issues/1848
+.. _GH1826: https://github.com/pydata/pandas/issues/1826
+.. _GH1791: https://github.com/pydata/pandas/issues/1791
+.. _GH1866: https://github.com/pydata/pandas/issues/1866
+.. _GH1842: https://github.com/pydata/pandas/issues/1842
+.. _GH1831: https://github.com/pydata/pandas/issues/1831
+.. _GH1789: https://github.com/pydata/pandas/issues/1789
+.. _GH1820: https://github.com/pydata/pandas/issues/1820
+.. _GH1849: https://github.com/pydata/pandas/issues/1849
+.. _GH1857: https://github.com/pydata/pandas/issues/1857
+.. _GH1835: https://github.com/pydata/pandas/issues/1835
+.. _GH1815: https://github.com/pydata/pandas/issues/1815
+.. _GH1772: https://github.com/pydata/pandas/issues/1772
+.. _GH1799: https://github.com/pydata/pandas/issues/1799
+.. _GH1890: https://github.com/pydata/pandas/issues/1890
+.. _GH1899: https://github.com/pydata/pandas/issues/1899
+.. _GH1897: https://github.com/pydata/pandas/issues/1897
+.. _GH1884: https://github.com/pydata/pandas/issues/1884
+.. _GH1828: https://github.com/pydata/pandas/issues/1828
+.. _GH1895: https://github.com/pydata/pandas/issues/1895
+.. _GH1906: https://github.com/pydata/pandas/issues/1906
+.. _GH1913: https://github.com/pydata/pandas/issues/1913
+.. _GH1881: https://github.com/pydata/pandas/issues/1881
+.. _GH1900: https://github.com/pydata/pandas/issues/1900
+.. _GH1883: https://github.com/pydata/pandas/issues/1883
+.. _GH1814: https://github.com/pydata/pandas/issues/1814
+.. _GH1888: https://github.com/pydata/pandas/issues/1888
+.. _GH1922: https://github.com/pydata/pandas/issues/1922
+.. _GH1911: https://github.com/pydata/pandas/issues/1911
+.. _GH1395: https://github.com/pydata/pandas/issues/1395
+.. _GH1878: https://github.com/pydata/pandas/issues/1878
+.. _GH1759: https://github.com/pydata/pandas/issues/1759
+.. _GH1744: https://github.com/pydata/pandas/issues/1744
+.. _GH1821: https://github.com/pydata/pandas/issues/1821
+.. _GH1796: https://github.com/pydata/pandas/issues/1796
+.. _GH1943: https://github.com/pydata/pandas/issues/1943
+.. _GH1946: https://github.com/pydata/pandas/issues/1946
+.. _GH1928: https://github.com/pydata/pandas/issues/1928
+.. _GH1668: https://github.com/pydata/pandas/issues/1668
+.. _GH1669: https://github.com/pydata/pandas/issues/1669
+.. _GH1666: https://github.com/pydata/pandas/issues/1666
+.. _GH1665: https://github.com/pydata/pandas/issues/1665
+.. _GH1658: https://github.com/pydata/pandas/issues/1658
+.. _GH1685: https://github.com/pydata/pandas/issues/1685
+.. _GH1680: https://github.com/pydata/pandas/issues/1680
+.. _GH1676: https://github.com/pydata/pandas/issues/1676
+.. _GH1675: https://github.com/pydata/pandas/issues/1675
+.. _GH1732: https://github.com/pydata/pandas/issues/1732
+.. _GH1773: https://github.com/pydata/pandas/issues/1773
+.. _GH1808: https://github.com/pydata/pandas/issues/1808
+.. _GH1971: https://github.com/pydata/pandas/issues/1971
+.. _GH1977: https://github.com/pydata/pandas/issues/1977
+.. _GH1975: https://github.com/pydata/pandas/issues/1975
+.. _GH1970: https://github.com/pydata/pandas/issues/1970
+.. _GH1957: https://github.com/pydata/pandas/issues/1957
+.. _GH1987: https://github.com/pydata/pandas/issues/1987
+.. _GH1993: https://github.com/pydata/pandas/issues/1993
+.. _GH2011: https://github.com/pydata/pandas/issues/2011
pandas 0.8.1
@@ -556,86 +900,149 @@ pandas 0.8.1
**New features**
- - Add vectorized, NA-friendly string methods to Series (#1621, #620)
- - Can pass dict of per-column line styles to DataFrame.plot (#1559)
- - Selective plotting to secondary y-axis on same subplot (PR #1640)
+ - Add vectorized, NA-friendly string methods to Series (GH1621_, GH620_)
+ - Can pass dict of per-column line styles to DataFrame.plot (GH1559_)
+ - Selective plotting to secondary y-axis on same subplot (GH1640_)
- Add new ``bootstrap_plot`` plot function
- - Add new ``parallel_coordinates`` plot function (#1488)
- - Add ``radviz`` plot function (#1566)
+ - Add new ``parallel_coordinates`` plot function (GH1488_)
+ - Add ``radviz`` plot function (GH1566_)
- Add ``multi_sparse`` option to ``set_printoptions`` to modify display of
- hierarchical indexes (#1538)
- - Add ``dropna`` method to Panel (#171)
+ hierarchical indexes (GH1538_)
+ - Add ``dropna`` method to Panel (GH171_)
**Improvements to existing features**
- Use moving min/max algorithms from Bottleneck in rolling_min/rolling_max
- for > 100x speedup. (#1504, #50)
- - Add Cython group median method for >15x speedup (#1358)
+ for > 100x speedup. (GH1504_, #50)
+ - Add Cython group median method for >15x speedup (GH1358_)
- Drastically improve ``to_datetime`` performance on ISO8601 datetime strings
- (with no time zones) (#1571)
+ (with no time zones) (GH1571_)
- Improve single-key groupby performance on large data sets, accelerate use of
groupby with a Categorical variable
- Add ability to append hierarchical index levels with ``set_index`` and to
- drop single levels with ``reset_index`` (#1569, #1577)
- - Always apply passed functions in ``resample``, even if upsampling (#1596)
- - Avoid unnecessary copies in DataFrame constructor with explicit dtype (#1572)
- - Cleaner DatetimeIndex string representation with 1 or 2 elements (#1611)
+ drop single levels with ``reset_index`` (GH1569_, GH1577_)
+ - Always apply passed functions in ``resample``, even if upsampling (GH1596_)
+ - Avoid unnecessary copies in DataFrame constructor with explicit dtype (GH1572_)
+ - Cleaner DatetimeIndex string representation with 1 or 2 elements (GH1611_)
- Improve performance of array-of-Period to PeriodIndex, convert such arrays
- to PeriodIndex inside Index (#1215)
- - More informative string representation for weekly Period objects (#1503)
- - Accelerate 3-axis multi data selection from homogeneous Panel (#979)
- - Add ``adjust`` option to ewma to disable adjustment factor (#1584)
- - Add new matplotlib converters for high frequency time series plotting (#1599)
+ to PeriodIndex inside Index (GH1215_)
+ - More informative string representation for weekly Period objects (GH1503_)
+ - Accelerate 3-axis multi data selection from homogeneous Panel (GH979_)
+ - Add ``adjust`` option to ewma to disable adjustment factor (GH1584_)
+ - Add new matplotlib converters for high frequency time series plotting (GH1599_)
- Handling of tz-aware datetime.datetime objects in to_datetime; raise
- Exception unless utc=True given (#1581)
+ Exception unless utc=True given (GH1581_)
**Bug fixes**
- - Fix NA handling in DataFrame.to_panel (#1582)
+ - Fix NA handling in DataFrame.to_panel (GH1582_)
- Handle TypeError issues inside PyObject_RichCompareBool calls in khash
- (#1318)
- - Fix resampling bug to lower case daily frequency (#1588)
- - Fix kendall/spearman DataFrame.corr bug with no overlap (#1595)
- - Fix bug in DataFrame.set_index (#1592)
- - Don't ignore axes in boxplot if by specified (#1565)
- - Fix Panel .ix indexing with integers bug (#1603)
- - Fix Partial indexing bugs (years, months, ...) with PeriodIndex (#1601)
- - Fix MultiIndex console formatting issue (#1606)
+ (GH1318_)
+ - Fix resampling bug to lower case daily frequency (GH1588_)
+ - Fix kendall/spearman DataFrame.corr bug with no overlap (GH1595_)
+ - Fix bug in DataFrame.set_index (GH1592_)
+ - Don't ignore axes in boxplot if by specified (GH1565_)
+ - Fix Panel .ix indexing with integers bug (GH1603_)
+ - Fix partial indexing bugs (years, months, ...) with PeriodIndex (GH1601_)
+ - Fix MultiIndex console formatting issue (GH1606_)
- Unordered index with duplicates doesn't yield scalar location for single
- entry (#1586)
- - Fix resampling of tz-aware time series with "anchored" freq (#1591)
- - Fix DataFrame.rank error on integer data (#1589)
- - Selection of multiple SparseDataFrame columns by list in __getitem__ (#1585)
- - Override Index.tolist for compatibility with MultiIndex (#1576)
- - Fix hierarchical summing bug with MultiIndex of length 1 (#1568)
- - Work around numpy.concatenate use/bug in Series.set_value (#1561)
- - Ensure Series/DataFrame are sorted before resampling (#1580)
- - Fix unhandled IndexError when indexing very large time series (#1562)
- - Fix DatetimeIndex intersection logic error with irregular indexes (#1551)
- - Fix unit test errors on Python 3 (#1550)
- - Fix .ix indexing bugs in duplicate DataFrame index (#1201)
- - Better handle errors with non-existing objects in HDFStore (#1254)
- - Don't copy int64 array data in DatetimeIndex when copy=False (#1624)
- - Fix resampling of conforming periods quarterly to annual (#1622)
- - Don't lose index name on resampling (#1631)
- - Support python-dateutil version 2.1 (#1637)
- - Fix broken scatter_matrix axis labeling, esp. with time series (#1625)
+ entry (GH1586_)
+ - Fix resampling of tz-aware time series with "anchored" freq (GH1591_)
+ - Fix DataFrame.rank error on integer data (GH1589_)
+ - Selection of multiple SparseDataFrame columns by list in __getitem__ (GH1585_)
+ - Override Index.tolist for compatibility with MultiIndex (GH1576_)
+ - Fix hierarchical summing bug with MultiIndex of length 1 (GH1568_)
+ - Work around numpy.concatenate use/bug in Series.set_value (GH1561_)
+ - Ensure Series/DataFrame are sorted before resampling (GH1580_)
+ - Fix unhandled IndexError when indexing very large time series (GH1562_)
+ - Fix DatetimeIndex intersection logic error with irregular indexes (GH1551_)
+ - Fix unit test errors on Python 3 (GH1550_)
+ - Fix .ix indexing bugs in duplicate DataFrame index (GH1201_)
+ - Better handle errors with non-existing objects in HDFStore (GH1254_)
+ - Don't copy int64 array data in DatetimeIndex when copy=False (GH1624_)
+ - Fix resampling of conforming periods quarterly to annual (GH1622_)
+ - Don't lose index name on resampling (GH1631_)
+ - Support python-dateutil version 2.1 (GH1637_)
+ - Fix broken scatter_matrix axis labeling, esp. with time series (GH1625_)
- Fix cases where extra keywords weren't being passed on to matplotlib from
- Series.plot (#1636)
- - Fix BusinessMonthBegin logic for dates before 1st bday of month (#1645)
+ Series.plot (GH1636_)
+ - Fix BusinessMonthBegin logic for dates before 1st bday of month (GH1645_)
- Ensure string alias converted (valid in DatetimeIndex.get_loc) in
- DataFrame.xs / __getitem__ (#1644)
- - Fix use of string alias timestamps with tz-aware time series (#1647)
- - Fix Series.max/min and Series.describe on len-0 series (#1650)
- - Handle None values in dict passed to concat (#1649)
- - Fix Series.interpolate with method='values' and DatetimeIndex (#1646)
- - Fix IndexError in left merges on a DataFrame with 0-length (#1628)
- - Fix DataFrame column width display with UTF-8 encoded characters (#1620)
+ DataFrame.xs / __getitem__ (GH1644_)
+ - Fix use of string alias timestamps with tz-aware time series (GH1647_)
+ - Fix Series.max/min and Series.describe on len-0 series (GH1650_)
+ - Handle None values in dict passed to concat (GH1649_)
+ - Fix Series.interpolate with method='values' and DatetimeIndex (GH1646_)
+ - Fix IndexError in left merges on a DataFrame with 0-length (GH1628_)
+ - Fix DataFrame column width display with UTF-8 encoded characters (GH1620_)
- Handle case in pandas.io.data.get_data_yahoo where Yahoo! returns duplicate
dates for most recent business day
- - Avoid downsampling when plotting mixed frequencies on the same subplot (#1619)
- - Fix read_csv bug when reading a single line (#1553)
- - Fix bug in C code causing monthly periods prior to December 1969 to be off (#1570)
+ - Avoid downsampling when plotting mixed frequencies on the same subplot (GH1619_)
+ - Fix read_csv bug when reading a single line (GH1553_)
+ - Fix bug in C code causing monthly periods prior to December 1969 to be off (GH1570_)
+
+.. _GH1621: https://github.com/pydata/pandas/issues/1621
+.. _GH620: https://github.com/pydata/pandas/issues/620
+.. _GH1559: https://github.com/pydata/pandas/issues/1559
+.. _GH1640: https://github.com/pydata/pandas/issues/1640
+.. _GH1488: https://github.com/pydata/pandas/issues/1488
+.. _GH1566: https://github.com/pydata/pandas/issues/1566
+.. _GH1538: https://github.com/pydata/pandas/issues/1538
+.. _GH171: https://github.com/pydata/pandas/issues/171
+.. _GH1504: https://github.com/pydata/pandas/issues/1504
+.. _GH1358: https://github.com/pydata/pandas/issues/1358
+.. _GH1571: https://github.com/pydata/pandas/issues/1571
+.. _GH1569: https://github.com/pydata/pandas/issues/1569
+.. _GH1577: https://github.com/pydata/pandas/issues/1577
+.. _GH1596: https://github.com/pydata/pandas/issues/1596
+.. _GH1572: https://github.com/pydata/pandas/issues/1572
+.. _GH1611: https://github.com/pydata/pandas/issues/1611
+.. _GH1215: https://github.com/pydata/pandas/issues/1215
+.. _GH1503: https://github.com/pydata/pandas/issues/1503
+.. _GH979: https://github.com/pydata/pandas/issues/979
+.. _GH1584: https://github.com/pydata/pandas/issues/1584
+.. _GH1599: https://github.com/pydata/pandas/issues/1599
+.. _GH1581: https://github.com/pydata/pandas/issues/1581
+.. _GH1582: https://github.com/pydata/pandas/issues/1582
+.. _GH1318: https://github.com/pydata/pandas/issues/1318
+.. _GH1588: https://github.com/pydata/pandas/issues/1588
+.. _GH1595: https://github.com/pydata/pandas/issues/1595
+.. _GH1592: https://github.com/pydata/pandas/issues/1592
+.. _GH1565: https://github.com/pydata/pandas/issues/1565
+.. _GH1603: https://github.com/pydata/pandas/issues/1603
+.. _GH1601: https://github.com/pydata/pandas/issues/1601
+.. _GH1606: https://github.com/pydata/pandas/issues/1606
+.. _GH1586: https://github.com/pydata/pandas/issues/1586
+.. _GH1591: https://github.com/pydata/pandas/issues/1591
+.. _GH1589: https://github.com/pydata/pandas/issues/1589
+.. _GH1585: https://github.com/pydata/pandas/issues/1585
+.. _GH1576: https://github.com/pydata/pandas/issues/1576
+.. _GH1568: https://github.com/pydata/pandas/issues/1568
+.. _GH1561: https://github.com/pydata/pandas/issues/1561
+.. _GH1580: https://github.com/pydata/pandas/issues/1580
+.. _GH1562: https://github.com/pydata/pandas/issues/1562
+.. _GH1551: https://github.com/pydata/pandas/issues/1551
+.. _GH1550: https://github.com/pydata/pandas/issues/1550
+.. _GH1201: https://github.com/pydata/pandas/issues/1201
+.. _GH1254: https://github.com/pydata/pandas/issues/1254
+.. _GH1624: https://github.com/pydata/pandas/issues/1624
+.. _GH1622: https://github.com/pydata/pandas/issues/1622
+.. _GH1631: https://github.com/pydata/pandas/issues/1631
+.. _GH1637: https://github.com/pydata/pandas/issues/1637
+.. _GH1625: https://github.com/pydata/pandas/issues/1625
+.. _GH1636: https://github.com/pydata/pandas/issues/1636
+.. _GH1645: https://github.com/pydata/pandas/issues/1645
+.. _GH1644: https://github.com/pydata/pandas/issues/1644
+.. _GH1647: https://github.com/pydata/pandas/issues/1647
+.. _GH1650: https://github.com/pydata/pandas/issues/1650
+.. _GH1649: https://github.com/pydata/pandas/issues/1649
+.. _GH1646: https://github.com/pydata/pandas/issues/1646
+.. _GH1628: https://github.com/pydata/pandas/issues/1628
+.. _GH1620: https://github.com/pydata/pandas/issues/1620
+.. _GH1619: https://github.com/pydata/pandas/issues/1619
+.. _GH1553: https://github.com/pydata/pandas/issues/1553
+.. _GH1570: https://github.com/pydata/pandas/issues/1570
+
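The ``adjust`` option to ewma noted above (GH1584_) toggles between the recursive exponential form and the weight-normalized form. A minimal sketch, assuming a modern pandas where this option lives on ``Series.ewm`` rather than the original ``ewma`` function:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0])

# adjust=False uses the recursive form: y[t] = (1 - alpha) * y[t-1] + alpha * x[t]
recursive = s.ewm(alpha=0.5, adjust=False).mean()

# adjust=True divides by the decaying sum of weights instead,
# which corrects the bias toward the initial value early in the series
adjusted = s.ewm(alpha=0.5, adjust=True).mean()

print(recursive.iloc[-1], adjusted.iloc[-1])
```

The two forms converge for long series but differ near the start, which is why disabling the adjustment factor was made opt-in.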
pandas 0.8.0
============
@@ -651,58 +1058,58 @@ pandas 0.8.0
- High performance resampling of timestamp and period data. New `resample`
method of all pandas data structures
- New frequency names plus shortcut string aliases like '15h', '1h30min'
- - Time series string indexing shorthand (#222)
+ - Time series string indexing shorthand (GH222_)
- Add week, dayofyear array and other timestamp array-valued field accessor
functions to DatetimeIndex
- Add GroupBy.prod optimized aggregation function and 'prod' fast time series
- conversion method (#1018)
+ conversion method (GH1018_)
- Implement robust frequency inference function and `inferred_freq` attribute
- on DatetimeIndex (#391)
+ on DatetimeIndex (GH391_)
- New ``tz_convert`` and ``tz_localize`` methods in Series / DataFrame
- Convert DatetimeIndexes to UTC if time zones are different in join/setops
- (#864)
+ (GH864_)
- Add limit argument for forward/backward filling to reindex, fillna,
- etc. (#825 and others)
+ etc. (GH825_ and others)
- Add support for indexes (dates or otherwise) with duplicates and common
sense indexing/selection functionality
- - Series/DataFrame.update methods, in-place variant of combine_first (#961)
- - Add ``match`` function to API (#502)
- - Add Cython-optimized first, last, min, max, prod functions to GroupBy (#994,
- #1043)
- - Dates can be split across multiple columns (#1227, #1186)
+ - Series/DataFrame.update methods, in-place variant of combine_first (GH961_)
+ - Add ``match`` function to API (GH502_)
+ - Add Cython-optimized first, last, min, max, prod functions to GroupBy (GH994_,
+ GH1043_)
+ - Dates can be split across multiple columns (GH1227_, GH1186_)
- Add experimental support for converting pandas DataFrame to R data.frame
- via rpy2 (#350, #1212)
+ via rpy2 (GH350_, GH1212_)
- Can pass list of (name, function) to GroupBy.aggregate to get aggregates in
- a particular order (#610)
+ a particular order (GH610_)
- Can pass dicts with lists of functions or dicts to GroupBy aggregate to do
- much more flexible multiple function aggregation (#642, #610)
+ much more flexible multiple function aggregation (GH642_, GH610_)
- New ordered_merge functions for merging DataFrames with ordered
- data. Also supports group-wise merging for panel data (#813)
+ data. Also supports group-wise merging for panel data (GH813_)
- Add keys() method to DataFrame
- Add flexible replace method for replacing potentially values to Series and
- DataFrame (#929, #1241)
- - Add 'kde' plot kind for Series/DataFrame.plot (#1059)
+ DataFrame (GH929_, GH1241_)
+ - Add 'kde' plot kind for Series/DataFrame.plot (GH1059_)
- More flexible multiple function aggregation with GroupBy
- Add pct_change function to Series/DataFrame
- - Add option to interpolate by Index values in Series.interpolate (#1206)
+ - Add option to interpolate by Index values in Series.interpolate (GH1206_)
- Add ``max_colwidth`` option for DataFrame, defaulting to 50
- - Conversion of DataFrame through rpy2 to R data.frame (#1282, )
- - Add keys() method on DataFrame (#1240)
- - Add new ``match`` function to API (similar to R) (#502)
- - Add dayfirst option to parsers (#854)
+ - Conversion of DataFrame through rpy2 to R data.frame (GH1282_)
+ - Add keys() method on DataFrame (GH1240_)
+ - Add new ``match`` function to API (similar to R) (GH502_)
+ - Add dayfirst option to parsers (GH854_)
- Add ``method`` argument to ``align`` method for forward/backward fillin
- (#216)
- - Add Panel.transpose method for rearranging axes (#695)
+ (GH216_)
+ - Add Panel.transpose method for rearranging axes (GH695_)
- Add new ``cut`` function (patterned after R) for discretizing data into
- equal range-length bins or arbitrary breaks of your choosing (#415)
- - Add new ``qcut`` for cutting with quantiles (#1378)
- - Add ``value_counts`` top level array method (#1392)
- - Added Andrews curves plot tupe (#1325)
- - Add lag plot (#1440)
- - Add autocorrelation_plot (#1425)
- - Add support for tox and Travis CI (#1382)
- - Add support for Categorical use in GroupBy (#292)
- - Add ``any`` and ``all`` methods to DataFrame (#1416)
+ equal range-length bins or arbitrary breaks of your choosing (GH415_)
+ - Add new ``qcut`` for cutting with quantiles (GH1378_)
+ - Add ``value_counts`` top level array method (GH1392_)
+ - Added Andrews curves plot type (GH1325_)
+ - Add lag plot (GH1440_)
+ - Add autocorrelation_plot (GH1425_)
+ - Add support for tox and Travis CI (GH1382_)
+ - Add support for Categorical use in GroupBy (GH292_)
+ - Add ``any`` and ``all`` methods to DataFrame (GH1416_)
- Add ``secondary_y`` option to Series.plot
- Add experimental ``lreshape`` function for reshaping wide to long
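The ``cut`` (GH415_) and ``qcut`` (GH1378_) entries above cover the two discretization modes: explicit bin edges versus quantile-based bins. A minimal sketch of both, using the API as it still exists in modern pandas:

```python
import pandas as pd

ages = pd.Series([3, 17, 25, 40, 62, 80])

# cut: bin by explicit (or equal-width) edges
by_edges = pd.cut(ages, bins=[0, 18, 65, 100])

# qcut: bin by quantiles, yielding roughly equal-sized groups
by_quartile = pd.qcut(ages, 4)

print(by_edges.value_counts().to_dict())
```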
@@ -711,47 +1118,47 @@ pandas 0.8.0
- Switch to klib/khash-based hash tables in Index classes for better
performance in many cases and lower memory footprint
- Shipping some functions from scipy.stats to reduce dependency,
- e.g. Series.describe and DataFrame.describe (GH #1092)
+ e.g. Series.describe and DataFrame.describe (GH1092_)
- Can create MultiIndex by passing list of lists or list of arrays to Series,
- DataFrame constructor, etc. (#831)
- - Can pass arrays in addition to column names to DataFrame.set_index (#402)
+ DataFrame constructor, etc. (GH831_)
+ - Can pass arrays in addition to column names to DataFrame.set_index (GH402_)
- Improve the speed of "square" reindexing of homogeneous DataFrame objects
- by significant margin (#836)
- - Handle more dtypes when passed MaskedArrays in DataFrame constructor (#406)
- - Improved performance of join operations on integer keys (#682)
+ by significant margin (GH836_)
+ - Handle more dtypes when passed MaskedArrays in DataFrame constructor (GH406_)
+ - Improved performance of join operations on integer keys (GH682_)
- Can pass multiple columns to GroupBy object, e.g. grouped[[col1, col2]] to
- only aggregate a subset of the value columns (#383)
- - Add histogram / kde plot options for scatter_matrix diagonals (#1237)
+ only aggregate a subset of the value columns (GH383_)
+ - Add histogram / kde plot options for scatter_matrix diagonals (GH1237_)
- Add inplace option to Series/DataFrame.rename and sort_index,
- DataFrame.drop_duplicates (#805, #207)
- - More helpful error message when nothing passed to Series.reindex (#1267)
- - Can mix array and scalars as dict-value inputs to DataFrame ctor (#1329)
+ DataFrame.drop_duplicates (GH805_, GH207_)
+ - More helpful error message when nothing passed to Series.reindex (GH1267_)
+ - Can mix array and scalars as dict-value inputs to DataFrame ctor (GH1329_)
- Use DataFrame columns' name for legend title in plots
- Preserve frequency in DatetimeIndex when possible in boolean indexing
operations
- - Promote datetime.date values in data alignment operations (#867)
- - Add ``order`` method to Index classes (#1028)
- - Avoid hash table creation in large monotonic hash table indexes (#1160)
- - Store time zones in HDFStore (#1232)
+ - Promote datetime.date values in data alignment operations (GH867_)
+ - Add ``order`` method to Index classes (GH1028_)
+ - Avoid hash table creation in large monotonic hash table indexes (GH1160_)
+ - Store time zones in HDFStore (GH1232_)
- Enable storage of sparse data structures in HDFStore (#85)
- Enable Series.asof to work with arrays of timestamp inputs
- - Cython implementation of DataFrame.corr speeds up by > 100x (#1349, #1354)
- - Exclude "nuisance" columns automatically in GroupBy.transform (#1364)
- - Support functions-as-strings in GroupBy.transform (#1362)
- - Use index name as xlabel/ylabel in plots (#1415)
+ - Cython implementation of DataFrame.corr speeds up by > 100x (GH1349_, GH1354_)
+ - Exclude "nuisance" columns automatically in GroupBy.transform (GH1364_)
+ - Support functions-as-strings in GroupBy.transform (GH1362_)
+ - Use index name as xlabel/ylabel in plots (GH1415_)
- Add ``convert_dtype`` option to Series.apply to be able to leave data as
- dtype=object (#1414)
- - Can specify all index level names in concat (#1419)
- - Add ``dialect`` keyword to parsers for quoting conventions (#1363)
- - Enable DataFrame[bool_DataFrame] += value (#1366)
+ dtype=object (GH1414_)
+ - Can specify all index level names in concat (GH1419_)
+ - Add ``dialect`` keyword to parsers for quoting conventions (GH1363_)
+ - Enable DataFrame[bool_DataFrame] += value (GH1366_)
- Add ``retries`` argument to ``get_data_yahoo`` to try to prevent Yahoo! API
- 404s (#826)
+ 404s (GH826_)
- Improve performance of reshaping by using O(N) categorical sorting
- - Series names will be used for index of DataFrame if no index passed (#1494)
+ - Series names will be used for index of DataFrame if no index passed (GH1494_)
- Header argument in DataFrame.to_csv can accept a list of column names to
- use instead of the object's columns (#921)
- - Add ``raise_conflict`` argument to DataFrame.update (#1526)
- - Support file-like objects in ExcelFile (#1529)
+ use instead of the object's columns (GH921_)
+ - Add ``raise_conflict`` argument to DataFrame.update (GH1526_)
+ - Support file-like objects in ExcelFile (GH1529_)
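One of the improvements above (GH1329_) allows mixing arrays and scalars as dict values in the DataFrame constructor. A minimal sketch of the broadcast behavior:

```python
import pandas as pd

# the scalar "USD" is broadcast to the length implied by the array-valued column
df = pd.DataFrame({"price": [1.5, 2.0, 3.25], "currency": "USD"})

print(df)
```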
**API Changes**
@@ -761,73 +1168,186 @@ pandas 0.8.0
- Frequency name overhaul, WEEKDAY/EOM and rules with @
deprecated. get_legacy_offset_name backwards compatibility function added
- Raise ValueError in DataFrame.__nonzero__, so "if df" no longer works
- (#1073)
- - Change BDay (business day) to not normalize dates by default (#506)
+ (GH1073_)
+ - Change BDay (business day) to not normalize dates by default (GH506_)
- Remove deprecated DataMatrix name
- Default merge suffixes for overlap now have underscores instead of periods
- to facilitate tab completion, etc. (#1239)
+ to facilitate tab completion, etc. (GH1239_)
- Deprecation of offset, time_rule timeRule parameters throughout codebase
- Series.append and DataFrame.append no longer check for duplicate indexes
- by default, add verify_integrity parameter (#1394)
+ by default, add verify_integrity parameter (GH1394_)
- Refactor Factor class, old constructor moved to Factor.from_array
- Modified internals of MultiIndex to use less memory (no longer represented
as array of tuples) internally, speed up construction time and many methods
- which construct intermediate hierarchical indexes (#1467)
+ which construct intermediate hierarchical indexes (GH1467_)
**Bug fixes**
- Fix OverflowError from storing pre-1970 dates in HDFStore by switching to
- datetime64 (GH #179)
+ datetime64 (GH179_)
- Fix logical error with February leap year end in YearEnd offset
- - Series([False, nan]) was getting casted to float64 (GH #1074)
+ - Series([False, nan]) was getting cast to float64 (GH1074_)
- Fix binary operations between boolean Series and object Series with
- booleans and NAs (GH #1074, #1079)
+ booleans and NAs (GH1074_, GH1079_)
- Couldn't assign whole array to column in mixed-type DataFrame via .ix
- (#1142)
- - Fix label slicing issues with float index values (#1167)
- - Fix segfault caused by empty groups passed to groupby (#1048)
- - Fix occasionally misbehaved reindexing in the presence of NaN labels (#522)
- - Fix imprecise logic causing weird Series results from .apply (#1183)
+ (GH1142_)
+ - Fix label slicing issues with float index values (GH1167_)
+ - Fix segfault caused by empty groups passed to groupby (GH1048_)
+ - Fix occasionally misbehaved reindexing in the presence of NaN labels (GH522_)
+ - Fix imprecise logic causing weird Series results from .apply (GH1183_)
- Unstack multiple levels in one shot, avoiding empty columns in some
- cases. Fix pivot table bug (#1181)
+ cases. Fix pivot table bug (GH1181_)
- Fix formatting of MultiIndex on Series/DataFrame when index name coincides
- with label (#1217)
- - Handle Excel 2003 #N/A as NaN from xlrd (#1213, #1225)
+ with label (GH1217_)
+ - Handle Excel 2003 #N/A as NaN from xlrd (GH1213_, GH1225_)
- Fix timestamp locale-related deserialization issues with HDFStore by moving
- to datetime64 representation (#1081, #809)
- - Fix DataFrame.duplicated/drop_duplicates NA value handling (#557)
- - Actually raise exceptions in fast reducer (#1243)
- - Fix various timezone-handling bugs from 0.7.3 (#969)
- - GroupBy on level=0 discarded index name (#1313)
- - Better error message with unmergeable DataFrames (#1307)
- - Series.__repr__ alignment fix with unicode index values (#1279)
- - Better error message if nothing passed to reindex (#1267)
- - More robust NA handling in DataFrame.drop_duplicates (#557)
+ to datetime64 representation (GH1081_, GH809_)
+ - Fix DataFrame.duplicated/drop_duplicates NA value handling (GH557_)
+ - Actually raise exceptions in fast reducer (GH1243_)
+ - Fix various timezone-handling bugs from 0.7.3 (GH969_)
+ - GroupBy on level=0 discarded index name (GH1313_)
+ - Better error message with unmergeable DataFrames (GH1307_)
+ - Series.__repr__ alignment fix with unicode index values (GH1279_)
+ - Better error message if nothing passed to reindex (GH1267_)
+ - More robust NA handling in DataFrame.drop_duplicates (GH557_)
- Resolve locale-based and pre-epoch HDF5 timestamp deserialization issues
- (#973, #1081, #179)
- - Implement Series.repeat (#1229)
- - Fix indexing with namedtuple and other tuple subclasses (#1026)
- - Fix float64 slicing bug (#1167)
- - Parsing integers with commas (#796)
- - Fix groupby improper data type when group consists of one value (#1065)
+ (GH973_, GH1081_, GH179_)
+ - Implement Series.repeat (GH1229_)
+ - Fix indexing with namedtuple and other tuple subclasses (GH1026_)
+ - Fix float64 slicing bug (GH1167_)
+ - Parsing integers with commas (GH796_)
+ - Fix groupby improper data type when group consists of one value (GH1065_)
- Fix negative variance possibility in nanvar resulting from floating point
- error (#1090)
- - Consistently set name on groupby pieces (#184)
- - Treat dict return values as Series in GroupBy.apply (#823)
- - Respect column selection for DataFrame in in GroupBy.transform (#1365)
- - Fix MultiIndex partial indexing bug (#1352)
- - Enable assignment of rows in mixed-type DataFrame via .ix (#1432)
- - Reset index mapping when grouping Series in Cython (#1423)
- - Fix outer/inner DataFrame.join with non-unique indexes (#1421)
- - Fix MultiIndex groupby bugs with empty lower levels (#1401)
- - Calling fillna with a Series will have same behavior as with dict (#1486)
- - SparseSeries reduction bug (#1375)
- - Fix unicode serialization issue in HDFStore (#1361)
- - Pass keywords to pyplot.boxplot in DataFrame.boxplot (#1493)
- - Bug fixes in MonthBegin (#1483)
- - Preserve MultiIndex names in drop (#1513)
- - Fix Panel DataFrame slice-assignment bug (#1533)
- - Don't use locals() in read_* functions (#1547)
+ error (GH1090_)
+ - Consistently set name on groupby pieces (GH184_)
+ - Treat dict return values as Series in GroupBy.apply (GH823_)
+ - Respect column selection for DataFrame in GroupBy.transform (GH1365_)
+ - Fix MultiIndex partial indexing bug (GH1352_)
+ - Enable assignment of rows in mixed-type DataFrame via .ix (GH1432_)
+ - Reset index mapping when grouping Series in Cython (GH1423_)
+ - Fix outer/inner DataFrame.join with non-unique indexes (GH1421_)
+ - Fix MultiIndex groupby bugs with empty lower levels (GH1401_)
+ - Calling fillna with a Series will have same behavior as with dict (GH1486_)
+ - SparseSeries reduction bug (GH1375_)
+ - Fix unicode serialization issue in HDFStore (GH1361_)
+ - Pass keywords to pyplot.boxplot in DataFrame.boxplot (GH1493_)
+ - Bug fixes in MonthBegin (GH1483_)
+ - Preserve MultiIndex names in drop (GH1513_)
+ - Fix Panel DataFrame slice-assignment bug (GH1533_)
+ - Don't use locals() in read_* functions (GH1547_)
+
+.. _GH222: https://github.com/pydata/pandas/issues/222
+.. _GH1018: https://github.com/pydata/pandas/issues/1018
+.. _GH391: https://github.com/pydata/pandas/issues/391
+.. _GH864: https://github.com/pydata/pandas/issues/864
+.. _GH825: https://github.com/pydata/pandas/issues/825
+.. _GH961: https://github.com/pydata/pandas/issues/961
+.. _GH502: https://github.com/pydata/pandas/issues/502
+.. _GH994: https://github.com/pydata/pandas/issues/994
+.. _GH1043: https://github.com/pydata/pandas/issues/1043
+.. _GH1227: https://github.com/pydata/pandas/issues/1227
+.. _GH1186: https://github.com/pydata/pandas/issues/1186
+.. _GH350: https://github.com/pydata/pandas/issues/350
+.. _GH1212: https://github.com/pydata/pandas/issues/1212
+.. _GH610: https://github.com/pydata/pandas/issues/610
+.. _GH642: https://github.com/pydata/pandas/issues/642
+.. _GH813: https://github.com/pydata/pandas/issues/813
+.. _GH929: https://github.com/pydata/pandas/issues/929
+.. _GH1241: https://github.com/pydata/pandas/issues/1241
+.. _GH1059: https://github.com/pydata/pandas/issues/1059
+.. _GH1206: https://github.com/pydata/pandas/issues/1206
+.. _GH1282: https://github.com/pydata/pandas/issues/1282
+.. _GH1240: https://github.com/pydata/pandas/issues/1240
+.. _GH854: https://github.com/pydata/pandas/issues/854
+.. _GH216: https://github.com/pydata/pandas/issues/216
+.. _GH695: https://github.com/pydata/pandas/issues/695
+.. _GH415: https://github.com/pydata/pandas/issues/415
+.. _GH1378: https://github.com/pydata/pandas/issues/1378
+.. _GH1392: https://github.com/pydata/pandas/issues/1392
+.. _GH1325: https://github.com/pydata/pandas/issues/1325
+.. _GH1440: https://github.com/pydata/pandas/issues/1440
+.. _GH1425: https://github.com/pydata/pandas/issues/1425
+.. _GH1382: https://github.com/pydata/pandas/issues/1382
+.. _GH292: https://github.com/pydata/pandas/issues/292
+.. _GH1416: https://github.com/pydata/pandas/issues/1416
+.. _GH1092: https://github.com/pydata/pandas/issues/1092
+.. _GH831: https://github.com/pydata/pandas/issues/831
+.. _GH402: https://github.com/pydata/pandas/issues/402
+.. _GH836: https://github.com/pydata/pandas/issues/836
+.. _GH406: https://github.com/pydata/pandas/issues/406
+.. _GH682: https://github.com/pydata/pandas/issues/682
+.. _GH383: https://github.com/pydata/pandas/issues/383
+.. _GH1237: https://github.com/pydata/pandas/issues/1237
+.. _GH805: https://github.com/pydata/pandas/issues/805
+.. _GH207: https://github.com/pydata/pandas/issues/207
+.. _GH1267: https://github.com/pydata/pandas/issues/1267
+.. _GH1329: https://github.com/pydata/pandas/issues/1329
+.. _GH867: https://github.com/pydata/pandas/issues/867
+.. _GH1028: https://github.com/pydata/pandas/issues/1028
+.. _GH1160: https://github.com/pydata/pandas/issues/1160
+.. _GH1232: https://github.com/pydata/pandas/issues/1232
+.. _GH1349: https://github.com/pydata/pandas/issues/1349
+.. _GH1354: https://github.com/pydata/pandas/issues/1354
+.. _GH1364: https://github.com/pydata/pandas/issues/1364
+.. _GH1362: https://github.com/pydata/pandas/issues/1362
+.. _GH1415: https://github.com/pydata/pandas/issues/1415
+.. _GH1414: https://github.com/pydata/pandas/issues/1414
+.. _GH1419: https://github.com/pydata/pandas/issues/1419
+.. _GH1363: https://github.com/pydata/pandas/issues/1363
+.. _GH1366: https://github.com/pydata/pandas/issues/1366
+.. _GH826: https://github.com/pydata/pandas/issues/826
+.. _GH1494: https://github.com/pydata/pandas/issues/1494
+.. _GH921: https://github.com/pydata/pandas/issues/921
+.. _GH1526: https://github.com/pydata/pandas/issues/1526
+.. _GH1529: https://github.com/pydata/pandas/issues/1529
+.. _GH1073: https://github.com/pydata/pandas/issues/1073
+.. _GH506: https://github.com/pydata/pandas/issues/506
+.. _GH1239: https://github.com/pydata/pandas/issues/1239
+.. _GH1394: https://github.com/pydata/pandas/issues/1394
+.. _GH1467: https://github.com/pydata/pandas/issues/1467
+.. _GH179: https://github.com/pydata/pandas/issues/179
+.. _GH1074: https://github.com/pydata/pandas/issues/1074
+.. _GH1079: https://github.com/pydata/pandas/issues/1079
+.. _GH1142: https://github.com/pydata/pandas/issues/1142
+.. _GH1167: https://github.com/pydata/pandas/issues/1167
+.. _GH1048: https://github.com/pydata/pandas/issues/1048
+.. _GH522: https://github.com/pydata/pandas/issues/522
+.. _GH1183: https://github.com/pydata/pandas/issues/1183
+.. _GH1181: https://github.com/pydata/pandas/issues/1181
+.. _GH1217: https://github.com/pydata/pandas/issues/1217
+.. _GH1213: https://github.com/pydata/pandas/issues/1213
+.. _GH1225: https://github.com/pydata/pandas/issues/1225
+.. _GH1081: https://github.com/pydata/pandas/issues/1081
+.. _GH809: https://github.com/pydata/pandas/issues/809
+.. _GH557: https://github.com/pydata/pandas/issues/557
+.. _GH1243: https://github.com/pydata/pandas/issues/1243
+.. _GH969: https://github.com/pydata/pandas/issues/969
+.. _GH1313: https://github.com/pydata/pandas/issues/1313
+.. _GH1307: https://github.com/pydata/pandas/issues/1307
+.. _GH1279: https://github.com/pydata/pandas/issues/1279
+.. _GH973: https://github.com/pydata/pandas/issues/973
+.. _GH1229: https://github.com/pydata/pandas/issues/1229
+.. _GH1026: https://github.com/pydata/pandas/issues/1026
+.. _GH796: https://github.com/pydata/pandas/issues/796
+.. _GH1065: https://github.com/pydata/pandas/issues/1065
+.. _GH1090: https://github.com/pydata/pandas/issues/1090
+.. _GH184: https://github.com/pydata/pandas/issues/184
+.. _GH823: https://github.com/pydata/pandas/issues/823
+.. _GH1365: https://github.com/pydata/pandas/issues/1365
+.. _GH1352: https://github.com/pydata/pandas/issues/1352
+.. _GH1432: https://github.com/pydata/pandas/issues/1432
+.. _GH1423: https://github.com/pydata/pandas/issues/1423
+.. _GH1421: https://github.com/pydata/pandas/issues/1421
+.. _GH1401: https://github.com/pydata/pandas/issues/1401
+.. _GH1486: https://github.com/pydata/pandas/issues/1486
+.. _GH1375: https://github.com/pydata/pandas/issues/1375
+.. _GH1361: https://github.com/pydata/pandas/issues/1361
+.. _GH1493: https://github.com/pydata/pandas/issues/1493
+.. _GH1483: https://github.com/pydata/pandas/issues/1483
+.. _GH1513: https://github.com/pydata/pandas/issues/1513
+.. _GH1533: https://github.com/pydata/pandas/issues/1533
+.. _GH1547: https://github.com/pydata/pandas/issues/1547
+
pandas 0.7.3
============
@@ -837,66 +1357,109 @@ pandas 0.7.3
**New features / modules**
- Support for non-unique indexes: indexing and selection, many-to-one and
- many-to-many joins (#1306)
- - Added fixed-width file reader, read_fwf (PR #952)
+ many-to-many joins (GH1306_)
+ - Added fixed-width file reader, read_fwf (GH952_)
- Add group_keys argument to groupby to not add group names to MultiIndex in
- result of apply (GH #938)
- - DataFrame can now accept non-integer label slicing (GH #946). Previously
+ result of apply (GH938_)
+ - DataFrame can now accept non-integer label slicing (GH946_). Previously
only DataFrame.ix was able to do so.
- - DataFrame.apply now retains name attributes on Series objects (GH #983)
+ - DataFrame.apply now retains name attributes on Series objects (GH983_)
- Numeric DataFrame comparisons with non-numeric values now raises proper
- TypeError (GH #943). Previously raise "PandasError: DataFrame constructor
+ TypeError (GH943_). Previously raised "PandasError: DataFrame constructor
not properly called!"
- - Add ``kurt`` methods to Series and DataFrame (PR #964)
- - Can pass dict of column -> list/set NA values for text parsers (GH #754)
- - Allows users specified NA values in text parsers (GH #754)
+ - Add ``kurt`` methods to Series and DataFrame (GH964_)
+ - Can pass dict of column -> list/set NA values for text parsers (GH754_)
+ - Allow user-specified NA values in text parsers (GH754_)
- Parsers checks for openpyxl dependency and raises ImportError if not found
- (PR #1007)
+ (GH1007_)
- New factory function to create HDFStore objects that can be used in a with
- statement so users do not have to explicitly call HDFStore.close (PR #1005)
- - pivot_table is now more flexible with same parameters as groupby (GH #941)
- - Added stacked bar plots (GH #987)
- - scatter_matrix method in pandas/tools/plotting.py (PR #935)
- - DataFrame.boxplot returns plot results for ex-post styling (GH #985)
- - Short version number accessible as pandas.version.short_version (GH #930)
- - Additional documentation in panel.to_frame (GH #942)
+ statement so users do not have to explicitly call HDFStore.close (GH1005_)
+ - pivot_table is now more flexible with same parameters as groupby (GH941_)
+ - Added stacked bar plots (GH987_)
+ - scatter_matrix method in pandas/tools/plotting.py (GH935_)
+ - DataFrame.boxplot returns plot results for ex-post styling (GH985_)
+ - Short version number accessible as pandas.version.short_version (GH930_)
+ - Additional documentation in panel.to_frame (GH942_)
- More informative Series.apply docstring regarding element-wise apply
- (GH #977)
- - Notes on rpy2 installation (GH #1006)
- - Add rotation and font size options to hist method (#1012)
+ (GH977_)
+ - Notes on rpy2 installation (GH1006_)
+ - Add rotation and font size options to hist method (GH1012_)
- Use exogenous / X variable index in result of OLS.y_predict. Add
- OLS.predict method (PR #1027, #1008)
+ OLS.predict method (GH1027_, GH1008_)
**API Changes**
- Calling apply on grouped Series, e.g. describe(), will no longer yield
DataFrame by default. Will have to call unstack() to get prior behavior
- - NA handling in non-numeric comparisons has been tightened up (#933, #953)
- - No longer assign dummy names key_0, key_1, etc. to groupby index (#1291)
+ - NA handling in non-numeric comparisons has been tightened up (GH933_, GH953_)
+ - No longer assign dummy names key_0, key_1, etc. to groupby index (GH1291_)
**Bug fixes**
- Fix logic error when selecting part of a row in a DataFrame with a
- MultiIndex index (GH #1013)
- - Series comparison with Series of differing length causes crash (GH #1016).
+ MultiIndex index (GH1013_)
+ - Series comparison with Series of differing length causes crash (GH1016_).
- Fix bug in indexing when selecting section of hierarchically-indexed row
- (GH #1013)
- - DataFrame.plot(logy=True) has no effect (GH #1011).
- - Broken arithmetic operations between SparsePanel-Panel (GH #1015)
- - Unicode repr issues in MultiIndex with non-ascii characters (GH #1010)
+ (GH1013_)
+ - DataFrame.plot(logy=True) has no effect (GH1011_).
+ - Broken arithmetic operations between SparsePanel-Panel (GH1015_)
+ - Unicode repr issues in MultiIndex with non-ascii characters (GH1010_)
- DataFrame.lookup() returns inconsistent results if exact match not present
- (GH #1001)
- - DataFrame arithmetic operations not treating None as NA (GH #992)
- - DataFrameGroupBy.apply returns incorrect result (GH #991)
- - Series.reshape returns incorrect result for multiple dimensions (GH #989)
- - Series.std and Series.var ignores ddof parameter (GH #934)
- - DataFrame.append loses index names (GH #980)
- - DataFrame.plot(kind='bar') ignores color argument (GH #958)
- - Inconsistent Index comparison results (GH #948)
- - Improper int dtype DataFrame construction from data with NaN (GH #846)
- - Removes default 'result' name in grouby results (GH #995)
- - DataFrame.from_records no longer mutate input columns (PR #975)
- - Use Index name when grouping by it (#1313)
+ (GH1001_)
+ - DataFrame arithmetic operations not treating None as NA (GH992_)
+ - DataFrameGroupBy.apply returns incorrect result (GH991_)
+ - Series.reshape returns incorrect result for multiple dimensions (GH989_)
+ - Series.std and Series.var ignores ddof parameter (GH934_)
+ - DataFrame.append loses index names (GH980_)
+ - DataFrame.plot(kind='bar') ignores color argument (GH958_)
+ - Inconsistent Index comparison results (GH948_)
+ - Improper int dtype DataFrame construction from data with NaN (GH846_)
+ - Removes default 'result' name in groupby results (GH995_)
+ - DataFrame.from_records no longer mutates input columns (GH975_)
+ - Use Index name when grouping by it (GH1313_)
+
+.. _GH1306: https://github.com/pydata/pandas/issues/1306
+.. _GH952: https://github.com/pydata/pandas/issues/952
+.. _GH938: https://github.com/pydata/pandas/issues/938
+.. _GH946: https://github.com/pydata/pandas/issues/946
+.. _GH983: https://github.com/pydata/pandas/issues/983
+.. _GH943: https://github.com/pydata/pandas/issues/943
+.. _GH964: https://github.com/pydata/pandas/issues/964
+.. _GH754: https://github.com/pydata/pandas/issues/754
+.. _GH1007: https://github.com/pydata/pandas/issues/1007
+.. _GH1005: https://github.com/pydata/pandas/issues/1005
+.. _GH941: https://github.com/pydata/pandas/issues/941
+.. _GH987: https://github.com/pydata/pandas/issues/987
+.. _GH935: https://github.com/pydata/pandas/issues/935
+.. _GH985: https://github.com/pydata/pandas/issues/985
+.. _GH930: https://github.com/pydata/pandas/issues/930
+.. _GH942: https://github.com/pydata/pandas/issues/942
+.. _GH977: https://github.com/pydata/pandas/issues/977
+.. _GH1006: https://github.com/pydata/pandas/issues/1006
+.. _GH1012: https://github.com/pydata/pandas/issues/1012
+.. _GH1027: https://github.com/pydata/pandas/issues/1027
+.. _GH1008: https://github.com/pydata/pandas/issues/1008
+.. _GH933: https://github.com/pydata/pandas/issues/933
+.. _GH953: https://github.com/pydata/pandas/issues/953
+.. _GH1291: https://github.com/pydata/pandas/issues/1291
+.. _GH1013: https://github.com/pydata/pandas/issues/1013
+.. _GH1016: https://github.com/pydata/pandas/issues/1016
+.. _GH1011: https://github.com/pydata/pandas/issues/1011
+.. _GH1015: https://github.com/pydata/pandas/issues/1015
+.. _GH1010: https://github.com/pydata/pandas/issues/1010
+.. _GH1001: https://github.com/pydata/pandas/issues/1001
+.. _GH992: https://github.com/pydata/pandas/issues/992
+.. _GH991: https://github.com/pydata/pandas/issues/991
+.. _GH989: https://github.com/pydata/pandas/issues/989
+.. _GH934: https://github.com/pydata/pandas/issues/934
+.. _GH980: https://github.com/pydata/pandas/issues/980
+.. _GH958: https://github.com/pydata/pandas/issues/958
+.. _GH948: https://github.com/pydata/pandas/issues/948
+.. _GH846: https://github.com/pydata/pandas/issues/846
+.. _GH995: https://github.com/pydata/pandas/issues/995
+.. _GH975: https://github.com/pydata/pandas/issues/975
+.. _GH1313: https://github.com/pydata/pandas/issues/1313
+
pandas 0.7.2
============
@@ -905,55 +1468,93 @@ pandas 0.7.2
**New features / modules**
- - Add additional tie-breaking methods in DataFrame.rank (#874)
- - Add ascending parameter to rank in Series, DataFrame (#875)
- - Add coerce_float option to DataFrame.from_records (#893)
- - Add sort_columns parameter to allow unsorted plots (#918)
+ - Add additional tie-breaking methods in DataFrame.rank (GH874_)
+ - Add ascending parameter to rank in Series, DataFrame (GH875_)
+ - Add coerce_float option to DataFrame.from_records (GH893_)
+ - Add sort_columns parameter to allow unsorted plots (GH918_)
- IPython tab completion on GroupBy objects
**API Changes**
- Series.sum returns 0 instead of NA when called on an empty
series. Analogously for a DataFrame whose rows or columns are length 0
- (#844)
+ (GH844_)
**Improvements to existing features**
- - Don't use groups dict in Grouper.size (#860)
- - Use khash for Series.value_counts, add raw function to algorithms.py (#861)
- - Enable column access via attributes on GroupBy (#882)
+ - Don't use groups dict in Grouper.size (GH860_)
+ - Use khash for Series.value_counts, add raw function to algorithms.py (GH861_)
+ - Enable column access via attributes on GroupBy (GH882_)
- Enable setting existing columns (only) via attributes on DataFrame, Panel
- (#883)
- - Intercept __builtin__.sum in groupby (#885)
- - Can pass dict to DataFrame.fillna to use different values per column (#661)
+ (GH883_)
+ - Intercept __builtin__.sum in groupby (GH885_)
+ - Can pass dict to DataFrame.fillna to use different values per column (GH661_)
- Can select multiple hierarchical groups by passing list of values in .ix
- (#134)
- - Add level keyword to ``drop`` for dropping values from a level (GH #159)
+ (GH134_)
+ - Add level keyword to ``drop`` for dropping values from a level (GH159_)
- - Add ``coerce_float`` option on DataFrame.from_records (# 893)
+ - Add ``coerce_float`` option on DataFrame.from_records (GH893_)
- Raise exception if passed date_parser fails in ``read_csv``
- - Add ``axis`` option to DataFrame.fillna (#174)
- - Fixes to Panel to make it easier to subclass (PR #888)
+ - Add ``axis`` option to DataFrame.fillna (GH174_)
+ - Fixes to Panel to make it easier to subclass (GH888_)
**Bug fixes**
- - Fix overflow-related bugs in groupby (#850, #851)
- - Fix unhelpful error message in parsers (#856)
- - Better err msg for failed boolean slicing of dataframe (#859)
- - Series.count cannot accept a string (level name) in the level argument (#869)
- - Group index platform int check (#870)
- - concat on axis=1 and ignore_index=True raises TypeError (#871)
- - Further unicode handling issues resolved (#795)
- - Fix failure in multiindex-based access in Panel (#880)
- - Fix DataFrame boolean slice assignment failure (#881)
- - Fix combineAdd NotImplementedError for SparseDataFrame (#887)
- - Fix DataFrame.to_html encoding and columns (#890, #891, #909)
- - Fix na-filling handling in mixed-type DataFrame (#910)
- - Fix to DataFrame.set_value with non-existant row/col (#911)
- - Fix malformed block in groupby when excluding nuisance columns (#916)
- - Fix inconsistant NA handling in dtype=object arrays (#925)
- - Fix missing center-of-mass computation in ewmcov (#862)
- - Don't raise exception when opening read-only HDF5 file (#847)
- - Fix possible out-of-bounds memory access in 0-length Series (#917)
+ - Fix overflow-related bugs in groupby (GH850_, GH851_)
+ - Fix unhelpful error message in parsers (GH856_)
+ - Better err msg for failed boolean slicing of dataframe (GH859_)
+ - Series.count cannot accept a string (level name) in the level argument (GH869_)
+ - Group index platform int check (GH870_)
+ - concat on axis=1 and ignore_index=True raises TypeError (GH871_)
+ - Further unicode handling issues resolved (GH795_)
+ - Fix failure in multiindex-based access in Panel (GH880_)
+ - Fix DataFrame boolean slice assignment failure (GH881_)
+ - Fix combineAdd NotImplementedError for SparseDataFrame (GH887_)
+ - Fix DataFrame.to_html encoding and columns (GH890_, GH891_, GH909_)
+ - Fix na-filling handling in mixed-type DataFrame (GH910_)
+ - Fix to DataFrame.set_value with non-existent row/col (GH911_)
+ - Fix malformed block in groupby when excluding nuisance columns (GH916_)
+ - Fix inconsistent NA handling in dtype=object arrays (GH925_)
+ - Fix missing center-of-mass computation in ewmcov (GH862_)
+ - Don't raise exception when opening read-only HDF5 file (GH847_)
+ - Fix possible out-of-bounds memory access in 0-length Series (GH917_)
+
+.. _GH874: https://github.com/pydata/pandas/issues/874
+.. _GH875: https://github.com/pydata/pandas/issues/875
+.. _GH893: https://github.com/pydata/pandas/issues/893
+.. _GH918: https://github.com/pydata/pandas/issues/918
+.. _GH844: https://github.com/pydata/pandas/issues/844
+.. _GH860: https://github.com/pydata/pandas/issues/860
+.. _GH861: https://github.com/pydata/pandas/issues/861
+.. _GH882: https://github.com/pydata/pandas/issues/882
+.. _GH883: https://github.com/pydata/pandas/issues/883
+.. _GH885: https://github.com/pydata/pandas/issues/885
+.. _GH661: https://github.com/pydata/pandas/issues/661
+.. _GH134: https://github.com/pydata/pandas/issues/134
+.. _GH159: https://github.com/pydata/pandas/issues/159
+.. _GH174: https://github.com/pydata/pandas/issues/174
+.. _GH888: https://github.com/pydata/pandas/issues/888
+.. _GH850: https://github.com/pydata/pandas/issues/850
+.. _GH851: https://github.com/pydata/pandas/issues/851
+.. _GH856: https://github.com/pydata/pandas/issues/856
+.. _GH859: https://github.com/pydata/pandas/issues/859
+.. _GH869: https://github.com/pydata/pandas/issues/869
+.. _GH870: https://github.com/pydata/pandas/issues/870
+.. _GH871: https://github.com/pydata/pandas/issues/871
+.. _GH795: https://github.com/pydata/pandas/issues/795
+.. _GH880: https://github.com/pydata/pandas/issues/880
+.. _GH881: https://github.com/pydata/pandas/issues/881
+.. _GH887: https://github.com/pydata/pandas/issues/887
+.. _GH890: https://github.com/pydata/pandas/issues/890
+.. _GH891: https://github.com/pydata/pandas/issues/891
+.. _GH909: https://github.com/pydata/pandas/issues/909
+.. _GH910: https://github.com/pydata/pandas/issues/910
+.. _GH911: https://github.com/pydata/pandas/issues/911
+.. _GH916: https://github.com/pydata/pandas/issues/916
+.. _GH925: https://github.com/pydata/pandas/issues/925
+.. _GH862: https://github.com/pydata/pandas/issues/862
+.. _GH847: https://github.com/pydata/pandas/issues/847
+.. _GH917: https://github.com/pydata/pandas/issues/917
+
pandas 0.7.1
============
@@ -963,46 +1564,74 @@ pandas 0.7.1
**New features / modules**
- Add ``to_clipboard`` function to pandas namespace for writing objects to
- the system clipboard (#774)
+ the system clipboard (GH774_)
- Add ``itertuples`` method to DataFrame for iterating through the rows of a
- dataframe as tuples (#818)
+ dataframe as tuples (GH818_)
- Add ability to pass fill_value and method to DataFrame and Series align
- method (#806, #807)
- - Add fill_value option to reindex, align methods (#784)
- - Enable concat to produce DataFrame from Series (#787)
- - Add ``between`` method to Series (#802)
+ method (GH806_, GH807_)
+ - Add fill_value option to reindex, align methods (GH784_)
+ - Enable concat to produce DataFrame from Series (GH787_)
+ - Add ``between`` method to Series (GH802_)
- Add HTML representation hook to DataFrame for the IPython HTML notebook
- (#773)
+ (GH773_)
- Support for reading Excel 2007 XML documents using openpyxl
**Improvements to existing features**
- Improve performance and memory usage of fillna on DataFrame
- - Can concatenate a list of Series along axis=1 to obtain a DataFrame (#787)
+ - Can concatenate a list of Series along axis=1 to obtain a DataFrame (GH787_)
**Bug fixes**
- Fix memory leak when inserting large number of columns into a single
- DataFrame (#790)
+ DataFrame (GH790_)
- Appending length-0 DataFrame with new columns would not result in those new
- columns being part of the resulting concatenated DataFrame (#782)
+ columns being part of the resulting concatenated DataFrame (GH782_)
- Fixed groupby corner case when passing dictionary grouper and as_index is
- False (#819)
- - Fixed bug whereby bool array sometimes had object dtype (#820)
- - Fix exception thrown on np.diff (#816)
- - Fix to_records where columns are non-strings (#822)
- - Fix Index.intersection where indices have incomparable types (#811)
- - Fix ExcelFile throwing an exception for two-line file (#837)
- - Add clearer error message in csv parser (#835)
- - Fix loss of fractional seconds in HDFStore (#513)
- - Fix DataFrame join where columns have datetimes (#787)
- - Work around numpy performance issue in take (#817)
- - Improve comparison operations for NA-friendliness (#801)
- - Fix indexing operation for floating point values (#780, #798)
- - Fix groupby case resulting in malformed dataframe (#814)
- - Fix behavior of reindex of Series dropping name (#812)
- - Improve on redudant groupby computation (#775)
- - Catch possible NA assignment to int/bool series with exception (#839)
+ False (GH819_)
+ - Fixed bug whereby bool array sometimes had object dtype (GH820_)
+ - Fix exception thrown on np.diff (GH816_)
+ - Fix to_records where columns are non-strings (GH822_)
+ - Fix Index.intersection where indices have incomparable types (GH811_)
+ - Fix ExcelFile throwing an exception for two-line file (GH837_)
+ - Add clearer error message in csv parser (GH835_)
+ - Fix loss of fractional seconds in HDFStore (GH513_)
+ - Fix DataFrame join where columns have datetimes (GH787_)
+ - Work around numpy performance issue in take (GH817_)
+ - Improve comparison operations for NA-friendliness (GH801_)
+ - Fix indexing operation for floating point values (GH780_, GH798_)
+ - Fix groupby case resulting in malformed dataframe (GH814_)
+ - Fix behavior of reindex of Series dropping name (GH812_)
+ - Improve on redundant groupby computation (GH775_)
+ - Catch possible NA assignment to int/bool series with exception (GH839_)
+
+.. _GH774: https://github.com/pydata/pandas/issues/774
+.. _GH818: https://github.com/pydata/pandas/issues/818
+.. _GH806: https://github.com/pydata/pandas/issues/806
+.. _GH807: https://github.com/pydata/pandas/issues/807
+.. _GH784: https://github.com/pydata/pandas/issues/784
+.. _GH787: https://github.com/pydata/pandas/issues/787
+.. _GH802: https://github.com/pydata/pandas/issues/802
+.. _GH773: https://github.com/pydata/pandas/issues/773
+.. _GH790: https://github.com/pydata/pandas/issues/790
+.. _GH782: https://github.com/pydata/pandas/issues/782
+.. _GH819: https://github.com/pydata/pandas/issues/819
+.. _GH820: https://github.com/pydata/pandas/issues/820
+.. _GH816: https://github.com/pydata/pandas/issues/816
+.. _GH822: https://github.com/pydata/pandas/issues/822
+.. _GH811: https://github.com/pydata/pandas/issues/811
+.. _GH837: https://github.com/pydata/pandas/issues/837
+.. _GH835: https://github.com/pydata/pandas/issues/835
+.. _GH513: https://github.com/pydata/pandas/issues/513
+.. _GH817: https://github.com/pydata/pandas/issues/817
+.. _GH801: https://github.com/pydata/pandas/issues/801
+.. _GH780: https://github.com/pydata/pandas/issues/780
+.. _GH798: https://github.com/pydata/pandas/issues/798
+.. _GH814: https://github.com/pydata/pandas/issues/814
+.. _GH812: https://github.com/pydata/pandas/issues/812
+.. _GH775: https://github.com/pydata/pandas/issues/775
+.. _GH839: https://github.com/pydata/pandas/issues/839
+
pandas 0.7.0
============
@@ -1013,16 +1642,16 @@ pandas 0.7.0
- New ``merge`` function for efficiently performing full gamut of database /
relational-algebra operations. Refactored existing join methods to use the
- new infrastructure, resulting in substantial performance gains (GH #220,
- #249, #267)
+ new infrastructure, resulting in substantial performance gains (GH220_,
+ GH249_, GH267_)
- New ``concat`` function for concatenating DataFrame or Panel objects along
an axis. Can form union or intersection of the other axes. Improves
- performance of ``DataFrame.append`` (#468, #479, #273)
- - Handle differently-indexed output values in ``DataFrame.apply`` (GH #498)
+ performance of ``DataFrame.append`` (GH468_, GH479_, GH273_)
+ - Handle differently-indexed output values in ``DataFrame.apply`` (GH498_)
- Can pass list of dicts (e.g., a list of shallow JSON objects) to DataFrame
- constructor (GH #526)
- - Add ``reorder_levels`` method to Series and DataFrame (PR #534)
- - Add dict-like ``get`` function to DataFrame and Panel (PR #521)
+ constructor (GH526_)
+ - Add ``reorder_levels`` method to Series and DataFrame (GH534_)
+ - Add dict-like ``get`` function to DataFrame and Panel (GH521_)
- ``DataFrame.iterrows`` method for efficiently iterating through the rows of
a DataFrame
- Added ``DataFrame.to_panel`` with code adapted from ``LongPanel.to_long``
@@ -1030,56 +1659,56 @@ pandas 0.7.0
- Add ``level`` option to binary arithmetic functions on ``DataFrame`` and
``Series``
- Add ``level`` option to the ``reindex`` and ``align`` methods on Series and
- DataFrame for broadcasting values across a level (GH #542, PR #552, others)
+ DataFrame for broadcasting values across a level (GH542_, GH552_, others)
- - Add attribute-based item access to ``Panel`` and add IPython completion (PR
- #554)
+ - Add attribute-based item access to ``Panel`` and add IPython completion
+ (GH554_)
- Add ``logy`` option to ``Series.plot`` for log-scaling on the Y axis
- Add ``index``, ``header``, and ``justify`` options to
- ``DataFrame.to_string``. Add option to (GH #570, GH #571)
- - Can pass multiple DataFrames to ``DataFrame.join`` to join on index (GH #115)
- - Can pass multiple Panels to ``Panel.join`` (GH #115)
+ ``DataFrame.to_string``. Add option to (GH570_, GH571_)
+ - Can pass multiple DataFrames to ``DataFrame.join`` to join on index (GH115_)
+ - Can pass multiple Panels to ``Panel.join`` (GH115_)
- Can pass multiple DataFrames to `DataFrame.append` to concatenate (stack)
and multiple Series to ``Series.append`` too
- Added ``justify`` argument to ``DataFrame.to_string`` to allow different
alignment of column headers
- Add ``sort`` option to GroupBy to allow disabling sorting of the group keys
- for potential speedups (GH #595)
- - Can pass MaskedArray to Series constructor (PR #563)
- - Add Panel item access via attributes and IPython completion (GH #554)
+ for potential speedups (GH595_)
+ - Can pass MaskedArray to Series constructor (GH563_)
+ - Add Panel item access via attributes and IPython completion (GH554_)
- Implement ``DataFrame.lookup``, fancy-indexing analogue for retrieving
- values given a sequence of row and column labels (GH #338)
+ values given a sequence of row and column labels (GH338_)
- Add ``verbose`` option to ``read_csv`` and ``read_table`` to show number of
- NA values inserted in non-numeric columns (GH #614)
+ NA values inserted in non-numeric columns (GH614_)
- Can pass a list of dicts or Series to ``DataFrame.append`` to concatenate
- multiple rows (GH #464)
+ multiple rows (GH464_)
- Add ``level`` argument to ``DataFrame.xs`` for selecting data from other
MultiIndex levels. Can take one or more levels with potentially a tuple of
- keys for flexible retrieval of data (GH #371, GH #629)
- - New ``crosstab`` function for easily computing frequency tables (GH #170)
+ keys for flexible retrieval of data (GH371_, GH629_)
+ - New ``crosstab`` function for easily computing frequency tables (GH170_)
- Can pass a list of functions to aggregate with groupby on a DataFrame,
- yielding an aggregated result with hierarchical columns (GH #166)
+ yielding an aggregated result with hierarchical columns (GH166_)
- Add integer-indexing functions ``iget`` in Series and ``irow`` / ``iget``
- in DataFrame (GH #628)
+ in DataFrame (GH628_)
- Add new ``Series.unique`` function, significantly faster than
- ``numpy.unique`` (GH #658)
+ ``numpy.unique`` (GH658_)
- Add new ``cummin`` and ``cummax`` instance methods to ``Series`` and
- ``DataFrame`` (GH #647)
- - Add new ``value_range`` function to return min/max of a dataframe (GH #288)
+ ``DataFrame`` (GH647_)
+ - Add new ``value_range`` function to return min/max of a dataframe (GH288_)
- Add ``drop`` parameter to ``reset_index`` method of ``DataFrame`` and added
- method to ``Series`` as well (GH #699)
+ method to ``Series`` as well (GH699_)
- - Add ``isin`` method to Index objects, works just like ``Series.isin`` (GH
- #657)
- - Implement array interface on Panel so that ufuncs work (re: #740)
- - Add ``sort`` option to ``DataFrame.join`` (GH #731)
+ - Add ``isin`` method to Index objects, works just like ``Series.isin``
+ (GH657_)
+ - Implement array interface on Panel so that ufuncs work (re: GH740_)
+ - Add ``sort`` option to ``DataFrame.join`` (GH731_)
- Improved handling of NAs (propagation) in binary operations with
- dtype=object arrays (GH #737)
+ dtype=object arrays (GH737_)
- Add ``abs`` method to Pandas objects
- Added ``algorithms`` module to start collecting central algos
**API Changes**
- Label-indexing with integer indexes now raises KeyError if a label is not
- found instead of falling back on location-based indexing (GH #700)
+ found instead of falling back on location-based indexing (GH700_)
- Label-based slicing via ``ix`` or ``[]`` on Series will now only work if
exact matches for the labels are found or if the index is monotonic (for
range selections)
@@ -1089,32 +1718,32 @@ pandas 0.7.0
with integer indexes when an index is not contained in the index. The prior
behavior would fall back on position-based indexing if a key was not found
in the index which would lead to subtle bugs. This is now consistent with
- the behavior of ``.ix`` on DataFrame and friends (GH #328)
+ the behavior of ``.ix`` on DataFrame and friends (GH328_)
- Rename ``DataFrame.delevel`` to ``DataFrame.reset_index`` and add
deprecation warning
- `Series.sort` (an in-place operation) called on a Series which is a view on
a larger array (e.g. a column in a DataFrame) will generate an Exception to
- prevent accidentally modifying the data source (GH #316)
- - Refactor to remove deprecated ``LongPanel`` class (PR #552)
+ prevent accidentally modifying the data source (GH316_)
+ - Refactor to remove deprecated ``LongPanel`` class (GH552_)
- Deprecated ``Panel.to_long``, renamed to ``to_frame``
- Deprecated ``colSpace`` argument in ``DataFrame.to_string``, renamed to
``col_space``
- - Rename ``precision`` to ``accuracy`` in engineering float formatter (GH
- #395)
+ - Rename ``precision`` to ``accuracy`` in engineering float formatter
+ (GH395_)
- The default delimiter for ``read_csv`` is comma rather than letting
``csv.Sniffer`` infer it
- - Rename ``col_or_columns`` argument in ``DataFrame.drop_duplicates`` (GH
- #734)
+ - Rename ``col_or_columns`` argument in ``DataFrame.drop_duplicates``
+ (GH734_)
**Improvements to existing features**
- Better error message in DataFrame constructor when passed column labels
- don't match data (GH #497)
+ don't match data (GH497_)
- Substantially improve performance of multi-GroupBy aggregation when a
- Python function is passed, reuse ndarray object in Cython (GH #496)
- - Can store objects indexed by tuples and floats in HDFStore (GH #492)
+ Python function is passed, reuse ndarray object in Cython (GH496_)
+ - Can store objects indexed by tuples and floats in HDFStore (GH492_)
- - Don't print length by default in Series.to_string, add `length` option (GH
- #489)
+ - Don't print length by default in Series.to_string, add `length` option
+ (GH489_)
- Improve Cython code for multi-groupby to aggregate without having to sort
the data (GH #93)
- Improve MultiIndex reindexing speed by storing tuples in the MultiIndex,
@@ -1126,91 +1755,91 @@ pandas 0.7.0
regression from prior versions
- Friendlier error message in setup.py if NumPy not installed
- Use common set of NA-handling operations (sum, mean, etc.) in Panel class
- also (GH #536)
+ also (GH536_)
- Default name assignment when calling ``reset_index`` on DataFrame with a
- regular (non-hierarchical) index (GH #476)
+ regular (non-hierarchical) index (GH476_)
- Use Cythonized groupers when possible in Series/DataFrame stat ops with
- ``level`` parameter passed (GH #545)
+ ``level`` parameter passed (GH545_)
- Ported skiplist data structure to C to speed up ``rolling_median`` by about
- 5-10x in most typical use cases (GH #374)
+ 5-10x in most typical use cases (GH374_)
- Some performance enhancements in constructing a Panel from a dict of
DataFrame objects
- Made ``Index._get_duplicates`` a public method by removing the underscore
- - Prettier printing of floats, and column spacing fix (GH #395, GH #571)
- - Add ``bold_rows`` option to DataFrame.to_html (GH #586)
+ - Prettier printing of floats, and column spacing fix (GH395_, GH571_)
+ - Add ``bold_rows`` option to DataFrame.to_html (GH586_)
- Improve the performance of ``DataFrame.sort_index`` by up to 5x or more
when sorting by multiple columns
- Substantially improve performance of DataFrame and Series constructors when
- passed a nested dict or dict, respectively (GH #540, GH #621)
+ passed a nested dict or dict, respectively (GH540_, GH621_)
- - Modified setup.py so that pip / setuptools will install dependencies (GH
- #507, various pull requests)
+ - Modified setup.py so that pip / setuptools will install dependencies
+ (GH507_, various pull requests)
- - Unstack called on DataFrame with non-MultiIndex will return Series (GH
- #477)
+ - Unstack called on DataFrame with non-MultiIndex will return Series
+ (GH477_)
- Improve DataFrame.to_string and console formatting to be more consistent in
- the number of displayed digits (GH #395)
+ the number of displayed digits (GH395_)
- Use bottleneck if available for performing NaN-friendly statistical
operations that it implemented (GH #91)
- Monkey-patch context to traceback in ``DataFrame.apply`` to indicate which
- row/column the function application failed on (GH #614)
+ row/column the function application failed on (GH614_)
- Improved ability of read_table and read_clipboard to parse
console-formatted DataFrames (can read the row of index names, etc.)
- Can pass list of group labels (without having to convert to an ndarray
- yourself) to ``groupby`` in some cases (GH #659)
+ yourself) to ``groupby`` in some cases (GH659_)
- Use ``kind`` argument to Series.order for selecting different sort kinds
- (GH #668)
- - Add option to Series.to_csv to omit the index (PR #684)
+ (GH668_)
+ - Add option to Series.to_csv to omit the index (GH684_)
- Add ``delimiter`` as an alternative to ``sep`` in ``read_csv`` and other
parsing functions
- Substantially improved performance of groupby on DataFrames with many
- columns by aggregating blocks of columns all at once (GH #745)
- - Can pass a file handle or StringIO to Series/DataFrame.to_csv (GH #765)
+ columns by aggregating blocks of columns all at once (GH745_)
+ - Can pass a file handle or StringIO to Series/DataFrame.to_csv (GH765_)
- - Can pass sequence of integers to DataFrame.irow(icol) and Series.iget, (GH
- #654)
+ - Can pass sequence of integers to DataFrame.irow(icol) and Series.iget
+ (GH654_)
- Prototypes for some vectorized string functions
- - Add float64 hash table to solve the Series.unique problem with NAs (GH #714)
+ - Add float64 hash table to solve the Series.unique problem with NAs (GH714_)
- Memoize objects when reading from file to reduce memory footprint
- Can get and set a column of a DataFrame with hierarchical columns
- containing "empty" ('') lower levels without passing the empty levels (PR
- #768)
+ containing "empty" ('') lower levels without passing the empty levels
+ (GH768_)
**Bug fixes**
- Raise exception in out-of-bounds indexing of Series instead of
- seg-faulting, regression from earlier releases (GH #495)
+ seg-faulting, regression from earlier releases (GH495_)
- Fix error when joining DataFrames of different dtypes within the same
- typeclass (e.g. float32 and float64) (GH #486)
+ typeclass (e.g. float32 and float64) (GH486_)
- - Fix bug in Series.min/Series.max on objects like datetime.datetime (GH
- #487)
- - Preserve index names in Index.union (GH #501)
+ - Fix bug in Series.min/Series.max on objects like datetime.datetime
+ (GH487_)
+ - Preserve index names in Index.union (GH501_)
- Fix bug in Index joining causing subclass information (like DateRange type)
- to be lost in some cases (GH #500)
+ to be lost in some cases (GH500_)
- Accept empty list as input to DataFrame constructor, regression from 0.6.0
- (GH #491)
+ (GH491_)
- Can output DataFrame and Series with ndarray objects in a dtype=object
- array (GH #490)
+ array (GH490_)
- - Return empty string from Series.to_string when called on empty Series (GH
- #488)
+ - Return empty string from Series.to_string when called on empty Series
+ (GH488_)
- Fix exception passing empty list to DataFrame.from_records
- Fix Index.format bug (excluding name field) with datetimes with time info
- Fix scalar value access in Series to always return NumPy scalars,
- regression from prior versions (GH #510)
- - Handle rows skipped at beginning of file in read_* functions (GH #505)
+ regression from prior versions (GH510_)
+ - Handle rows skipped at beginning of file in read_* functions (GH505_)
- Handle improper dtype casting in ``set_value`` methods
- Unary '-' / __neg__ operator on DataFrame was returning integer values
- Unbox 0-dim ndarrays from certain operators like all, any in Series
- Fix handling of missing columns (was combine_first-specific) in
- DataFrame.combine for general case (GH #529)
+ DataFrame.combine for general case (GH529_)
- Fix type inference logic with boolean lists and arrays in DataFrame indexing
- Use centered sum of squares in R-square computation if entity_effects=True
in panel regression
- - Handle all NA case in Series.{corr, cov}, was raising exception (GH #548)
+ - Handle all NA case in Series.{corr, cov}, was raising exception (GH548_)
- Aggregating by multiple levels with ``level`` argument to DataFrame, Series
- stat method, was broken (GH #545)
+ stat method, was broken (GH545_)
- - Fix Cython buf when converter passed to read_csv produced a numeric array
- (buffer dtype mismatch when passed to Cython type inference function) (GH
- #546)
+ - Fix Cython bug when converter passed to read_csv produced a numeric array
+ (buffer dtype mismatch when passed to Cython type inference function)
+ (GH546_)
- Fix exception when setting scalar value using .ix on a DataFrame with a
- MultiIndex (GH #551)
+ MultiIndex (GH551_)
- Fix outer join between two DateRanges with different offsets that returned
an invalid DateRange
- Cleanup DataFrame.from_records failure where index argument is an integer
@@ -1218,19 +1847,19 @@ pandas 0.7.0
- Fix NA handling in {Series, DataFrame}.rank with non-floating point dtypes
- Fix bug related to integer type-checking in .ix-based indexing
- Handle non-string index name passed to DataFrame.from_records
- - DataFrame.insert caused the columns name(s) field to be discarded (GH #527)
+ - DataFrame.insert caused the columns name(s) field to be discarded (GH527_)
- - Fix erroneous in monotonic many-to-one left joins
+ - Fix erroneous results in monotonic many-to-one left joins
- - Fix DataFrame.to_string to remove extra column white space (GH #571)
- - Format floats to default to same number of digits (GH #395)
- - Added decorator to copy docstring from one function to another (GH #449)
+ - Fix DataFrame.to_string to remove extra column white space (GH571_)
+ - Format floats to default to same number of digits (GH395_)
+ - Added decorator to copy docstring from one function to another (GH449_)
- Fix error in monotonic many-to-one left joins
- Fix __eq__ comparison between DateOffsets with different relativedelta
keywords passed
- - Fix exception caused by parser converter returning strings (GH #583)
- - Fix MultiIndex formatting bug with integer names (GH #601)
- - Fix bug in handling of non-numeric aggregates in Series.groupby (GH #612)
+ - Fix exception caused by parser converter returning strings (GH583_)
+ - Fix MultiIndex formatting bug with integer names (GH601_)
+ - Fix bug in handling of non-numeric aggregates in Series.groupby (GH612_)
- Fix TypeError with tuple subclasses (e.g. namedtuple) in
- DataFrame.from_records (GH #611)
+ DataFrame.from_records (GH611_)
- Catch misreported console size when running IPython within Emacs
- Fix minor bug in pivot table margins, loss of index names and length-1
'All' tuple in row labels
@@ -1239,52 +1868,52 @@ pandas 0.7.0
either source or target array are empty
- Could not create a new column in a DataFrame from a list of tuples
- Fix bugs preventing SparseDataFrame and SparseSeries working with groupby
- (GH #666)
- - Use sort kind in Series.sort / argsort (GH #668)
- - Fix DataFrame operations on non-scalar, non-pandas objects (GH #672)
+ (GH666_)
+ - Use sort kind in Series.sort / argsort (GH668_)
+ - Fix DataFrame operations on non-scalar, non-pandas objects (GH672_)
- Don't convert DataFrame column to integer type when passing integer to
- __setitem__ (GH #669)
+ __setitem__ (GH669_)
- Fix downstream bug in pivot_table caused by integer level names in
- MultiIndex (GH #678)
- - Fix SparseSeries.combine_first when passed a dense Series (GH #687)
+ MultiIndex (GH678_)
+ - Fix SparseSeries.combine_first when passed a dense Series (GH687_)
- Fix performance regression in HDFStore loading when DataFrame or Panel
stored in table format with datetimes
- - Raise Exception in DateRange when offset with n=0 is passed (GH #683)
+ - Raise Exception in DateRange when offset with n=0 is passed (GH683_)
- Fix get/set inconsistency with .ix property and integer location but
- non-integer index (GH #707)
+ non-integer index (GH707_)
- Use right dropna function for SparseSeries. Return dense Series for NA fill
- value (GH #730)
+ value (GH730_)
- Fix Index.format bug causing incorrectly string-formatted Series with
datetime indexes (# 726, 758)
- - Fix errors caused by object dtype arrays passed to ols (GH #759)
+ - Fix errors caused by object dtype arrays passed to ols (GH759_)
- Fix error where column names lost when passing list of labels to
- DataFrame.__getitem__, (GH #662)
+ DataFrame.__getitem__, (GH662_)
- Fix error whereby top-level week iterator overwrote week instance
- Fix circular reference causing memory leak in sparse array / series /
- frame, (GH #663)
- - Fix integer-slicing from integers-as-floats (GH #670)
+ frame, (GH663_)
+ - Fix integer-slicing from integers-as-floats (GH670_)
- Fix zero division errors in nanops from object dtype arrays in all NA case
- (GH #676)
- - Fix csv encoding when using unicode (GH #705, #717, #738)
+ (GH676_)
+ - Fix csv encoding when using unicode (GH705_, GH717_, GH738_)
- Fix assumption that each object contains every unique block type in concat,
- (GH #708)
- - Fix sortedness check of multiindex in to_panel (GH #719, 720)
+ (GH708_)
+ - Fix sortedness check of multiindex in to_panel (GH719_, 720)
- Fix that None was not treated as NA in PyObjectHashtable
- - Fix hashing dtype because of endianness confusion (GH #747, #748)
+ - Fix hashing dtype because of endianness confusion (GH747_, GH748_)
- - Fix SparseSeries.dropna to return dense Series in case of NA fill value (GH
- #730)
+ - Fix SparseSeries.dropna to return dense Series in case of NA fill value
+ (GH730_)
- Use map_infer instead of np.vectorize. handle NA sentinels if converter
- yields numeric array, (GH #753)
- - Fixes and improvements to DataFrame.rank (GH #742)
+ yields numeric array, (GH753_)
+ - Fixes and improvements to DataFrame.rank (GH742_)
- Fix catching AttributeError instead of NameError for bottleneck
- - Try to cast non-MultiIndex to better dtype when calling reset_index (GH #726
- #440)
+ - Try to cast non-MultiIndex to better dtype when calling reset_index (GH726_,
+ GH440_)
- Fix #1.QNAN0' float bug on 2.6/win64
- Allow subclasses of dicts in DataFrame constructor, with tests
- - Fix problem whereby set_index destroys column multiindex (GH #764)
- - Hack around bug in generating DateRange from naive DateOffset (GH #770)
+ - Fix problem whereby set_index destroys column multiindex (GH764_)
+ - Hack around bug in generating DateRange from naive DateOffset (GH770_)
- Fix bug in DateRange.intersection causing incorrect results with some
- overlapping ranges (GH #771)
+ overlapping ranges (GH771_)
Thanks
------
@@ -1318,6 +1947,115 @@ Thanks
- Pinxing Ye
- ... and everyone I forgot
+.. _GH220: https://github.com/pydata/pandas/issues/220
+.. _GH249: https://github.com/pydata/pandas/issues/249
+.. _GH267: https://github.com/pydata/pandas/issues/267
+.. _GH468: https://github.com/pydata/pandas/issues/468
+.. _GH479: https://github.com/pydata/pandas/issues/479
+.. _GH273: https://github.com/pydata/pandas/issues/273
+.. _GH498: https://github.com/pydata/pandas/issues/498
+.. _GH526: https://github.com/pydata/pandas/issues/526
+.. _GH534: https://github.com/pydata/pandas/issues/534
+.. _GH521: https://github.com/pydata/pandas/issues/521
+.. _GH542: https://github.com/pydata/pandas/issues/542
+.. _GH552: https://github.com/pydata/pandas/issues/552
+.. _GH554: https://github.com/pydata/pandas/issues/554
+.. _GH570: https://github.com/pydata/pandas/issues/570
+.. _GH571: https://github.com/pydata/pandas/issues/571
+.. _GH115: https://github.com/pydata/pandas/issues/115
+.. _GH595: https://github.com/pydata/pandas/issues/595
+.. _GH563: https://github.com/pydata/pandas/issues/563
+.. _GH338: https://github.com/pydata/pandas/issues/338
+.. _GH614: https://github.com/pydata/pandas/issues/614
+.. _GH464: https://github.com/pydata/pandas/issues/464
+.. _GH371: https://github.com/pydata/pandas/issues/371
+.. _GH629: https://github.com/pydata/pandas/issues/629
+.. _GH170: https://github.com/pydata/pandas/issues/170
+.. _GH166: https://github.com/pydata/pandas/issues/166
+.. _GH628: https://github.com/pydata/pandas/issues/628
+.. _GH658: https://github.com/pydata/pandas/issues/658
+.. _GH647: https://github.com/pydata/pandas/issues/647
+.. _GH288: https://github.com/pydata/pandas/issues/288
+.. _GH699: https://github.com/pydata/pandas/issues/699
+.. _GH657: https://github.com/pydata/pandas/issues/657
+.. _GH740: https://github.com/pydata/pandas/issues/740
+.. _GH731: https://github.com/pydata/pandas/issues/731
+.. _GH737: https://github.com/pydata/pandas/issues/737
+.. _GH700: https://github.com/pydata/pandas/issues/700
+.. _GH328: https://github.com/pydata/pandas/issues/328
+.. _GH316: https://github.com/pydata/pandas/issues/316
+.. _GH395: https://github.com/pydata/pandas/issues/395
+.. _GH734: https://github.com/pydata/pandas/issues/734
+.. _GH497: https://github.com/pydata/pandas/issues/497
+.. _GH496: https://github.com/pydata/pandas/issues/496
+.. _GH492: https://github.com/pydata/pandas/issues/492
+.. _GH489: https://github.com/pydata/pandas/issues/489
+.. _GH536: https://github.com/pydata/pandas/issues/536
+.. _GH476: https://github.com/pydata/pandas/issues/476
+.. _GH545: https://github.com/pydata/pandas/issues/545
+.. _GH374: https://github.com/pydata/pandas/issues/374
+.. _GH586: https://github.com/pydata/pandas/issues/586
+.. _GH540: https://github.com/pydata/pandas/issues/540
+.. _GH621: https://github.com/pydata/pandas/issues/621
+.. _GH507: https://github.com/pydata/pandas/issues/507
+.. _GH477: https://github.com/pydata/pandas/issues/477
+.. _GH659: https://github.com/pydata/pandas/issues/659
+.. _GH668: https://github.com/pydata/pandas/issues/668
+.. _GH684: https://github.com/pydata/pandas/issues/684
+.. _GH745: https://github.com/pydata/pandas/issues/745
+.. _GH765: https://github.com/pydata/pandas/issues/765
+.. _GH654: https://github.com/pydata/pandas/issues/654
+.. _GH714: https://github.com/pydata/pandas/issues/714
+.. _GH768: https://github.com/pydata/pandas/issues/768
+.. _GH495: https://github.com/pydata/pandas/issues/495
+.. _GH486: https://github.com/pydata/pandas/issues/486
+.. _GH487: https://github.com/pydata/pandas/issues/487
+.. _GH501: https://github.com/pydata/pandas/issues/501
+.. _GH500: https://github.com/pydata/pandas/issues/500
+.. _GH491: https://github.com/pydata/pandas/issues/491
+.. _GH490: https://github.com/pydata/pandas/issues/490
+.. _GH488: https://github.com/pydata/pandas/issues/488
+.. _GH510: https://github.com/pydata/pandas/issues/510
+.. _GH505: https://github.com/pydata/pandas/issues/505
+.. _GH529: https://github.com/pydata/pandas/issues/529
+.. _GH548: https://github.com/pydata/pandas/issues/548
+.. _GH546: https://github.com/pydata/pandas/issues/546
+.. _GH551: https://github.com/pydata/pandas/issues/551
+.. _GH527: https://github.com/pydata/pandas/issues/527
+.. _GH449: https://github.com/pydata/pandas/issues/449
+.. _GH583: https://github.com/pydata/pandas/issues/583
+.. _GH601: https://github.com/pydata/pandas/issues/601
+.. _GH612: https://github.com/pydata/pandas/issues/612
+.. _GH611: https://github.com/pydata/pandas/issues/611
+.. _GH666: https://github.com/pydata/pandas/issues/666
+.. _GH672: https://github.com/pydata/pandas/issues/672
+.. _GH669: https://github.com/pydata/pandas/issues/669
+.. _GH678: https://github.com/pydata/pandas/issues/678
+.. _GH687: https://github.com/pydata/pandas/issues/687
+.. _GH683: https://github.com/pydata/pandas/issues/683
+.. _GH707: https://github.com/pydata/pandas/issues/707
+.. _GH730: https://github.com/pydata/pandas/issues/730
+.. _GH759: https://github.com/pydata/pandas/issues/759
+.. _GH662: https://github.com/pydata/pandas/issues/662
+.. _GH663: https://github.com/pydata/pandas/issues/663
+.. _GH670: https://github.com/pydata/pandas/issues/670
+.. _GH676: https://github.com/pydata/pandas/issues/676
+.. _GH705: https://github.com/pydata/pandas/issues/705
+.. _GH717: https://github.com/pydata/pandas/issues/717
+.. _GH738: https://github.com/pydata/pandas/issues/738
+.. _GH708: https://github.com/pydata/pandas/issues/708
+.. _GH719: https://github.com/pydata/pandas/issues/719
+.. _GH747: https://github.com/pydata/pandas/issues/747
+.. _GH748: https://github.com/pydata/pandas/issues/748
+.. _GH753: https://github.com/pydata/pandas/issues/753
+.. _GH742: https://github.com/pydata/pandas/issues/742
+.. _GH726: https://github.com/pydata/pandas/issues/726
+.. _GH440: https://github.com/pydata/pandas/issues/440
+.. _GH764: https://github.com/pydata/pandas/issues/764
+.. _GH770: https://github.com/pydata/pandas/issues/770
+.. _GH771: https://github.com/pydata/pandas/issues/771
+
+
pandas 0.6.1
============
@@ -1328,93 +2066,93 @@ pandas 0.6.1
- Rename `names` argument in DataFrame.from_records to `columns`. Add
deprecation warning
- Boolean get/set operations on Series with boolean Series will reindex
- instead of requiring that the indexes be exactly equal (GH #429)
+ instead of requiring that the indexes be exactly equal (GH429_)
**New features / modules**
- Can pass Series to DataFrame.append with ignore_index=True for appending a
- single row (GH #430)
+ single row (GH430_)
- Add Spearman and Kendall correlation options to Series.corr and
- DataFrame.corr (GH #428)
+ DataFrame.corr (GH428_)
- Add new `get_value` and `set_value` methods to Series, DataFrame, and Panel
to very low-overhead access to scalar elements. df.get_value(row, column)
- is about 3x faster than df[column][row] by handling fewer cases (GH #437,
- #438). Add similar methods to sparse data structures for compatibility
- - Add Qt table widget to sandbox (PR #435)
- - DataFrame.align can accept Series arguments, add axis keyword (GH #461)
+ is about 3x faster than df[column][row] by handling fewer cases (GH437_,
+ GH438_). Add similar methods to sparse data structures for compatibility
+ - Add Qt table widget to sandbox (GH435_)
+ - DataFrame.align can accept Series arguments, add axis keyword (GH461_)
- Implement new SparseList and SparseArray data structures. SparseSeries now
- derives from SparseArray (GH #463)
- - max_columns / max_rows options in set_printoptions (PR #453)
+ derives from SparseArray (GH463_)
+ - max_columns / max_rows options in set_printoptions (GH453_)
- Implement Series.rank and DataFrame.rank, fast versions of
- scipy.stats.rankdata (GH #428)
- - Implement DataFrame.from_items alternate constructor (GH #444)
+ scipy.stats.rankdata (GH428_)
+ - Implement DataFrame.from_items alternate constructor (GH444_)
- DataFrame.convert_objects method for inferring better dtypes for object
- columns (GH #302)
+ columns (GH302_)
- Add rolling_corr_pairwise function for computing Panel of correlation
- matrices (GH #189)
+ matrices (GH189_)
- - Add `margins` option to `pivot_table` for computing subgroup aggregates (GH
- #114)
- - Add `Series.from_csv` function (PR #482)
+ - Add `margins` option to `pivot_table` for computing subgroup aggregates
+ (GH114_)
+ - Add `Series.from_csv` function (GH482_)
**Improvements to existing features**
- Improve memory usage of `DataFrame.describe` (do not copy data
- unnecessarily) (PR #425)
+ unnecessarily) (GH425_)
- Use same formatting function for outputting floating point Series to console
- as in DataFrame (PR #420)
- - DataFrame.delevel will try to infer better dtype for new columns (GH #440)
+ as in DataFrame (GH420_)
+ - DataFrame.delevel will try to infer better dtype for new columns (GH440_)
- Exclude non-numeric types in DataFrame.{corr, cov}
- - Override Index.astype to enable dtype casting (GH #412)
- - Use same float formatting function for Series.__repr__ (PR #420)
- - Use available console width to output DataFrame columns (PR #453)
- - Accept ndarrays when setting items in Panel (GH #452)
+ - Override Index.astype to enable dtype casting (GH412_)
+ - Use same float formatting function for Series.__repr__ (GH420_)
+ - Use available console width to output DataFrame columns (GH453_)
+ - Accept ndarrays when setting items in Panel (GH452_)
- - Infer console width when printing __repr__ of DataFrame to console (PR
- #453)
+ - Infer console width when printing __repr__ of DataFrame to console
+ (GH453_)
- Optimize scalar value lookups in the general case by 25% or more in Series
and DataFrame
- Can pass DataFrame/DataFrame and DataFrame/Series to
- rolling_corr/rolling_cov (GH #462)
+ rolling_corr/rolling_cov (GH462_)
- Fix performance regression in cross-sectional count in DataFrame, affecting
DataFrame.dropna speed
- - Column deletion in DataFrame copies no data (computes views on blocks) (GH
- #158)
+ - Column deletion in DataFrame copies no data (computes views on blocks)
+ (GH158_)
- MultiIndex.get_level_values can take the level name
- More helpful error message when DataFrame.plot fails on one of the columns
- (GH #478)
+ (GH478_)
- Improve performance of DataFrame.{index, columns} attribute lookup
**Bug fixes**
- Fix O(K^2) memory leak caused by inserting many columns without
- consolidating, had been present since 0.4.0 (GH #467)
+ consolidating, had been present since 0.4.0 (GH467_)
- `DataFrame.count` should return Series with zero instead of NA with length-0
- axis (GH #423)
- - Fix Yahoo! Finance API usage in pandas.io.data (GH #419, PR #427)
- - Fix upstream bug causing failure in Series.align with empty Series (GH #434)
+ axis (GH423_)
+ - Fix Yahoo! Finance API usage in pandas.io.data (GH419_, GH427_)
+ - Fix upstream bug causing failure in Series.align with empty Series (GH434_)
- Function passed to DataFrame.apply can return a list, as long as it's the
- right length. Regression from 0.4 (GH #432)
- - Don't "accidentally" upcast scalar values when indexing using .ix (GH #431)
+ right length. Regression from 0.4 (GH432_)
+ - Don't "accidentally" upcast scalar values when indexing using .ix (GH431_)
- Fix groupby exception raised with as_index=False and single column selected
- (GH #421)
- - Implement DateOffset.__ne__ causing downstream bug (GH #456)
+ (GH421_)
+ - Implement DateOffset.__ne__ causing downstream bug (GH456_)
- Fix __doc__-related issue when converting py -> pyo with py2exe
- Bug fix in left join Cython code with duplicate monotonic labels
- - Fix bug when unstacking multiple levels described in #451
- - Exclude NA values in dtype=object arrays, regression from 0.5.0 (GH #469)
+ - Fix bug when unstacking multiple levels described in GH451_
+ - Exclude NA values in dtype=object arrays, regression from 0.5.0 (GH469_)
- Use Cython map_infer function in DataFrame.applymap to properly infer
output type, handle tuple return values and other things that were breaking
- (GH #465)
- - Handle floating point index values in HDFStore (GH #454)
+ (GH465_)
+ - Handle floating point index values in HDFStore (GH454_)
- Fixed stale column reference bug (cached Series object) caused by type
- change / item deletion in DataFrame (GH #473)
+ change / item deletion in DataFrame (GH473_)
- Index.get_loc should always raise Exception when there are duplicates
- - Handle differently-indexed Series input to DataFrame constructor (GH #475)
+ - Handle differently-indexed Series input to DataFrame constructor (GH475_)
- Omit nuisance columns in multi-groupby with Python function
- Buglet in handling of single grouping in general apply
- Handle type inference properly when passing list of lists or tuples to
- DataFrame constructor (GH #484)
+ DataFrame constructor (GH484_)
- - Preserve Index / MultiIndex names in GroupBy.apply concatenation step (GH
- #481)
+ - Preserve Index / MultiIndex names in GroupBy.apply concatenation step
+ (GH481_)
Thanks
------
@@ -1435,6 +2173,47 @@ Thanks
- Chris Uga
- Dieter Vandenbussche
+.. _GH429: https://github.com/pydata/pandas/issues/429
+.. _GH430: https://github.com/pydata/pandas/issues/430
+.. _GH428: https://github.com/pydata/pandas/issues/428
+.. _GH437: https://github.com/pydata/pandas/issues/437
+.. _GH438: https://github.com/pydata/pandas/issues/438
+.. _GH435: https://github.com/pydata/pandas/issues/435
+.. _GH461: https://github.com/pydata/pandas/issues/461
+.. _GH463: https://github.com/pydata/pandas/issues/463
+.. _GH453: https://github.com/pydata/pandas/issues/453
+.. _GH444: https://github.com/pydata/pandas/issues/444
+.. _GH302: https://github.com/pydata/pandas/issues/302
+.. _GH189: https://github.com/pydata/pandas/issues/189
+.. _GH114: https://github.com/pydata/pandas/issues/114
+.. _GH482: https://github.com/pydata/pandas/issues/482
+.. _GH425: https://github.com/pydata/pandas/issues/425
+.. _GH420: https://github.com/pydata/pandas/issues/420
+.. _GH440: https://github.com/pydata/pandas/issues/440
+.. _GH412: https://github.com/pydata/pandas/issues/412
+.. _GH452: https://github.com/pydata/pandas/issues/452
+.. _GH462: https://github.com/pydata/pandas/issues/462
+.. _GH158: https://github.com/pydata/pandas/issues/158
+.. _GH478: https://github.com/pydata/pandas/issues/478
+.. _GH467: https://github.com/pydata/pandas/issues/467
+.. _GH423: https://github.com/pydata/pandas/issues/423
+.. _GH419: https://github.com/pydata/pandas/issues/419
+.. _GH427: https://github.com/pydata/pandas/issues/427
+.. _GH434: https://github.com/pydata/pandas/issues/434
+.. _GH432: https://github.com/pydata/pandas/issues/432
+.. _GH431: https://github.com/pydata/pandas/issues/431
+.. _GH421: https://github.com/pydata/pandas/issues/421
+.. _GH456: https://github.com/pydata/pandas/issues/456
+.. _GH451: https://github.com/pydata/pandas/issues/451
+.. _GH469: https://github.com/pydata/pandas/issues/469
+.. _GH465: https://github.com/pydata/pandas/issues/465
+.. _GH454: https://github.com/pydata/pandas/issues/454
+.. _GH473: https://github.com/pydata/pandas/issues/473
+.. _GH475: https://github.com/pydata/pandas/issues/475
+.. _GH484: https://github.com/pydata/pandas/issues/484
+.. _GH481: https://github.com/pydata/pandas/issues/481
+
+
pandas 0.6.0
============
@@ -1443,86 +2222,86 @@ pandas 0.6.0
**API Changes**
- Arithmetic methods like `sum` will attempt to sum dtype=object values by
- default instead of excluding them (GH #382)
+ default instead of excluding them (GH382_)
**New features / modules**
- Add `melt` function to `pandas.core.reshape`
- Add `level` parameter to group by level in Series and DataFrame
- descriptive statistics (PR #313)
+ descriptive statistics (GH313_)
- - Add `head` and `tail` methods to Series, analogous to to DataFrame (PR
- #296)
+ - Add `head` and `tail` methods to Series, analogous to DataFrame
+ (GH296_)
- Add `Series.isin` function which checks if each value is contained in a
- passed sequence (GH #289)
+ passed sequence (GH289_)
- Add `float_format` option to `Series.to_string`
- - Add `skip_footer` (GH #291) and `converters` (GH #343) options to
+ - Add `skip_footer` (GH291_) and `converters` (GH343_) options to
`read_csv` and `read_table`
- - Add proper, tested weighted least squares to standard and panel OLS (GH
- #303)
+ - Add proper, tested weighted least squares to standard and panel OLS
+ (GH303_)
- Add `drop_duplicates` and `duplicated` functions for removing duplicate
- DataFrame rows and checking for duplicate rows, respectively (GH #319)
- - Implement logical (boolean) operators &, |, ^ on DataFrame (GH #347)
+ DataFrame rows and checking for duplicate rows, respectively (GH319_)
+ - Implement logical (boolean) operators &, |, ^ on DataFrame (GH347_)
- Add `Series.mad`, mean absolute deviation, matching DataFrame
- - Add `QuarterEnd` DateOffset (PR #321)
+ - Add `QuarterEnd` DateOffset (GH321_)
- Add matrix multiplication function `dot` to DataFrame (GH #65)
- Add `orient` option to `Panel.from_dict` to ease creation of mixed-type
- Panels (GH #359, #301)
+ Panels (GH359_, GH301_)
- Add `DataFrame.from_dict` with similar `orient` option
- Can now pass list of tuples or list of lists to `DataFrame.from_records`
- for fast conversion to DataFrame (GH #357)
+ for fast conversion to DataFrame (GH357_)
- - Can pass multiple levels to groupby, e.g. `df.groupby(level=[0, 1])` (GH
- #103)
- - Can sort by multiple columns in `DataFrame.sort_index` (GH #92, PR #362)
+ - Can pass multiple levels to groupby, e.g. `df.groupby(level=[0, 1])`
+ (GH103_)
+ - Can sort by multiple columns in `DataFrame.sort_index` (GH #92, GH362_)
- Add fast `get_value` and `put_value` methods to DataFrame and
- micro-performance tweaks (GH #360)
- - Add `cov` instance methods to Series and DataFrame (GH #194, PR #362)
- - Add bar plot option to `DataFrame.plot` (PR #348)
+ micro-performance tweaks (GH360_)
+ - Add `cov` instance methods to Series and DataFrame (GH194_, GH362_)
+ - Add bar plot option to `DataFrame.plot` (GH348_)
- Add `idxmin` and `idxmax` functions to Series and DataFrame for computing
- index labels achieving maximum and minimum values (PR #286)
+ index labels achieving maximum and minimum values (GH286_)
- Add `read_clipboard` function for parsing DataFrame from OS clipboard,
- should work across platforms (GH #300)
- - Add `nunique` function to Series for counting unique elements (GH #297)
- - DataFrame constructor will use Series name if no columns passed (GH #373)
+ should work across platforms (GH300_)
+ - Add `nunique` function to Series for counting unique elements (GH297_)
+ - DataFrame constructor will use Series name if no columns passed (GH373_)
- Support regular expressions and longer delimiters in read_table/read_csv,
- but does not handle quoted strings yet (GH #364)
- - Add `DataFrame.to_html` for formatting DataFrame to HTML (PR #387)
+ but does not handle quoted strings yet (GH364_)
+ - Add `DataFrame.to_html` for formatting DataFrame to HTML (GH387_)
- MaskedArray can be passed to DataFrame constructor and masked values will be
- converted to NaN (PR #396)
- - Add `DataFrame.boxplot` function (GH #368, others)
- - Can pass extra args, kwds to DataFrame.apply (GH #376)
+ converted to NaN (GH396_)
+ - Add `DataFrame.boxplot` function (GH368_, others)
+ - Can pass extra args, kwds to DataFrame.apply (GH376_)
**Improvements to existing features**
- - Raise more helpful exception if date parsing fails in DateRange (GH #298)
- - Vastly improved performance of GroupBy on axes with a MultiIndex (GH #299)
- - Print level names in hierarchical index in Series repr (GH #305)
+ - Raise more helpful exception if date parsing fails in DateRange (GH298_)
+ - Vastly improved performance of GroupBy on axes with a MultiIndex (GH299_)
+ - Print level names in hierarchical index in Series repr (GH305_)
- Return DataFrame when performing GroupBy on selected column and
- as_index=False (GH #308)
- - Can pass vector to `on` argument in `DataFrame.join` (GH #312)
+ as_index=False (GH308_)
+ - Can pass vector to `on` argument in `DataFrame.join` (GH312_)
- Don't show Series name if it's None in the repr, also omit length for short
- Series (GH #317)
+ Series (GH317_)
- - Show legend by default in `DataFrame.plot`, add `legend` boolean flag (GH
- #324)
+ - Show legend by default in `DataFrame.plot`, add `legend` boolean flag
+ (GH324_)
- Significantly improved performance of `Series.order`, which also makes
- np.unique called on a Series faster (GH #327)
- - Faster cythonized count by level in Series and DataFrame (GH #341)
- - Raise exception if dateutil 2.0 installed on Python 2.x runtime (GH #346)
+ np.unique called on a Series faster (GH327_)
+ - Faster cythonized count by level in Series and DataFrame (GH341_)
+ - Raise exception if dateutil 2.0 installed on Python 2.x runtime (GH346_)
- Significant GroupBy performance enhancement with multiple keys with many
"empty" combinations
- New Cython vectorized function `map_infer` speeds up `Series.apply` and
`Series.map` significantly when passed elementwise Python function,
- motivated by PR #355
+ motivated by GH355_
- Cythonized `cache_readonly`, resulting in substantial micro-performance
- enhancements throughout the codebase (GH #361)
+ enhancements throughout the codebase (GH361_)
- Special Cython matrix iterator for applying arbitrary reduction operations
- with 3-5x better performance than `np.apply_along_axis` (GH #309)
+ with 3-5x better performance than `np.apply_along_axis` (GH309_)
- Add `raw` option to `DataFrame.apply` for getting better performance when
- the passed function only requires an ndarray (GH #309)
+ the passed function only requires an ndarray (GH309_)
- Improve performance of `MultiIndex.from_tuples`
- - Can pass multiple levels to `stack` and `unstack` (GH #370)
- - Can pass multiple values columns to `pivot_table` (GH #381)
- - Can call `DataFrame.delevel` with standard Index with name set (GH #393)
- - Use Series name in GroupBy for result index (GH #363)
+ - Can pass multiple levels to `stack` and `unstack` (GH370_)
+ - Can pass multiple values columns to `pivot_table` (GH381_)
+ - Can call `DataFrame.delevel` with standard Index with name set (GH393_)
+ - Use Series name in GroupBy for result index (GH363_)
- Refactor Series/DataFrame stat methods to use common set of NaN-friendly
function
- Handle NumPy scalar integers at C level in Cython conversion routines
@@ -1530,54 +2309,54 @@ pandas 0.6.0
**Bug fixes**
- Fix bug in `DataFrame.to_csv` when writing a DataFrame with an index
- name (GH #290)
+ name (GH290_)
- DataFrame should clear its Series caches on consolidation, was causing
- "stale" Series to be returned in some corner cases (GH #304)
- - DataFrame constructor failed if a column had a list of tuples (GH #293)
+ "stale" Series to be returned in some corner cases (GH304_)
+ - DataFrame constructor failed if a column had a list of tuples (GH293_)
- Ensure that `Series.apply` always returns a Series and implement
- `Series.round` (GH #314)
- - Support boolean columns in Cythonized groupby functions (GH #315)
+ `Series.round` (GH314_)
+ - Support boolean columns in Cythonized groupby functions (GH315_)
- `DataFrame.describe` should not fail if there are no numeric columns,
- instead return categorical describe (GH #323)
+ instead return categorical describe (GH323_)
- Fixed bug which could cause columns to be printed in wrong order in
- `DataFrame.to_string` if specific list of columns passed (GH #325)
- - Fix legend plotting failure if DataFrame columns are integers (GH #326)
+ `DataFrame.to_string` if specific list of columns passed (GH325_)
+ - Fix legend plotting failure if DataFrame columns are integers (GH326_)
- Shift start date back by one month for Yahoo! Finance API in pandas.io.data
- (GH #329)
- - Fix `DataFrame.join` failure on unconsolidated inputs (GH #331)
- - DataFrame.min/max will no longer fail on mixed-type DataFrame (GH #337)
+ (GH329_)
+ - Fix `DataFrame.join` failure on unconsolidated inputs (GH331_)
+ - DataFrame.min/max will no longer fail on mixed-type DataFrame (GH337_)
- Fix `read_csv` / `read_table` failure when passing list to index_col that is
- not in ascending order (GH #349)
+ not in ascending order (GH349_)
- Fix failure passing Int64Index to Index.union when both are monotonic
- Fix error when passing SparseSeries to (dense) DataFrame constructor
- - Added missing bang at top of setup.py (GH #352)
+ - Added missing bang at top of setup.py (GH352_)
- Change `is_monotonic` on MultiIndex so it properly compares the tuples
- - Fix MultiIndex outer join logic (GH #351)
- - Set index name attribute with single-key groupby (GH #358)
+ - Fix MultiIndex outer join logic (GH351_)
+ - Set index name attribute with single-key groupby (GH358_)
- Bug fix in reflexive binary addition in Series and DataFrame for
- non-commutative operations (like string concatenation) (GH #353)
- - setupegg.py will invoke Cython (GH #192)
- - Fix block consolidation bug after inserting column into MultiIndex (GH #366)
- - Fix bug in join operations between Index and Int64Index (GH #367)
- - Handle min_periods=0 case in moving window functions (GH #365)
- - Fixed corner cases in DataFrame.apply/pivot with empty DataFrame (GH #378)
+ non-commutative operations (like string concatenation) (GH353_)
+ - setupegg.py will invoke Cython (GH192_)
+ - Fix block consolidation bug after inserting column into MultiIndex (GH366_)
+ - Fix bug in join operations between Index and Int64Index (GH367_)
+ - Handle min_periods=0 case in moving window functions (GH365_)
+ - Fixed corner cases in DataFrame.apply/pivot with empty DataFrame (GH378_)
- Fixed repr exception when Series name is a tuple
- - Always return DateRange from `asfreq` (GH #390)
- - Pass level names to `swaplavel` (GH #379)
- - Don't lose index names in `MultiIndex.droplevel` (GH #394)
+ - Always return DateRange from `asfreq` (GH390_)
+ - Pass level names to `swaplevel` (GH379_)
+ - Don't lose index names in `MultiIndex.droplevel` (GH394_)
- Infer more proper return type in `DataFrame.apply` when no columns or rows
- depending on whether the passed function is a reduction (GH #389)
+ depending on whether the passed function is a reduction (GH389_)
- Always return NA/NaN from Series.min/max and DataFrame.min/max when all of a
- row/column/values are NA (GH #384)
- - Enable partial setting with .ix / advanced indexing (GH #397)
+ row/column/values are NA (GH384_)
+ - Enable partial setting with .ix / advanced indexing (GH397_)
- Handle mixed-type DataFrames correctly in unstack, do not lose type
- information (GH #403)
+ information (GH403_)
- Fix integer name formatting bug in Index.format and in Series.__repr__
- - Handle label types other than string passed to groupby (GH #405)
+ - Handle label types other than string passed to groupby (GH405_)
- Fix bug in .ix-based indexing with partial retrieval when a label is not
contained in a level
- - Index name was not being pickled (GH #408)
- - Level name should be passed to result index in GroupBy.apply (GH #416)
+ - Index name was not being pickled (GH408_)
+ - Level name should be passed to result index in GroupBy.apply (GH416_)
Thanks
------
@@ -1602,6 +2381,83 @@ Thanks
- carljv
- rsamson
+.. _GH382: https://github.com/pydata/pandas/issues/382
+.. _GH313: https://github.com/pydata/pandas/issues/313
+.. _GH296: https://github.com/pydata/pandas/issues/296
+.. _GH289: https://github.com/pydata/pandas/issues/289
+.. _GH291: https://github.com/pydata/pandas/issues/291
+.. _GH343: https://github.com/pydata/pandas/issues/343
+.. _GH303: https://github.com/pydata/pandas/issues/303
+.. _GH319: https://github.com/pydata/pandas/issues/319
+.. _GH347: https://github.com/pydata/pandas/issues/347
+.. _GH321: https://github.com/pydata/pandas/issues/321
+.. _GH359: https://github.com/pydata/pandas/issues/359
+.. _GH301: https://github.com/pydata/pandas/issues/301
+.. _GH357: https://github.com/pydata/pandas/issues/357
+.. _GH103: https://github.com/pydata/pandas/issues/103
+.. _GH362: https://github.com/pydata/pandas/issues/362
+.. _GH360: https://github.com/pydata/pandas/issues/360
+.. _GH194: https://github.com/pydata/pandas/issues/194
+.. _GH348: https://github.com/pydata/pandas/issues/348
+.. _GH286: https://github.com/pydata/pandas/issues/286
+.. _GH300: https://github.com/pydata/pandas/issues/300
+.. _GH297: https://github.com/pydata/pandas/issues/297
+.. _GH373: https://github.com/pydata/pandas/issues/373
+.. _GH364: https://github.com/pydata/pandas/issues/364
+.. _GH387: https://github.com/pydata/pandas/issues/387
+.. _GH396: https://github.com/pydata/pandas/issues/396
+.. _GH368: https://github.com/pydata/pandas/issues/368
+.. _GH376: https://github.com/pydata/pandas/issues/376
+.. _GH298: https://github.com/pydata/pandas/issues/298
+.. _GH299: https://github.com/pydata/pandas/issues/299
+.. _GH305: https://github.com/pydata/pandas/issues/305
+.. _GH308: https://github.com/pydata/pandas/issues/308
+.. _GH312: https://github.com/pydata/pandas/issues/312
+.. _GH317: https://github.com/pydata/pandas/issues/317
+.. _GH324: https://github.com/pydata/pandas/issues/324
+.. _GH327: https://github.com/pydata/pandas/issues/327
+.. _GH341: https://github.com/pydata/pandas/issues/341
+.. _GH346: https://github.com/pydata/pandas/issues/346
+.. _GH355: https://github.com/pydata/pandas/issues/355
+.. _GH361: https://github.com/pydata/pandas/issues/361
+.. _GH309: https://github.com/pydata/pandas/issues/309
+.. _GH370: https://github.com/pydata/pandas/issues/370
+.. _GH381: https://github.com/pydata/pandas/issues/381
+.. _GH393: https://github.com/pydata/pandas/issues/393
+.. _GH363: https://github.com/pydata/pandas/issues/363
+.. _GH290: https://github.com/pydata/pandas/issues/290
+.. _GH304: https://github.com/pydata/pandas/issues/304
+.. _GH293: https://github.com/pydata/pandas/issues/293
+.. _GH314: https://github.com/pydata/pandas/issues/314
+.. _GH315: https://github.com/pydata/pandas/issues/315
+.. _GH323: https://github.com/pydata/pandas/issues/323
+.. _GH325: https://github.com/pydata/pandas/issues/325
+.. _GH326: https://github.com/pydata/pandas/issues/326
+.. _GH329: https://github.com/pydata/pandas/issues/329
+.. _GH331: https://github.com/pydata/pandas/issues/331
+.. _GH337: https://github.com/pydata/pandas/issues/337
+.. _GH349: https://github.com/pydata/pandas/issues/349
+.. _GH352: https://github.com/pydata/pandas/issues/352
+.. _GH351: https://github.com/pydata/pandas/issues/351
+.. _GH358: https://github.com/pydata/pandas/issues/358
+.. _GH353: https://github.com/pydata/pandas/issues/353
+.. _GH192: https://github.com/pydata/pandas/issues/192
+.. _GH366: https://github.com/pydata/pandas/issues/366
+.. _GH367: https://github.com/pydata/pandas/issues/367
+.. _GH365: https://github.com/pydata/pandas/issues/365
+.. _GH378: https://github.com/pydata/pandas/issues/378
+.. _GH390: https://github.com/pydata/pandas/issues/390
+.. _GH379: https://github.com/pydata/pandas/issues/379
+.. _GH394: https://github.com/pydata/pandas/issues/394
+.. _GH389: https://github.com/pydata/pandas/issues/389
+.. _GH384: https://github.com/pydata/pandas/issues/384
+.. _GH397: https://github.com/pydata/pandas/issues/397
+.. _GH403: https://github.com/pydata/pandas/issues/403
+.. _GH405: https://github.com/pydata/pandas/issues/405
+.. _GH408: https://github.com/pydata/pandas/issues/408
+.. _GH416: https://github.com/pydata/pandas/issues/416
+
+
pandas 0.5.0
============
@@ -1626,18 +2482,18 @@ feedback on the library.
`index_col` is now None. To use one or more of the columns as the resulting
DataFrame's index, these must be explicitly specified now
- Parsing functions like `read_csv` no longer parse dates by default (GH
- #225)
+ GH225_)
- Removed `weights` option in panel regression which was not doing anything
- principled (GH #155)
+ principled (GH155_)
- Changed `buffer` argument name in `Series.to_string` to `buf`
- `Series.to_string` and `DataFrame.to_string` now return strings by default
instead of printing to sys.stdout
- Deprecated `nanRep` argument in various `to_string` and `to_csv` functions
- in favor of `na_rep`. Will be removed in 0.6 (GH #275)
+ in favor of `na_rep`. Will be removed in 0.6 (GH275_)
- Renamed `delimiter` to `sep` in `DataFrame.from_csv` for consistency
- Changed order of `Series.clip` arguments to match those of `numpy.clip` and
added (unimplemented) `out` argument so `numpy.clip` can be called on a
- Series (GH #272)
+ Series (GH272_)
- Series functions renamed (and thus deprecated) in 0.4 series have been
removed:
@@ -1692,21 +2548,21 @@ feedback on the library.
optionally try to parse dates in the index columns
- Add `nrows`, `chunksize`, and `iterator` arguments to `read_csv` and
`read_table`. The last two return a new `TextParser` class capable of
- lazily iterating through chunks of a flat file (GH #242)
- - Added ability to join on multiple columns in `DataFrame.join` (GH #214)
+ lazily iterating through chunks of a flat file (GH242_)
+ - Added ability to join on multiple columns in `DataFrame.join` (GH214_)
- Added private `_get_duplicates` function to `Index` for identifying
duplicate values more easily
- Added column attribute access to DataFrame, e.g. df.A equivalent to df['A']
- if 'A' is a column in the DataFrame (PR #213)
- - Added IPython tab completion hook for DataFrame columns. (PR #233, GH #230)
- - Implement `Series.describe` for Series containing objects (PR #241)
- - Add inner join option to `DataFrame.join` when joining on key(s) (GH #248)
+ if 'A' is a column in the DataFrame (GH213_)
+ - Added IPython tab completion hook for DataFrame columns. (GH233_, GH230_)
+ - Implement `Series.describe` for Series containing objects (GH241_)
+ - Add inner join option to `DataFrame.join` when joining on key(s) (GH248_)
- Can select set of DataFrame columns by passing a list to `__getitem__` (GH
- #253)
+ GH253_)
- Can use & and | to intersection / union Index objects, respectively (GH
- #261)
- - Added `pivot_table` convenience function to pandas namespace (GH #234)
- - Implemented `Panel.rename_axis` function (GH #243)
+ GH261_)
+ - Added `pivot_table` convenience function to pandas namespace (GH234_)
+ - Implemented `Panel.rename_axis` function (GH243_)
- DataFrame will show index level names in console output
- Implemented `Panel.take`
- Add `set_eng_float_format` function for setting alternate DataFrame
@@ -1725,20 +2581,20 @@ feedback on the library.
rather than deferring the check until later
- Refactored merging / joining code into a tidy class and disabled unnecessary
computations in the float/object case, thus getting about 10% better
- performance (GH #211)
+ performance (GH211_)
- Improved speed of `DataFrame.xs` on mixed-type DataFrame objects by about
- 5x, regression from 0.3.0 (GH #215)
+ 5x, regression from 0.3.0 (GH215_)
- With new `DataFrame.align` method, speeding up binary operations between
differently-indexed DataFrame objects by 10-25%.
- - Significantly sped up conversion of nested dict into DataFrame (GH #212)
+ - Significantly sped up conversion of nested dict into DataFrame (GH212_)
- Can pass hierarchical index level name to `groupby` instead of the level
- number if desired (GH #223)
- - Add support for different delimiters in `DataFrame.to_csv` (PR #244)
+ number if desired (GH223_)
+ - Add support for different delimiters in `DataFrame.to_csv` (GH244_)
- Add more helpful error message when importing pandas post-installation from
- the source directory (GH #250)
+ the source directory (GH250_)
- Significantly speed up DataFrame `__repr__` and `count` on large mixed-type
DataFrame objects
- - Better handling of pyx file dependencies in Cython module build (GH #271)
+ - Better handling of pyx file dependencies in Cython module build (GH271_)
**Bug fixes**
@@ -1747,32 +2603,32 @@ feedback on the library.
representations of integers like 1.0, 2.0, etc.
- "True"/"False" will not get correctly converted to boolean
- Index name attribute will get set when specifying an index column
- - Passing column names should force `header=None` (GH #257)
+ - Passing column names should force `header=None` (GH257_)
- Don't modify passed column names when `index_col` is not
- None (GH #258)
+ None (GH258_)
- Can sniff CSV separator in zip file (since seek is not supported, was
failing before)
- Worked around matplotlib "bug" in which series[:, np.newaxis] fails. Should
- be reported upstream to matplotlib (GH #224)
+ be reported upstream to matplotlib (GH224_)
- DataFrame.iteritems was not returning Series with the name attribute
set. Also neither was DataFrame._series
- - Can store datetime.date objects in HDFStore (GH #231)
+ - Can store datetime.date objects in HDFStore (GH231_)
- Index and Series names are now stored in HDFStore
- Fixed problem in which data would get upcasted to object dtype in
- GroupBy.apply operations (GH #237)
- - Fixed outer join bug with empty DataFrame (GH #238)
- - Can create empty Panel (GH #239)
- - Fix join on single key when passing list with 1 entry (GH #246)
- - Don't raise Exception on plotting DataFrame with an all-NA column (GH #251,
- PR #254)
- - Bug min/max errors when called on integer DataFrames (PR #241)
+ GroupBy.apply operations (GH237_)
+ - Fixed outer join bug with empty DataFrame (GH238_)
+ - Can create empty Panel (GH239_)
+ - Fix join on single key when passing list with 1 entry (GH246_)
+ - Don't raise Exception on plotting DataFrame with an all-NA column (GH251_,
+ GH254_)
+ - Bug min/max errors when called on integer DataFrames (GH241_)
- `DataFrame.iteritems` and `DataFrame._series` not assigning name attribute
- Panel.__repr__ raised exception on length-0 major/minor axes
- `DataFrame.join` on key with empty DataFrame produced incorrect columns
- - Implemented `MultiIndex.diff` (GH #260)
+ - Implemented `MultiIndex.diff` (GH260_)
- `Int64Index.take` and `MultiIndex.take` lost name field, fix downstream
- issue GH #262
- - Can pass list of tuples to `Series` (GH #270)
+ issue GH262_
+ - Can pass list of tuples to `Series` (GH270_)
- Can pass level name to `DataFrame.stack`
- Support set operations between MultiIndex and Index
- Fix many corner cases in MultiIndex set operations
@@ -1782,7 +2638,7 @@ feedback on the library.
- Setting DataFrame index did not cause Series cache to get cleared
- Various int32 -> int64 platform-specific issues
- Don't be too aggressive converting to integer when parsing file with
- MultiIndex (GH #285)
+ MultiIndex (GH285_)
- Fix bug when slicing Series with negative indices before beginning
Thanks
@@ -1794,6 +2650,44 @@ Thanks
- Luca Beltrame
- Wouter Overmeire
+.. _GH225: https://github.com/pydata/pandas/issues/225
+.. _GH155: https://github.com/pydata/pandas/issues/155
+.. _GH275: https://github.com/pydata/pandas/issues/275
+.. _GH272: https://github.com/pydata/pandas/issues/272
+.. _GH242: https://github.com/pydata/pandas/issues/242
+.. _GH214: https://github.com/pydata/pandas/issues/214
+.. _GH213: https://github.com/pydata/pandas/issues/213
+.. _GH233: https://github.com/pydata/pandas/issues/233
+.. _GH230: https://github.com/pydata/pandas/issues/230
+.. _GH241: https://github.com/pydata/pandas/issues/241
+.. _GH248: https://github.com/pydata/pandas/issues/248
+.. _GH253: https://github.com/pydata/pandas/issues/253
+.. _GH261: https://github.com/pydata/pandas/issues/261
+.. _GH234: https://github.com/pydata/pandas/issues/234
+.. _GH243: https://github.com/pydata/pandas/issues/243
+.. _GH211: https://github.com/pydata/pandas/issues/211
+.. _GH215: https://github.com/pydata/pandas/issues/215
+.. _GH212: https://github.com/pydata/pandas/issues/212
+.. _GH223: https://github.com/pydata/pandas/issues/223
+.. _GH244: https://github.com/pydata/pandas/issues/244
+.. _GH250: https://github.com/pydata/pandas/issues/250
+.. _GH271: https://github.com/pydata/pandas/issues/271
+.. _GH257: https://github.com/pydata/pandas/issues/257
+.. _GH258: https://github.com/pydata/pandas/issues/258
+.. _GH224: https://github.com/pydata/pandas/issues/224
+.. _GH231: https://github.com/pydata/pandas/issues/231
+.. _GH237: https://github.com/pydata/pandas/issues/237
+.. _GH238: https://github.com/pydata/pandas/issues/238
+.. _GH239: https://github.com/pydata/pandas/issues/239
+.. _GH246: https://github.com/pydata/pandas/issues/246
+.. _GH251: https://github.com/pydata/pandas/issues/251
+.. _GH254: https://github.com/pydata/pandas/issues/254
+.. _GH260: https://github.com/pydata/pandas/issues/260
+.. _GH262: https://github.com/pydata/pandas/issues/262
+.. _GH270: https://github.com/pydata/pandas/issues/270
+.. _GH285: https://github.com/pydata/pandas/issues/285
+
+
pandas 0.4.3
============
@@ -1808,13 +2702,13 @@ and enhanced features. Also, pandas can now be installed and used on Python 3
**New features / modules**
- - Python 3 support using 2to3 (PR #200, Thomas Kluyver)
+ - Python 3 support using 2to3 (GH200_, Thomas Kluyver)
- Add `name` attribute to `Series` and added relevant logic and tests. Name
now prints as part of `Series.__repr__`
- Add `name` attribute to standard Index so that stacking / unstacking does
not discard names and so that indexed DataFrame objects can be reliably
round-tripped to flat files, pickle, HDF5, etc.
- - Add `isnull` and `notnull` as instance methods on Series (PR #209, GH #203)
+ - Add `isnull` and `notnull` as instance methods on Series (GH209_, GH203_)
**Improvements to existing features**
@@ -1823,7 +2717,7 @@ and enhanced features. Also, pandas can now be installed and used on Python 3
concatenate together
- Altered binary operations on differently-indexed SparseSeries objects to use
the integer-based (dense) alignment logic which is faster with a larger
- number of blocks (GH #205)
+ number of blocks (GH205_)
- Refactored `Series.__repr__` to be a bit more clean and consistent
**API Changes**
@@ -1838,18 +2732,18 @@ and enhanced features. Also, pandas can now be installed and used on Python 3
- Fix broken interaction between `Index` and `Int64Index` when calling
intersection. Implement `Int64Index.intersection`
- - `MultiIndex.sortlevel` discarded the level names (GH #202)
+ - `MultiIndex.sortlevel` discarded the level names (GH202_)
- Fix bugs in groupby, join, and append due to improper concatenation of
- `MultiIndex` objects (GH #201)
+ `MultiIndex` objects (GH201_)
- Fix regression from 0.4.1, `isnull` and `notnull` ceased to work on other
kinds of Python scalar objects like `datetime.datetime`
- Raise more helpful exception when attempting to write empty DataFrame or
- LongPanel to `HDFStore` (GH #204)
+ LongPanel to `HDFStore` (GH204_)
- Use stdlib csv module to properly escape strings with commas in
- `DataFrame.to_csv` (PR #206, Thomas Kluyver)
+ `DataFrame.to_csv` (GH206_, Thomas Kluyver)
- Fix Python ndarray access in Cython code for sparse blocked index integrity
check
- - Fix bug writing Series to CSV in Python 3 (PR #209)
+ - Fix bug writing Series to CSV in Python 3 (GH209_)
- Miscellaneous Python 3 bugfixes
Thanks
@@ -1858,6 +2752,16 @@ Thanks
- Thomas Kluyver
- rsamson
+.. _GH200: https://github.com/pydata/pandas/issues/200
+.. _GH209: https://github.com/pydata/pandas/issues/209
+.. _GH203: https://github.com/pydata/pandas/issues/203
+.. _GH205: https://github.com/pydata/pandas/issues/205
+.. _GH202: https://github.com/pydata/pandas/issues/202
+.. _GH201: https://github.com/pydata/pandas/issues/201
+.. _GH204: https://github.com/pydata/pandas/issues/204
+.. _GH206: https://github.com/pydata/pandas/issues/206
+
+
pandas 0.4.2
============
@@ -1891,20 +2795,20 @@ infrastructure are the main new additions
**Improvements to existing features**
- Improved performance of `isnull` and `notnull`, a regression from v0.3.0
- (GH #187)
+ (GH187_)
- Wrote templating / code generation script to auto-generate Cython code for
various functions which need to be available for the 4 major data types
used in pandas (float64, bool, object, int64)
- Refactored code related to `DataFrame.join` so that intermediate aligned
copies of the data in each `DataFrame` argument do not need to be
- created. Substantial performance increases result (GH #176)
+ created. Substantial performance increases result (GH176_)
- Substantially improved performance of generic `Index.intersection` and
`Index.union`
- Improved performance of `DateRange.union` with overlapping ranges and
non-cacheable offsets (like Minute). Implemented analogous fast
`DateRange.intersection` for overlapping ranges.
- Implemented `BlockManager.take` resulting in significantly faster `take`
- performance on mixed-type `DataFrame` objects (GH #104)
+ performance on mixed-type `DataFrame` objects (GH104_)
- Improved performance of `Series.sort_index`
- Significant groupby performance enhancement: removed unnecessary integrity
checks in DataFrame internals that were slowing down slicing operations to
@@ -1922,11 +2826,11 @@ None
aggregation operations
- Fixed bug in unstacking code manifesting with more than 3 hierarchical
levels
- - Throw exception when step specified in label-based slice (GH #185)
+ - Throw exception when step specified in label-based slice (GH185_)
- Fix isnull to correctly work with np.float32. Fix upstream bug described in
- GH #182
+ GH182_
- Finish implementation of as_index=False in groupby for DataFrame
- aggregation (GH #181)
+ aggregation (GH181_)
- Raise SkipTest for pre-epoch HDFStore failure. Real fix will be sorted out
via datetime64 dtype
@@ -1936,6 +2840,14 @@ Thanks
- Uri Laserson
- Scott Sinclair
+.. _GH187: https://github.com/pydata/pandas/issues/187
+.. _GH176: https://github.com/pydata/pandas/issues/176
+.. _GH104: https://github.com/pydata/pandas/issues/104
+.. _GH185: https://github.com/pydata/pandas/issues/185
+.. _GH182: https://github.com/pydata/pandas/issues/182
+.. _GH181: https://github.com/pydata/pandas/issues/181
+
+
pandas 0.4.1
============
@@ -1951,10 +2863,10 @@ improvements
- Added new `DataFrame` methods `get_dtype_counts` and property `dtypes`
- Setting of values using ``.ix`` indexing attribute in mixed-type DataFrame
- objects has been implemented (fixes GH #135)
+ objects has been implemented (fixes GH135_)
- `read_csv` can read multiple columns into a `MultiIndex`. DataFrame's
`to_csv` method will properly write out a `MultiIndex` which can be read
- back (PR #151, thanks to Skipper Seabold)
+ back (GH151_, thanks to Skipper Seabold)
- Wrote fast time series merging / joining methods in Cython. Will be
integrated later into DataFrame.join and related functions
- Added `ignore_index` option to `DataFrame.append` for combining unindexed
@@ -1965,10 +2877,10 @@ improvements
- Some speed enhancements with internal Index type-checking function
- `DataFrame.rename` has a new `copy` parameter which can rename a DataFrame
in place
- - Enable unstacking by level name (PR #142)
- - Enable sortlevel to work by level name (PR #141)
+ - Enable unstacking by level name (GH142_)
+ - Enable sortlevel to work by level name (GH141_)
- `read_csv` can automatically "sniff" other kinds of delimiters using
- `csv.Sniffer` (PR #146)
+ `csv.Sniffer` (GH146_)
- Improved speed of unit test suite by about 40%
- Exception will not be raised calling `HDFStore.remove` on non-existent node
with where clause
@@ -1984,20 +2896,20 @@ None
- Fixed DataFrame constructor bug causing downstream problems (e.g. .copy()
failing) when passing a Series as the values along with a column name and
index
- - Fixed single-key groupby on DataFrame with as_index=False (GH #160)
- - `Series.shift` was failing on integer Series (GH #154)
+ - Fixed single-key groupby on DataFrame with as_index=False (GH160_)
+ - `Series.shift` was failing on integer Series (GH154_)
- `unstack` methods were producing incorrect output in the case of duplicate
- hierarchical labels. An exception will now be raised (GH #147)
+ hierarchical labels. An exception will now be raised (GH147_)
- Calling `count` with level argument caused reduceat failure or segfault in
- earlier NumPy (GH #169)
+ earlier NumPy (GH169_)
- Fixed `DataFrame.corrwith` to automatically exclude non-numeric data (GH
- #144)
- - Unicode handling bug fixes in `DataFrame.to_string` (GH #138)
+ GH144_)
+ - Unicode handling bug fixes in `DataFrame.to_string` (GH138_)
- Excluding OLS degenerate unit test case that was causing platform specific
- failure (GH #149)
- - Skip blosc-dependent unit tests for PyTables < 2.2 (PR #137)
+ failure (GH149_)
+ - Skip blosc-dependent unit tests for PyTables < 2.2 (GH137_)
- Calling `copy` on `DateRange` did not copy over attributes to the new object
- (GH #168)
+ (GH168_)
- Fix bug in `HDFStore` in which Panel data could be appended to a Table with
different item order, thus resulting in an incorrect result read back
@@ -2009,6 +2921,22 @@ Thanks
- Dan Lovell
- Nick Pentreath
+.. _GH135: https://github.com/pydata/pandas/issues/135
+.. _GH151: https://github.com/pydata/pandas/issues/151
+.. _GH142: https://github.com/pydata/pandas/issues/142
+.. _GH141: https://github.com/pydata/pandas/issues/141
+.. _GH146: https://github.com/pydata/pandas/issues/146
+.. _GH160: https://github.com/pydata/pandas/issues/160
+.. _GH154: https://github.com/pydata/pandas/issues/154
+.. _GH147: https://github.com/pydata/pandas/issues/147
+.. _GH169: https://github.com/pydata/pandas/issues/169
+.. _GH144: https://github.com/pydata/pandas/issues/144
+.. _GH138: https://github.com/pydata/pandas/issues/138
+.. _GH149: https://github.com/pydata/pandas/issues/149
+.. _GH137: https://github.com/pydata/pandas/issues/137
+.. _GH168: https://github.com/pydata/pandas/issues/168
+
+
pandas 0.4.0
============
diff --git a/doc/source/missing_data.rst b/doc/source/missing_data.rst
index f514139a9170f..b8f3468f82098 100644
--- a/doc/source/missing_data.rst
+++ b/doc/source/missing_data.rst
@@ -61,7 +61,7 @@ arise and we wish to also consider that "missing" or "null".
Until recently, for legacy reasons ``inf`` and ``-inf`` were also
considered to be "null" in computations. This is no longer the case by
-default; use the :func: `~pandas.core.common.use_inf_as_null` function to recover it.
+default; use the ``mode.use_inf_as_null`` option to recover it.
.. _missing.isnull:
diff --git a/pandas/core/common.py b/pandas/core/common.py
index d77d413d39571..386343dc38704 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -94,8 +94,8 @@ def isnull_old(obj):
else:
return obj is None
-def use_inf_as_null(flag):
- '''
+def _use_inf_as_null(key):
+ '''Option change callback for null/inf behaviour
Choose which replacement for numpy.isnan / -numpy.isfinite is used.
Parameters
@@ -113,6 +113,7 @@ def use_inf_as_null(flag):
* http://stackoverflow.com/questions/4859217/
programmatically-creating-variables-in-python/4859312#4859312
'''
+ flag = get_option(key)
if flag == True:
globals()['isnull'] = isnull_old
else:
@@ -1179,7 +1180,7 @@ def in_interactive_session():
returns True if running under python/ipython interactive shell
"""
import __main__ as main
- return not hasattr(main, '__file__') or get_option('test.interactive')
+ return not hasattr(main, '__file__') or get_option('mode.sim_interactive')
def in_qtconsole():
"""
diff --git a/pandas/core/config.py b/pandas/core/config.py
index 9bd62c38fa8e5..5a2e6ef093f30 100644
--- a/pandas/core/config.py
+++ b/pandas/core/config.py
@@ -25,7 +25,9 @@
- all options in a certain sub - namespace can be reset at once.
- the user can set / get / reset or ask for the description of an option.
- a developer can register and mark an option as deprecated.
-
+- you can register a callback to be invoked when the option value
+ is set or reset. Changing the stored value is considered misuse, but
+ is not verboten.
Implementation
==============
@@ -54,7 +56,7 @@
import warnings
DeprecatedOption = namedtuple('DeprecatedOption', 'key msg rkey removal_ver')
-RegisteredOption = namedtuple('RegisteredOption', 'key defval doc validator')
+RegisteredOption = namedtuple('RegisteredOption', 'key defval doc validator cb')
_deprecated_options = {} # holds deprecated option metdata
_registered_options = {} # holds registered option metdata
@@ -105,6 +107,9 @@ def _set_option(pat, value):
root, k = _get_root(key)
root[k] = value
+ if o and o.cb:
+ o.cb(key)
+
def _describe_option(pat='', _print_desc=True):
@@ -270,7 +275,7 @@ def __doc__(self):
######################################################
# Functions for use by pandas developers, in addition to User - api
-def register_option(key, defval, doc='', validator=None):
+def register_option(key, defval, doc='', validator=None, cb=None):
"""Register an option in the package-wide pandas config object
Parameters
@@ -280,6 +285,9 @@ def register_option(key, defval, doc='', validator=None):
doc - a string description of the option
validator - a function of a single argument, should raise `ValueError` if
called with a value which is not a legal value for the option.
+ cb - a function of a single argument "key", which is called
+ immediately after an option value is set/reset. key is
+ the full name of the option.
Returns
-------
@@ -321,7 +329,7 @@ def register_option(key, defval, doc='', validator=None):
# save the option metadata
_registered_options[key] = RegisteredOption(key=key, defval=defval,
- doc=doc, validator=validator)
+ doc=doc, validator=validator,cb=cb)
def deprecate_option(key, msg=None, rkey=None, removal_ver=None):
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index a854c012226cf..b2443e5b0b14e 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -139,10 +139,27 @@
cf.register_option('expand_frame_repr', True, pc_expand_repr_doc)
cf.register_option('line_width', 80, pc_line_width_doc)
-tc_interactive_doc="""
+tc_sim_interactive_doc="""
: boolean
Default False
Whether to simulate interactive mode for purposes of testing
"""
-with cf.config_prefix('test'):
- cf.register_option('interactive', False, tc_interactive_doc)
+with cf.config_prefix('mode'):
+ cf.register_option('sim_interactive', False, tc_sim_interactive_doc)
+
+use_inf_as_null_doc="""
+: boolean
+ True means treat None, NaN, INF, -INF as null (old way),
+ False means None and NaN are null, but INF, -INF are not null
+ (new way).
+"""
+
+# we don't want to start importing everything at the global context level
+# or we'll hit circular deps.
+def use_inf_as_null_cb(key):
+ from pandas.core.common import _use_inf_as_null
+ _use_inf_as_null(key)
+
+with cf.config_prefix('mode'):
+ cf.register_option('use_inf_as_null', False, use_inf_as_null_doc,
+ cb=use_inf_as_null_cb)
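The config hunks above add a `cb` slot to `register_option`: `_set_option` stores the new value first and only then invokes `cb(key)`, so the callback can read the fresh value via `get_option`. A stripped-down, standalone sketch of that flow (all names here are illustrative, not the real pandas internals):

```python
# Minimal sketch of the callback mechanism added to pandas.core.config.
_options = {}
_callbacks = {}

def register_option(key, defval, cb=None):
    _options[key] = defval
    if cb is not None:
        _callbacks[key] = cb

def get_option(key):
    return _options[key]

def set_option(key, value):
    _options[key] = value          # store the value first...
    cb = _callbacks.get(key)
    if cb is not None:
        cb(key)                    # ...then notify, passing the full option key

seen = []
register_option("mode.use_inf_as_null", False,
                cb=lambda key: seen.append(get_option(key)))
set_option("mode.use_inf_as_null", True)
print(seen)  # -> [True]
```

Passing the key rather than the value is what lets `use_inf_as_null_cb` stay a thin shim: it defers the import of `pandas.core.common` until the first time the option is actually set, avoiding the circular-import problem noted in the comment.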
diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py
index e8be2a5cc9c4c..1646d71164834 100644
--- a/pandas/tests/test_common.py
+++ b/pandas/tests/test_common.py
@@ -5,9 +5,10 @@
import unittest
from pandas import Series, DataFrame, date_range, DatetimeIndex
-from pandas.core.common import notnull, isnull, use_inf_as_null
+from pandas.core.common import notnull, isnull
import pandas.core.common as com
import pandas.util.testing as tm
+import pandas.core.config as cf
import numpy as np
@@ -29,15 +30,15 @@ def test_notnull():
assert not notnull(None)
assert not notnull(np.NaN)
- use_inf_as_null(False)
+ cf.set_option("mode.use_inf_as_null",False)
assert notnull(np.inf)
assert notnull(-np.inf)
- use_inf_as_null(True)
+ cf.set_option("mode.use_inf_as_null",True)
assert not notnull(np.inf)
assert not notnull(-np.inf)
- use_inf_as_null(False)
+ cf.set_option("mode.use_inf_as_null",False)
float_series = Series(np.random.randn(5))
obj_series = Series(np.random.randn(5), dtype=object)
diff --git a/pandas/tests/test_config.py b/pandas/tests/test_config.py
index 814281dfc25e9..828bc85df4289 100644
--- a/pandas/tests/test_config.py
+++ b/pandas/tests/test_config.py
@@ -282,6 +282,31 @@ def test_config_prefix(self):
self.assertEqual(self.cf.get_option('a'), 1)
self.assertEqual(self.cf.get_option('b'), 2)
+ def test_callback(self):
+ k=[None]
+ v=[None]
+ def callback(key):
+ k.append(key)
+ v.append(self.cf.get_option(key))
+
+ self.cf.register_option('d.a', 'foo',cb=callback)
+ self.cf.register_option('d.b', 'foo',cb=callback)
+
+ del k[-1],v[-1]
+ self.cf.set_option("d.a","fooz")
+ self.assertEqual(k[-1],"d.a")
+ self.assertEqual(v[-1],"fooz")
+
+ del k[-1],v[-1]
+ self.cf.set_option("d.b","boo")
+ self.assertEqual(k[-1],"d.b")
+ self.assertEqual(v[-1],"boo")
+
+ del k[-1],v[-1]
+ self.cf.reset_option("d.b")
+ self.assertEqual(k[-1],"d.b")
+
+
# fmt.reset_printoptions and fmt.set_printoptions were altered
# to use core.config, test_format exercises those paths.
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 5082aa6c182f9..819cd974aaf04 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -112,11 +112,11 @@ def test_repr_should_return_str (self):
self.assertTrue(type(df.__repr__() == str)) # both py2 / 3
def test_repr_no_backslash(self):
- pd.set_option('test.interactive', True)
+ pd.set_option('mode.sim_interactive', True)
df = DataFrame(np.random.randn(10, 4))
self.assertTrue('\\' not in repr(df))
- pd.reset_option('test.interactive')
+ pd.reset_option('mode.sim_interactive')
def test_to_string_repr_unicode(self):
buf = StringIO()
@@ -409,7 +409,7 @@ def test_frame_info_encoding(self):
fmt.set_printoptions(max_rows=200)
def test_wide_repr(self):
- set_option('test.interactive', True)
+ set_option('mode.sim_interactive', True)
col = lambda l, k: [tm.rands(k) for _ in xrange(l)]
df = DataFrame([col(20, 25) for _ in range(10)])
set_option('print.expand_frame_repr', False)
@@ -423,11 +423,11 @@ def test_wide_repr(self):
self.assert_(len(wider_repr) < len(wide_repr))
reset_option('print.expand_frame_repr')
- set_option('test.interactive', False)
+ set_option('mode.sim_interactive', False)
set_option('print.line_width', 80)
def test_wide_repr_named(self):
- set_option('test.interactive', True)
+ set_option('mode.sim_interactive', True)
col = lambda l, k: [tm.rands(k) for _ in xrange(l)]
df = DataFrame([col(20, 25) for _ in range(10)])
df.index.name = 'DataFrame Index'
@@ -446,11 +446,11 @@ def test_wide_repr_named(self):
self.assert_('DataFrame Index' in line)
reset_option('print.expand_frame_repr')
- set_option('test.interactive', False)
+ set_option('mode.sim_interactive', False)
set_option('print.line_width', 80)
def test_wide_repr_multiindex(self):
- set_option('test.interactive', True)
+ set_option('mode.sim_interactive', True)
col = lambda l, k: [tm.rands(k) for _ in xrange(l)]
midx = pandas.MultiIndex.from_arrays([np.array(col(10, 5)),
np.array(col(10, 5))])
@@ -471,11 +471,11 @@ def test_wide_repr_multiindex(self):
self.assert_('Level 0 Level 1' in line)
reset_option('print.expand_frame_repr')
- set_option('test.interactive', False)
+ set_option('mode.sim_interactive', False)
set_option('print.line_width', 80)
def test_wide_repr_multiindex_cols(self):
- set_option('test.interactive', True)
+ set_option('mode.sim_interactive', True)
col = lambda l, k: [tm.rands(k) for _ in xrange(l)]
midx = pandas.MultiIndex.from_arrays([np.array(col(10, 5)),
np.array(col(10, 5))])
@@ -497,11 +497,11 @@ def test_wide_repr_multiindex_cols(self):
self.assert_(len(wide_repr.splitlines()) == 14 * 10 - 1)
reset_option('print.expand_frame_repr')
- set_option('test.interactive', False)
+ set_option('mode.sim_interactive', False)
set_option('print.line_width', 80)
def test_wide_repr_unicode(self):
- set_option('test.interactive', True)
+ set_option('mode.sim_interactive', True)
col = lambda l, k: [tm.randu(k) for _ in xrange(l)]
df = DataFrame([col(20, 25) for _ in range(10)])
set_option('print.expand_frame_repr', False)
@@ -515,18 +515,18 @@ def test_wide_repr_unicode(self):
self.assert_(len(wider_repr) < len(wide_repr))
reset_option('print.expand_frame_repr')
- set_option('test.interactive', False)
+ set_option('mode.sim_interactive', False)
set_option('print.line_width', 80)
def test_wide_repr_wide_long_columns(self):
- set_option('test.interactive', True)
+ set_option('mode.sim_interactive', True)
df = DataFrame({'a': ['a'*30, 'b'*30], 'b': ['c'*70, 'd'*80]})
result = repr(df)
self.assertTrue('ccccc' in result)
self.assertTrue('ddddd' in result)
- set_option('test.interactive', False)
+ set_option('mode.sim_interactive', False)
def test_to_string(self):
from pandas import read_table
diff --git a/scripts/touchup_gh_issues.py b/scripts/touchup_gh_issues.py
new file mode 100755
index 0000000000000..924a236f629f6
--- /dev/null
+++ b/scripts/touchup_gh_issues.py
@@ -0,0 +1,42 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+from __future__ import print_function
+from collections import OrderedDict
+import sys
+import re
+
+"""
+Reads in stdin, replace all occurences of '#num' or 'GH #num' with
+links to github issue. dumps the issue anchors before the next
+section header
+"""
+
+pat = "((?:\s*GH\s*)?)#(\d{3,4})([^_]|$)?"
+rep_pat = r"\1GH\2_\3"
+anchor_pat =".. _GH{id}: https://github.com/pydata/pandas/issues/{id}"
+section_pat = "^pandas\s[\d\.]+\s*$"
+def main():
+ issues = OrderedDict()
+ while True:
+
+ line = sys.stdin.readline()
+ if not line:
+ break
+
+ if re.search(section_pat,line):
+ for id in issues:
+ print(anchor_pat.format(id=id).rstrip())
+ if issues:
+ print("\n")
+ issues = OrderedDict()
+
+ for m in re.finditer(pat,line):
+ id = m.group(2)
+ if id not in issues:
+ issues[id] = True
+ print(re.sub(pat, rep_pat,line).rstrip())
+ pass
+
+if __name__ == "__main__":
+ main()
| Rebased on top of #2504; merge that first.
see here: https://github.com/y-p/pandas/blob/prettify_release/RELEASE.rst
related #842.
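As a quick standalone sanity check of the new `scripts/touchup_gh_issues.py` (not part of the PR itself), the substitution patterns can be exercised on their own — `pat`, `rep_pat`, and `anchor_pat` below are copied from the script:

```python
import re

# patterns copied from scripts/touchup_gh_issues.py
pat = r"((?:\s*GH\s*)?)#(\d{3,4})([^_]|$)?"
rep_pat = r"\1GH\2_\3"
anchor_pat = ".. _GH{id}: https://github.com/pydata/pandas/issues/{id}"

# '#842' becomes a reST link target reference 'GH842_'
line = "fixed a bug, closes #842."
print(re.sub(pat, rep_pat, line))
# fixed a bug, closes GH842_.

# and the matching anchor line emitted before the next section header
print(anchor_pat.format(id="842"))
# .. _GH842: https://github.com/pydata/pandas/issues/842
```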
| https://api.github.com/repos/pandas-dev/pandas/pulls/2505 | 2012-12-12T14:07:57Z | 2012-12-13T14:52:22Z | null | 2012-12-13T14:52:22Z |
option callback on value change, use an option to control use_inf_as_null | diff --git a/RELEASE.rst b/RELEASE.rst
index 26f3265029696..890425b729324 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -99,7 +99,7 @@ pandas 0.10.0
instead. This is a legacy hack and can lead to subtle bugs.
- inf/-inf are no longer considered as NA by isnull/notnull. To be clear, this
is legacy cruft from early pandas. This behavior can be globally re-enabled
- using pandas.core.common.use_inf_as_na (#2050, #1919)
+ using the new option ``mode.use_inf_as_null`` (#2050, #1919)
- ``pandas.merge`` will now default to ``sort=False``. For many use cases
sorting the join keys is not necessary, and doing it by default is wasteful
- ``names`` handling in file parsing: if explicit column `names` passed,
diff --git a/doc/source/missing_data.rst b/doc/source/missing_data.rst
index f514139a9170f..b8f3468f82098 100644
--- a/doc/source/missing_data.rst
+++ b/doc/source/missing_data.rst
@@ -61,7 +61,7 @@ arise and we wish to also consider that "missing" or "null".
Until recently, for legacy reasons ``inf`` and ``-inf`` were also
considered to be "null" in computations. This is no longer the case by
-default; use the :func: `~pandas.core.common.use_inf_as_null` function to recover it.
+default; use the ``mode.use_inf_as_null`` option to recover it.
.. _missing.isnull:
diff --git a/pandas/core/common.py b/pandas/core/common.py
index d77d413d39571..386343dc38704 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -94,8 +94,8 @@ def isnull_old(obj):
else:
return obj is None
-def use_inf_as_null(flag):
- '''
+def _use_inf_as_null(key):
+ '''Option change callback for null/inf behaviour
Choose which replacement for numpy.isnan / -numpy.isfinite is used.
Parameters
@@ -113,6 +113,7 @@ def use_inf_as_null(flag):
* http://stackoverflow.com/questions/4859217/
programmatically-creating-variables-in-python/4859312#4859312
'''
+ flag = get_option(key)
if flag == True:
globals()['isnull'] = isnull_old
else:
@@ -1179,7 +1180,7 @@ def in_interactive_session():
returns True if running under python/ipython interactive shell
"""
import __main__ as main
- return not hasattr(main, '__file__') or get_option('test.interactive')
+ return not hasattr(main, '__file__') or get_option('mode.sim_interactive')
def in_qtconsole():
"""
diff --git a/pandas/core/config.py b/pandas/core/config.py
index 9bd62c38fa8e5..5a2e6ef093f30 100644
--- a/pandas/core/config.py
+++ b/pandas/core/config.py
@@ -25,7 +25,9 @@
- all options in a certain sub - namespace can be reset at once.
- the user can set / get / reset or ask for the description of an option.
- a developer can register and mark an option as deprecated.
-
+- you can register a callback to be invoked when the option value
+ is set or reset. Changing the stored value is considered misuse, but
+ is not verboten.
Implementation
==============
@@ -54,7 +56,7 @@
import warnings
DeprecatedOption = namedtuple('DeprecatedOption', 'key msg rkey removal_ver')
-RegisteredOption = namedtuple('RegisteredOption', 'key defval doc validator')
+RegisteredOption = namedtuple('RegisteredOption', 'key defval doc validator cb')
_deprecated_options = {} # holds deprecated option metadata
_registered_options = {} # holds registered option metadata
@@ -105,6 +107,9 @@ def _set_option(pat, value):
root, k = _get_root(key)
root[k] = value
+ if o and o.cb:
+ o.cb(key)
+
def _describe_option(pat='', _print_desc=True):
@@ -270,7 +275,7 @@ def __doc__(self):
######################################################
# Functions for use by pandas developers, in addition to User - api
-def register_option(key, defval, doc='', validator=None):
+def register_option(key, defval, doc='', validator=None, cb=None):
"""Register an option in the package-wide pandas config object
Parameters
@@ -280,6 +285,9 @@ def register_option(key, defval, doc='', validator=None):
doc - a string description of the option
validator - a function of a single argument, should raise `ValueError` if
called with a value which is not a legal value for the option.
+ cb - a function of a single argument "key", which is called
+ immediately after an option value is set/reset. key is
+ the full name of the option.
Returns
-------
@@ -321,7 +329,7 @@ def register_option(key, defval, doc='', validator=None):
# save the option metadata
_registered_options[key] = RegisteredOption(key=key, defval=defval,
- doc=doc, validator=validator)
+ doc=doc, validator=validator,cb=cb)
def deprecate_option(key, msg=None, rkey=None, removal_ver=None):
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index a854c012226cf..b2443e5b0b14e 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -139,10 +139,27 @@
cf.register_option('expand_frame_repr', True, pc_expand_repr_doc)
cf.register_option('line_width', 80, pc_line_width_doc)
-tc_interactive_doc="""
+tc_sim_interactive_doc="""
: boolean
Default False
Whether to simulate interactive mode for purposes of testing
"""
-with cf.config_prefix('test'):
- cf.register_option('interactive', False, tc_interactive_doc)
+with cf.config_prefix('mode'):
+ cf.register_option('sim_interactive', False, tc_sim_interactive_doc)
+
+use_inf_as_null_doc="""
+: boolean
+ True means treat None, NaN, INF, -INF as null (old way),
+ False means None and NaN are null, but INF, -INF are not null
+ (new way).
+"""
+
+# we don't want to start importing everything at the global context level
+# or we'll hit circular deps.
+def use_inf_as_null_cb(key):
+ from pandas.core.common import _use_inf_as_null
+ _use_inf_as_null(key)
+
+with cf.config_prefix('mode'):
+ cf.register_option('use_inf_as_null', False, use_inf_as_null_doc,
+ cb=use_inf_as_null_cb)
diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py
index e8be2a5cc9c4c..1646d71164834 100644
--- a/pandas/tests/test_common.py
+++ b/pandas/tests/test_common.py
@@ -5,9 +5,10 @@
import unittest
from pandas import Series, DataFrame, date_range, DatetimeIndex
-from pandas.core.common import notnull, isnull, use_inf_as_null
+from pandas.core.common import notnull, isnull
import pandas.core.common as com
import pandas.util.testing as tm
+import pandas.core.config as cf
import numpy as np
@@ -29,15 +30,15 @@ def test_notnull():
assert not notnull(None)
assert not notnull(np.NaN)
- use_inf_as_null(False)
+ cf.set_option("mode.use_inf_as_null",False)
assert notnull(np.inf)
assert notnull(-np.inf)
- use_inf_as_null(True)
+ cf.set_option("mode.use_inf_as_null",True)
assert not notnull(np.inf)
assert not notnull(-np.inf)
- use_inf_as_null(False)
+ cf.set_option("mode.use_inf_as_null",False)
float_series = Series(np.random.randn(5))
obj_series = Series(np.random.randn(5), dtype=object)
diff --git a/pandas/tests/test_config.py b/pandas/tests/test_config.py
index 814281dfc25e9..828bc85df4289 100644
--- a/pandas/tests/test_config.py
+++ b/pandas/tests/test_config.py
@@ -282,6 +282,31 @@ def test_config_prefix(self):
self.assertEqual(self.cf.get_option('a'), 1)
self.assertEqual(self.cf.get_option('b'), 2)
+ def test_callback(self):
+ k=[None]
+ v=[None]
+ def callback(key):
+ k.append(key)
+ v.append(self.cf.get_option(key))
+
+ self.cf.register_option('d.a', 'foo',cb=callback)
+ self.cf.register_option('d.b', 'foo',cb=callback)
+
+ del k[-1],v[-1]
+ self.cf.set_option("d.a","fooz")
+ self.assertEqual(k[-1],"d.a")
+ self.assertEqual(v[-1],"fooz")
+
+ del k[-1],v[-1]
+ self.cf.set_option("d.b","boo")
+ self.assertEqual(k[-1],"d.b")
+ self.assertEqual(v[-1],"boo")
+
+ del k[-1],v[-1]
+ self.cf.reset_option("d.b")
+ self.assertEqual(k[-1],"d.b")
+
+
# fmt.reset_printoptions and fmt.set_printoptions were altered
# to use core.config, test_format exercises those paths.
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 8b4866a643dde..7c93ece23237b 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -112,11 +112,11 @@ def test_repr_should_return_str (self):
self.assertTrue(type(df.__repr__() == str)) # both py2 / 3
def test_repr_no_backslash(self):
- pd.set_option('test.interactive', True)
+ pd.set_option('mode.sim_interactive', True)
df = DataFrame(np.random.randn(10, 4))
self.assertTrue('\\' not in repr(df))
- pd.reset_option('test.interactive')
+ pd.reset_option('mode.sim_interactive')
def test_to_string_repr_unicode(self):
buf = StringIO()
@@ -409,7 +409,7 @@ def test_frame_info_encoding(self):
fmt.set_printoptions(max_rows=200)
def test_wide_repr(self):
- set_option('test.interactive', True)
+ set_option('mode.sim_interactive', True)
col = lambda l, k: [tm.rands(k) for _ in xrange(l)]
df = DataFrame([col(20, 25) for _ in range(10)])
set_option('print.expand_frame_repr', False)
@@ -423,19 +423,19 @@ def test_wide_repr(self):
self.assert_(len(wider_repr) < len(wide_repr))
reset_option('print.expand_frame_repr')
- set_option('test.interactive', False)
+ set_option('mode.sim_interactive', False)
set_option('print.line_width', 80)
def test_wide_repr_wide_columns(self):
- set_option('test.interactive', True)
+ set_option('mode.sim_interactive', True)
df = DataFrame(randn(5, 3), columns=['a' * 90, 'b' * 90, 'c' * 90])
rep_str = repr(df)
self.assert_(len(rep_str.splitlines()) == 20)
- reset_option('test.interactive')
+ reset_option('mode.sim_interactive')
def test_wide_repr_named(self):
- set_option('test.interactive', True)
+ set_option('mode.sim_interactive', True)
col = lambda l, k: [tm.rands(k) for _ in xrange(l)]
df = DataFrame([col(20, 25) for _ in range(10)])
df.index.name = 'DataFrame Index'
@@ -454,11 +454,11 @@ def test_wide_repr_named(self):
self.assert_('DataFrame Index' in line)
reset_option('print.expand_frame_repr')
- set_option('test.interactive', False)
+ set_option('mode.sim_interactive', False)
set_option('print.line_width', 80)
def test_wide_repr_multiindex(self):
- set_option('test.interactive', True)
+ set_option('mode.sim_interactive', True)
col = lambda l, k: [tm.rands(k) for _ in xrange(l)]
midx = pandas.MultiIndex.from_arrays([np.array(col(10, 5)),
np.array(col(10, 5))])
@@ -479,11 +479,11 @@ def test_wide_repr_multiindex(self):
self.assert_('Level 0 Level 1' in line)
reset_option('print.expand_frame_repr')
- set_option('test.interactive', False)
+ set_option('mode.sim_interactive', False)
set_option('print.line_width', 80)
def test_wide_repr_multiindex_cols(self):
- set_option('test.interactive', True)
+ set_option('mode.sim_interactive', True)
col = lambda l, k: [tm.rands(k) for _ in xrange(l)]
midx = pandas.MultiIndex.from_arrays([np.array(col(10, 5)),
np.array(col(10, 5))])
@@ -505,11 +505,11 @@ def test_wide_repr_multiindex_cols(self):
self.assert_(len(wide_repr.splitlines()) == 14 * 10 - 1)
reset_option('print.expand_frame_repr')
- set_option('test.interactive', False)
+ set_option('mode.sim_interactive', False)
set_option('print.line_width', 80)
def test_wide_repr_unicode(self):
- set_option('test.interactive', True)
+ set_option('mode.sim_interactive', True)
col = lambda l, k: [tm.randu(k) for _ in xrange(l)]
df = DataFrame([col(20, 25) for _ in range(10)])
set_option('print.expand_frame_repr', False)
@@ -523,18 +523,18 @@ def test_wide_repr_unicode(self):
self.assert_(len(wider_repr) < len(wide_repr))
reset_option('print.expand_frame_repr')
- set_option('test.interactive', False)
+ set_option('mode.sim_interactive', False)
set_option('print.line_width', 80)
def test_wide_repr_wide_long_columns(self):
- set_option('test.interactive', True)
+ set_option('mode.sim_interactive', True)
df = DataFrame({'a': ['a'*30, 'b'*30], 'b': ['c'*70, 'd'*80]})
result = repr(df)
self.assertTrue('ccccc' in result)
self.assertTrue('ddddd' in result)
- set_option('test.interactive', False)
+ set_option('mode.sim_interactive', False)
def test_to_string(self):
from pandas import read_table
I'm not married to the "mode.X" option key, but be aware that the dynamically generated
docstrings for the options API favour a small number of top-level keys with multiple keys
under them, so I consolidated `test.interactive` into `mode.sim_interactive`.
Some sane taxonomy needs to be established early on to keep things tidy.
Be an option-naming fascist, I say.
http://martinfowler.com/bliki/TwoHardThings.html
related: #1919
| https://api.github.com/repos/pandas-dev/pandas/pulls/2504 | 2012-12-12T11:36:20Z | 2012-12-13T14:50:39Z | 2012-12-13T14:50:39Z | 2014-07-08T14:50:42Z |
BLD: travis deps (stats,timeseries, pytables on py2.x), clean setup.py, nose tricks | diff --git a/.gitignore b/.gitignore
index fbf28ba50f2c5..b388d94ba9052 100644
--- a/.gitignore
+++ b/.gitignore
@@ -26,4 +26,5 @@ pandas.egg-info
.tox
pandas/io/*.dat
pandas/io/*.json
-*.log
\ No newline at end of file
+*.log
+.noseids
diff --git a/.travis.yml b/.travis.yml
index 8d87780e83e50..ab36df340e1c5 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -45,10 +45,8 @@ before_install:
install:
- echo "Waldo2"
- ci/install.sh
+ - ci/print_versions.py # not including stats
script:
- echo "Waldo3"
- ci/script.sh
-
-after_script:
- - ci/print_versions.py # not including stats
diff --git a/ci/install.sh b/ci/install.sh
index 139382960b81a..f2894ceeae2a0 100755
--- a/ci/install.sh
+++ b/ci/install.sh
@@ -37,10 +37,26 @@ if [ x"$FULL_DEPS" == x"true" ]; then
sudo apt-get $APT_ARGS install python3-scipy;
fi
+ if [ ${TRAVIS_PYTHON_VERSION:0:1} == "2" ]; then
+ pip install tables
+ fi
+
pip install $PIP_ARGS --use-mirrors openpyxl pytz matplotlib;
pip install $PIP_ARGS --use-mirrors xlrd xlwt;
+ pip install $PIP_ARGS 'http://downloads.sourceforge.net/project/pytseries/scikits.timeseries/0.91.3/scikits.timeseries-0.91.3.tar.gz?r='
fi
if [ x"$VBENCH" == x"true" ]; then
pip $PIP_ARGS install sqlalchemy git+git://github.com/pydata/vbench.git;
fi
+
+#build and install pandas
+python setup.py build_ext install
+
+#HACK: pandas is a statsmodels dependency
+# so we need to install it after pandas
+if [ x"$FULL_DEPS" == x"true" ]; then
+ pip install patsy
+ # pick recent 0.5dev dec/2012
+ pip install git+git://github.com/statsmodels/statsmodels@c9062e43b8a5f7385537ca95#egg=statsmod
+fi;
diff --git a/ci/print_versions.py b/ci/print_versions.py
index d942e12f619ff..283c65aa5eeb3 100755
--- a/ci/print_versions.py
+++ b/ci/print_versions.py
@@ -29,6 +29,12 @@
except:
print("statsmodels: Not installed")
+try:
+ import scikits.timeseries as ts
+ print("scikits.timeseries: %s" % ts.__version__)
+except:
+ print("scikits.timeseries: Not installed")
+
try:
import dateutil
print("dateutil: %s" % dateutil.__version__)
diff --git a/ci/script.sh b/ci/script.sh
index e3c22e56fd054..28be485de7ac8 100755
--- a/ci/script.sh
+++ b/ci/script.sh
@@ -2,17 +2,6 @@
echo "inside $0"
-python setup.py build_ext install
-
-# HACK: pandas is a statsmodels dependency
-# and pandas has a reverse dep (for tests only)
-# so we need to install it after pandas
-if [ x"$FULL_DEPS" == x"true" ]; then
- # we need at least 0.5.0-dev, just pick a commit for now
- pip install git+git://github.com/statsmodels/statsmodels@c9062e43b8a5f7385537ca95#egg=statsmod
-
-fi;
-
if [ x"$VBENCH" != x"true" ]; then
nosetests --exe -w /tmp -A "not slow" pandas;
exit
diff --git a/setup.py b/setup.py
index d76b06bc05b64..ca948209567b2 100755
--- a/setup.py
+++ b/setup.py
@@ -68,21 +68,13 @@
"\n$ pip install distribute")
else:
- if sys.version_info[1] == 5:
- # dateutil >= 2.1 doesn't work on Python 2.5
- setuptools_kwargs = {
- 'install_requires': ['python-dateutil < 2',
- 'pytz',
- 'numpy >= 1.6'],
- 'zip_safe' : False,
- }
- else:
- setuptools_kwargs = {
- 'install_requires': ['python-dateutil',
- 'pytz',
- 'numpy >= 1.6'],
- 'zip_safe' : False,
- }
+ setuptools_kwargs = {
+ 'install_requires': ['python-dateutil',
+ 'pytz',
+ 'numpy >= 1.6.1'],
+ 'zip_safe' : False,
+ }
+
if not _have_setuptools:
try:
import numpy
@@ -101,7 +93,7 @@
sys.exit(nonumpy_msg)
if np.__version__ < '1.6.1':
- msg = "pandas requires NumPy >= 1.6 due to datetime64 dependency"
+ msg = "pandas requires NumPy >= 1.6.1 due to datetime64 dependency"
sys.exit(msg)
from distutils.extension import Extension
diff --git a/test_fast.sh b/test_fast.sh
index 1dc164429f395..2ae736e952839 100755
--- a/test_fast.sh
+++ b/test_fast.sh
@@ -1 +1 @@
-nosetests -A "not slow" pandas $*
+nosetests -A "not slow" pandas --with-id $*
Note that this brings up some currently failing cmov tests (@changhiskhan) which
were previously skipped because scikits.timeseries was not available on travis.
I picked the current statsmodels 0.5 git master commit hash as the
version to be installed, since pypi 0.5.0-beta raises a failing test in WLS
See http://nose.readthedocs.org/en/latest/plugins/testid.html
for a nose-fu tidbit.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2500 | 2012-12-12T07:55:27Z | 2012-12-12T16:41:21Z | 2012-12-12T16:41:21Z | 2012-12-13T05:50:51Z |
CLN: remove new has_index_names argument from ExcelFile.parse() before rls | diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index dd46d5758eb1b..f27ca2fe5beea 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -1805,7 +1805,7 @@ def __repr__(self):
return object.__repr__(self)
def parse(self, sheetname, header=0, skiprows=None, skip_footer=0,
- index_col=None, has_index_names=False, parse_cols=None, parse_dates=False,
+ index_col=None, parse_cols=None, parse_dates=False,
date_parser=None, na_values=None, thousands=None, chunksize=None,
**kwds):
"""
@@ -1824,9 +1824,6 @@ def parse(self, sheetname, header=0, skiprows=None, skip_footer=0,
index_col : int, default None
Column to use as the row labels of the DataFrame. Pass None if
there is no such column
- has_index_names: boolean, default False
- True if the cols defined in index_col have an index name and are
- not in the header
parse_cols : int or list, default None
If None then parse all columns,
If int then indicates last column to be parsed
@@ -1840,6 +1837,12 @@ def parse(self, sheetname, header=0, skiprows=None, skip_footer=0,
-------
parsed : DataFrame
"""
+
+ # has_index_names: boolean, default False
+ # True if the cols defined in index_col have an index name and are
+ # not in the header
+ has_index_names = False  # removed as a new API argument for now
+
skipfooter = kwds.pop('skipfooter', None)
if skipfooter is not None:
skip_footer = skipfooter
| better not introduce API changes just yet
| https://api.github.com/repos/pandas-dev/pandas/pulls/2498 | 2012-12-12T06:37:10Z | 2012-12-13T14:54:16Z | null | 2014-07-14T09:43:09Z |
ENH: ndim tables in HDFStore (allow indexables to be passed in) | diff --git a/doc/source/io.rst b/doc/source/io.rst
index bbf473628cafb..2c5dac3931c0d 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1001,7 +1001,7 @@ Objects can be written to the file just like adding key-value pairs to a dict:
store['wp'] = wp
# the type of stored data
- store.handle.root.wp._v_attrs.pandas_type
+ store.root.wp._v_attrs.pandas_type
store
@@ -1037,8 +1037,7 @@ Storing in Table format
``HDFStore`` supports another ``PyTables`` format on disk, the ``table`` format. Conceptually a ``table`` is shaped
very much like a DataFrame, with rows and columns. A ``table`` may be appended to in the same or other sessions.
-In addition, delete & query type operations are supported. You can create an index with ``create_table_index``
-after data is already in the table (this may become automatic in the future or an option on appending/putting a ``table``).
+In addition, delete & query type operations are supported.
.. ipython:: python
:suppress:
@@ -1061,11 +1060,7 @@ after data is already in the table (this may become automatic in the future or a
store.select('df')
# the type of stored data
- store.handle.root.df._v_attrs.pandas_type
-
- # create an index
- store.create_table_index('df')
- store.handle.root.df.table
+ store.root.df._v_attrs.pandas_type
Hierarchical Keys
~~~~~~~~~~~~~~~~~
@@ -1090,7 +1085,7 @@ Storing Mixed Types in a Table
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Storing mixed-dtype data is supported. Strings are store as a fixed-width using the maximum size of the appended column. Subsequent appends will truncate strings at this length.
-Passing ``min_itemsize = { column_name : size }`` as a paremeter to append will set a larger minimum for the column. Storing ``floats, strings, ints, bools`` are currently supported.
+Passing ``min_itemsize = { 'values' : size }`` as a parameter to append will set a larger minimum for the string columns. Storing ``floats, strings, ints, bools`` is currently supported.
.. ipython:: python
@@ -1099,11 +1094,14 @@ Passing ``min_itemsize = { column_name : size }`` as a paremeter to append will
df_mixed['int'] = 1
df_mixed['bool'] = True
- store.append('df_mixed',df_mixed)
+ store.append('df_mixed', df_mixed, min_itemsize = { 'values' : 50 })
df_mixed1 = store.select('df_mixed')
df_mixed1
df_mixed1.get_dtype_counts()
+ # we have provided a minimum string column size
+ store.root.df_mixed.table
+
Querying a Table
~~~~~~~~~~~~~~~~
@@ -1135,41 +1133,63 @@ Queries are built up using a list of ``Terms`` (currently only **anding** of ter
store
store.select('wp',[ 'major_axis>20000102', ('minor_axis', '=', ['A','B']) ])
+Indexing
+~~~~~~~~
+You can create an index for a table with ``create_table_index`` after data is already in the table (after an ``append/put`` operation). Creating a table index is **highly** encouraged. This will speed your queries a great deal when you use a ``select`` with the indexed dimension as the ``where``. It is not done automagically now because you may want to index axes other than the default (except in the case of a DataFrame, where it almost always makes sense to index the ``index``).
+
+.. ipython:: python
+
+ # create an index
+ store.create_table_index('df')
+ i = store.root.df.table.cols.index.index
+ i.optlevel, i.kind
+
+ # change an index by passing new parameters
+ store.create_table_index('df', optlevel = 9, kind = 'full')
+ i = store.root.df.table.cols.index.index
+ i.optlevel, i.kind
+
+
Delete from a Table
~~~~~~~~~~~~~~~~~~~
.. ipython:: python
+ # returns the number of rows deleted
store.remove('wp', 'major_axis>20000102' )
store.select('wp')
Notes & Caveats
~~~~~~~~~~~~~~~
- - Selection by items (the top level panel dimension) is not possible; you always get all of the items in the returned Panel
- Once a ``table`` is created its items (Panel) / columns (DataFrame) are fixed; only exactly the same columns can be appended
- You can not append/select/delete to a non-table (table creation is determined on the first append, or by passing ``table=True`` in a put operation)
- - ``PyTables`` only supports fixed-width string columns in ``tables``. The sizes of a string based indexing column (e.g. *column* or *minor_axis*) are determined as the maximum size of the elements in that axis or by passing the parameter ``min_itemsize`` on the first table creation (``min_itemsize`` can be an integer or a dict of column name to an integer). If subsequent appends introduce elements in the indexing axis that are larger than the supported indexer, an Exception will be raised (otherwise you could have a silent truncation of these indexers, leading to loss of information). This is **ONLY** necessary for storing ``Panels`` (as the indexing column is stored directly in a column)
+ - ``HDFStore`` is **not-threadsafe for writing**. The underlying ``PyTables`` only supports concurrent reads (via threading or processes). If you need reading and writing *at the same time*, you need to serialize these operations in a single thread in a single process. You will corrupt your data otherwise. See the issue <https://github.com/pydata/pandas/issues/2397> for more information.
+
+ - ``PyTables`` only supports fixed-width string columns in ``tables``. The sizes of a string based indexing column (e.g. *columns* or *minor_axis*) are determined as the maximum size of the elements in that axis or by passing the parameter ``min_itemsize`` on the first table creation (``min_itemsize`` can be an integer or a dict of column name to an integer). If subsequent appends introduce elements in the indexing axis that are larger than the supported indexer, an Exception will be raised (otherwise you could have a silent truncation of these indexers, leading to loss of information). Just to be clear, this fixed-width restriction applies to **indexables** (the indexing columns) and **string values** in a mixed_type table.
.. ipython:: python
- store.append('wp_big_strings', wp, min_itemsize = 30)
+ store.append('wp_big_strings', wp, min_itemsize = { 'minor_axis' : 30 })
wp = wp.rename_axis(lambda x: x + '_big_strings', axis=2)
store.append('wp_big_strings', wp)
store.select('wp_big_strings')
+ # we have provided a minimum minor_axis indexable size
+ store.root.wp_big_strings.table
+
Compatibility
~~~~~~~~~~~~~
0.10 of ``HDFStore`` is backwards compatible for reading tables created in a prior version of pandas,
-however, query terms using the prior (undocumented) methodology are unsupported. You must read in the entire
-file and write it out using the new format to take advantage of the updates.
+however, query terms using the prior (undocumented) methodology are unsupported. ``HDFStore`` will issue a warning if you try to use a prior-version format file. You must read in the entire
+file and write it out using the new format to take advantage of the updates. The group attribute ``pandas_version`` contains the version information.
Performance
~~~~~~~~~~~
- - ``Tables`` come with a performance penalty as compared to regular stores. The benefit is the ability to append/delete and query (potentially very large amounts of data).
+ - ``Tables`` come with a writing performance penalty as compared to regular stores. The benefit is the ability to append/delete and query (potentially very large amounts of data).
Write times are generally longer as compared with regular stores. Query times can be quite fast, especially on an indexed axis.
- ``Tables`` can (as of 0.10.0) be expressed as different types.
@@ -1177,12 +1197,31 @@ Performance
- ``WORMTable`` (pending implementation) - is available to facilitate very fast writing of tables that are also queryable (but CANNOT support appends)
- To delete a lot of data, it is sometimes better to erase the table and rewrite it. ``PyTables`` tends to increase the file size with deletions
- - In general it is best to store Panels with the most frequently selected dimension in the minor axis and a time/date like dimension in the major axis, but this is not required. Panels can have any major_axis and minor_axis type that is a valid Panel indexer.
- - No dimensions are currently indexed automagically (in the ``PyTables`` sense); these require an explict call to ``create_table_index``
- ``Tables`` offer better performance when compressed after writing them (as opposed to turning on compression at the very beginning)
use the pytables utilities ``ptrepack`` to rewrite the file (and also can change compression methods)
- Duplicate rows can be written, but are filtered out in selection (with the last items being selected; thus a table is unique on major, minor pairs)
+Experimental
+~~~~~~~~~~~~
+
+HDFStore supports ``Panel4D`` storage.
+
+.. ipython:: python
+
+ p4d = Panel4D({ 'l1' : wp })
+ p4d
+ store.append('p4d', p4d)
+ store
+
+These, by default, index the three axes ``items, major_axis, minor_axis``. On an ``AppendableTable`` it is possible to set up a different indexing scheme with the first append, depending on how you want to store your data. Pass the ``axes`` keyword with a list of dimensions (currently this must be exactly 1 less than the total dimensions of the object). This cannot be changed after table creation.
+
+.. ipython:: python
+
+ from pandas.io.pytables import Term
+ store.append('p4d2', p4d, axes = ['labels','major_axis','minor_axis'])
+ store
+ store.select('p4d2', [ Term('labels=l1'), Term('items=Item1'), Term('minor_axis=A_big_strings') ])
+
.. ipython:: python
:suppress:
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 86627563854b3..91bd27ff510ef 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -10,6 +10,7 @@
import re
import copy
import itertools
+import warnings
import numpy as np
from pandas import (
@@ -20,7 +21,7 @@
from pandas.tseries.api import PeriodIndex, DatetimeIndex
from pandas.core.common import adjoin
from pandas.core.algorithms import match, unique
-
+from pandas.core.strings import str_len
from pandas.core.categorical import Factor
from pandas.core.common import _asarray_tuplesafe, _try_sort
from pandas.core.internals import BlockManager, make_block, form_blocks
@@ -34,6 +35,11 @@
from contextlib import contextmanager
+# versioning attribute
+_version = '0.10'
+
+class IncompatibilityWarning(Warning): pass
+
# reading and writing the full object in one go
_TYPE_MAP = {
Series: 'series',
@@ -85,9 +91,12 @@ def _tables():
_table_mod = tables
# version requirements
- major, minor, subv = tables.__version__.split('.')
- if int(major) >= 2 and int(minor[0]) >= 3:
- _table_supports_index = True
+ ver = tables.__version__.split('.')
+ try:
+ if int(ver[0]) >= 2 and int(ver[1][0]) >= 3:
+ _table_supports_index = True
+ except:
+ pass
return _table_mod
@@ -327,7 +336,7 @@ def get(self, key):
raise KeyError('No object named %s in the file' % key)
return self._read_group(group)
- def select(self, key, where=None):
+ def select(self, key, where=None, **kwargs):
"""
Retrieve pandas object stored in file, optionally based on where
criteria
@@ -341,9 +350,7 @@ def select(self, key, where=None):
group = self.get_node(key)
if group is None:
raise KeyError('No object named %s in the file' % key)
- if where is not None and not _is_table_type(group):
- raise Exception('can only select with where on objects written as tables')
- return self._read_group(group, where)
+ return self._read_group(group, where, **kwargs)
def put(self, key, value, table=False, append=False,
compression=None, **kwargs):
@@ -390,7 +397,7 @@ def remove(self, key, where=None):
if group is not None:
# remove the node
- if where is None or not len(where):
+ if where is None:
group = self.get_node(key)
group._f_remove(recursive=True)
@@ -433,8 +440,9 @@ def create_table_index(self, key, **kwargs):
"""
# version requirements
+ _tables()
if not _table_supports_index:
- raise("PyTables >= 2.3 is required for table indexing")
+ raise Exception("PyTables >= 2.3 is required for table indexing")
group = self.get_node(key)
if group is None: return
@@ -498,6 +506,8 @@ def _write_to_group(self, key, value, table=False, append=False,
wrapper(value)
group._v_attrs.pandas_type = kind
+ group._v_attrs.pandas_version = _version
+ #group._v_attrs.meta = getattr(value,'meta',None)
def _write_series(self, group, series):
self._write_index(group, 'index', series.index)
@@ -617,10 +627,6 @@ def _read_block_manager(self, group):
return BlockManager(blocks, axes)
- def _write_frame_table(self, group, df, append=False, comp=None, **kwargs):
- t = create_table(self, group, typ = 'appendable_frame')
- t.write(axes_to_index=[0], obj=df, append=append, compression=comp, **kwargs)
-
def _write_wide(self, group, panel):
panel._consolidate_inplace()
self._write_block_manager(group, panel._data)
@@ -628,23 +634,33 @@ def _write_wide(self, group, panel):
def _read_wide(self, group, where=None):
return Panel(self._read_block_manager(group))
- def _write_wide_table(self, group, panel, append=False, comp=None, **kwargs):
- t = create_table(self, group, typ = 'appendable_panel')
- t.write(axes_to_index=[1,2], obj=panel,
- append=append, compression=comp, **kwargs)
-
- def _write_ndim_table(self, group, obj, append=False, comp=None, axes_to_index=None, **kwargs):
- if axes_to_index is None:
- axes_to_index=[1,2,3]
+ def _write_ndim_table(self, group, obj, append=False, comp=None, axes=None, **kwargs):
+ if axes is None:
+ axes = [1,2,3]
t = create_table(self, group, typ = 'appendable_ndim')
- t.write(axes_to_index=axes_to_index, obj=obj,
+ t.write(axes=axes, obj=obj,
append=append, compression=comp, **kwargs)
- def _read_wide_table(self, group, where=None):
- t = create_table(self, group)
+ def _read_ndim_table(self, group, where=None, **kwargs):
+ t = create_table(self, group, **kwargs)
return t.read(where)
- _read_ndim_table = _read_wide_table
+ def _write_frame_table(self, group, df, append=False, comp=None, axes=None, **kwargs):
+ if axes is None:
+ axes = [0]
+ t = create_table(self, group, typ = 'appendable_frame')
+ t.write(axes=axes, obj=df, append=append, compression=comp, **kwargs)
+
+ _read_frame_table = _read_ndim_table
+
+ def _write_wide_table(self, group, panel, append=False, comp=None, axes=None, **kwargs):
+ if axes is None:
+ axes = [1,2]
+ t = create_table(self, group, typ = 'appendable_panel')
+ t.write(axes=axes, obj=panel,
+ append=append, compression=comp, **kwargs)
+
+ _read_wide_table = _read_ndim_table
def _write_index(self, group, key, index):
if isinstance(index, MultiIndex):
@@ -827,11 +843,16 @@ def _write_array(self, group, key, value):
getattr(group, key)._v_attrs.transposed = transposed
- def _read_group(self, group, where=None):
+ def _read_group(self, group, where=None, **kwargs):
kind = group._v_attrs.pandas_type
kind = _LEGACY_MAP.get(kind, kind)
handler = self._get_handler(op='read', kind=kind)
- return handler(group, where)
+ v = handler(group, where, **kwargs)
+ #if v is not None:
+ # meta = getattr(group._v_attrs,'meta',None)
+ # if meta is not None:
+ # v.meta = meta
+ return v
def _read_series(self, group, where=None):
index = self._read_index(group, 'index')
@@ -860,11 +881,6 @@ def _read_index_legacy(self, group, key):
kind = node._v_attrs.kind
return _unconvert_index_legacy(data, kind)
- def _read_frame_table(self, group, where=None):
- t = create_table(self, group)
- return t.read(where)
-
-
class IndexCol(object):
""" an index column description class
@@ -928,6 +944,10 @@ def __repr__(self):
__str__ = __repr__
+ def __eq__(self, other):
+ """ compare 2 col items """
+ return all([ getattr(self,a,None) == getattr(other,a,None) for a in ['name','cname','axis','pos'] ])
+
def copy(self):
new_self = copy.copy(self)
return new_self
@@ -981,16 +1001,22 @@ def validate_and_set(self, table, append, **kwargs):
self.validate_attr(append)
self.set_attr()
- def validate_col(self):
- """ validate this column & set table data for it """
+ def validate_col(self, itemsize = None):
+        """ validate this column; return the existing column's itemsize (or None) """
# validate this column for string truncation (or reset to the max size)
- if self.kind == 'string':
+ dtype = getattr(self,'dtype',None)
+ if self.kind == 'string' or (dtype is not None and dtype.startswith('string')):
c = self.col
if c is not None:
- if c.itemsize < self.itemsize:
- raise Exception("[%s] column has a min_itemsize of [%s] but itemsize [%s] is required!" % (self.cname,self.itemsize,c.itemsize))
+ if itemsize is None:
+ itemsize = self.itemsize
+ if c.itemsize < itemsize:
+                    raise Exception("[%s] column has an itemsize of [%s], but a min_itemsize of [%s] is required!" % (self.cname, c.itemsize, itemsize))
+ return c.itemsize
+
+ return None
def validate_attr(self, append):
@@ -1034,12 +1060,21 @@ def __init__(self, values = None, kind = None, typ = None, cname = None, data =
def __repr__(self):
return "name->%s,cname->%s,dtype->%s,shape->%s" % (self.name,self.cname,self.dtype,self.shape)
+ def __eq__(self, other):
+ """ compare 2 col items """
+ return all([ getattr(self,a,None) == getattr(other,a,None) for a in ['name','cname','dtype','pos'] ])
+
def set_data(self, data):
self.data = data
if data is not None:
if self.dtype is None:
self.dtype = data.dtype.name
+ def take_data(self):
+ """ return the data & release the memory """
+ self.data, data = None, self.data
+ return data
+
@property
def shape(self):
return getattr(self.data,'shape',None)
@@ -1111,9 +1146,10 @@ class Table(object):
obj_type = None
ndim = None
- def __init__(self, parent, group):
+ def __init__(self, parent, group, **kwargs):
self.parent = parent
self.group = group
+ self.version = getattr(group._v_attrs,'pandas_version',None)
self.index_axes = []
self.non_index_axes = []
self.values_axes = []
@@ -1129,10 +1165,25 @@ def pandas_type(self):
def __repr__(self):
""" return a pretty representatgion of myself """
- return "%s (typ->%s,nrows->%s)" % (self.pandas_type,self.table_type_short,self.nrows)
+ self.infer_axes()
+ return "%s (typ->%s,nrows->%s,indexers->[%s])" % (self.pandas_type,self.table_type_short,self.nrows,','.join([ a.name for a in self.index_axes ]))
__str__ = __repr__
+ def copy(self):
+ new_self = copy.copy(self)
+ return new_self
+
+ def __eq__(self, other):
+ """ return True if we are 'equal' to this other table (in all respects that matter) """
+ for c in ['index_axes','non_index_axes','values_axes']:
+ if getattr(self,c,None) != getattr(other,c,None):
+ return False
+ return True
+
+ def __ne__(self, other):
+ return not self.__eq__(other)
+
@property
def nrows(self):
return getattr(self.table,'nrows',None)
@@ -1178,6 +1229,15 @@ def description(self):
def axes(self):
return itertools.chain(self.index_axes, self.values_axes)
+ @property
+ def is_transposed(self):
+ return False
+
+ @property
+ def data_orientation(self):
+        """ return a tuple of my permuted axes, non_indexable at the front """
+ return tuple(itertools.chain([ a[0] for a in self.non_index_axes ], [ a.axis for a in self.index_axes ]))
+
def queryables(self):
""" return a dict of the kinds allowable columns for this object """
return dict([ (a.cname,a.kind) for a in self.index_axes ] + [ (self.obj_type._AXIS_NAMES[axis],None) for axis, values in self.non_index_axes ])
@@ -1197,6 +1257,12 @@ def set_attrs(self):
self.attrs.values_cols = self.values_cols()
self.attrs.non_index_axes = self.non_index_axes
+ def validate_version(self, where = None):
+ """ are we trying to operate on an old version? """
+ if where is not None:
+ if self.version is None or float(self.version) < 0.1:
+                warnings.warn("where criteria is being ignored as this version is too old (or not defined) [%s]" % self.version, IncompatibilityWarning)
+
def validate(self):
""" raise if we have an incompitable table type with the current """
et = getattr(self.attrs,'table_type',None)
@@ -1243,10 +1309,7 @@ def create_index(self, columns = None, optlevel = None, kind = None):
"""
- table = self.table
- if table is None: return
-
- self.infer_axes()
+ if not self.infer_axes(): return
if columns is None:
columns = [ self.index_axes[0].name ]
@@ -1259,16 +1322,39 @@ def create_index(self, columns = None, optlevel = None, kind = None):
if kind is not None:
kw['kind'] = kind
+ table = self.table
for c in columns:
v = getattr(table.cols,c,None)
- if v is not None and not v.is_indexed:
- v.createIndex(**kw)
+ if v is not None:
+
+ # remove the index if the kind/optlevel have changed
+ if v.is_indexed:
+ index = v.index
+ cur_optlevel = index.optlevel
+ cur_kind = index.kind
+
+ if kind is not None and cur_kind != kind:
+ v.removeIndex()
+ else:
+ kw['kind'] = cur_kind
+
+ if optlevel is not None and cur_optlevel != optlevel:
+ v.removeIndex()
+ else:
+ kw['optlevel'] = cur_optlevel
+
+ # create the index
+ if not v.is_indexed:
+ v.createIndex(**kw)
def read_axes(self, where):
- """ create and return the axes sniffed from the table """
+        """ create and return the axes sniffed from the table: return a boolean indicating success """
+
+ # validate the version
+ self.validate_version(where)
# infer the data kind
- self.infer_axes()
+ if not self.infer_axes(): return False
# create the selection
self.selection = Selection(self, where)
@@ -1278,25 +1364,54 @@ def read_axes(self, where):
for a in self.axes:
a.convert(self.selection)
+ return True
+
def infer_axes(self):
- """ infer the axes from the indexables """
+ """ infer the axes from the indexables:
+ return a boolean indicating if we have a valid table or not """
+
+ table = self.table
+ if table is None:
+ return False
+
self.index_axes, self.values_axes = [ a.infer(self.table) for a in self.indexables if a.is_indexable ], [ a.infer(self.table) for a in self.indexables if not a.is_indexable ]
- self.non_index_axes = getattr(self.attrs,'non_index_axes',None) or []
+ self.non_index_axes = getattr(self.attrs,'non_index_axes',None) or []
- def create_axes(self, axes_to_index, obj, validate = True, min_itemsize = None):
+ return True
+
+ def get_data_blocks(self, obj):
+ """ return the data blocks for this obj """
+ return obj._data.blocks
+
+ def create_axes(self, axes, obj, validate = True, min_itemsize = None):
""" create and return the axes
leagcy tables create an indexable column, indexable index, non-indexable fields
"""
- self.index_axes = []
- self.non_index_axes = []
+ # map axes to numbers
+ axes = set([ obj._get_axis_number(a) for a in axes ])
+
+ # do we have an existing table (if so, use its axes)?
+ if self.infer_axes():
+ existing_table = self.copy()
+ axes = [ a.axis for a in existing_table.index_axes]
+ else:
+ existing_table = None
+
+        # currently only support ndim-1 axes
+        if len(axes) != self.ndim-1:
+            raise Exception("currently only support ndim-1 indexers in an AppendableTable")
+
+ # create according to the new data
+ self.index_axes = []
+ self.non_index_axes = []
# create axes to index and non_index
j = 0
for i, a in enumerate(obj.axes):
- if i in axes_to_index:
+ if i in axes:
name = obj._AXIS_NAMES[i]
self.index_axes.append(_convert_index(a).set_name(name).set_axis(i).set_pos(j))
j += 1
@@ -1309,18 +1424,41 @@ def create_axes(self, axes_to_index, obj, validate = True, min_itemsize = None):
for a in self.axes:
a.maybe_set_size(min_itemsize = min_itemsize)
+
+ blocks = self.get_data_blocks(obj)
+
# add my values
self.values_axes = []
- for i, b in enumerate(obj._data.blocks):
+ for i, b in enumerate(blocks):
+
+ # shape of the data column are the indexable axes
+ shape = b.shape[0]
values = b.values
# a string column
if b.dtype.name == 'object':
- atom = _tables().StringCol(itemsize = values.dtype.itemsize, shape = b.shape[0])
- utype = 'S8'
+
+ # itemsize is the maximum length of a string (along any dimension)
+ itemsize = _itemsize_string_array(values)
+
+ # specified min_itemsize?
+                # specified min_itemsize? (guard against a dict with no 'values' entry)
+                if isinstance(min_itemsize, dict) and min_itemsize.get('values') is not None:
+                    itemsize = max(int(min_itemsize.get('values')), itemsize)
+
+ # check for column in the values conflicts
+ if existing_table is not None and validate:
+ eci = existing_table.values_axes[i].validate_col(itemsize)
+ if eci > itemsize:
+ itemsize = eci
+
+ atom = _tables().StringCol(itemsize = itemsize, shape = shape)
+ utype = 'S%s' % itemsize
+ kind = 'string'
+
else:
- atom = getattr(_tables(),"%sCol" % b.dtype.name.capitalize())(shape = b.shape[0])
+ atom = getattr(_tables(),"%sCol" % b.dtype.name.capitalize())(shape = shape)
utype = atom._deftype
+ kind = b.dtype.name
# coerce data to this type
try:
@@ -1328,10 +1466,15 @@ def create_axes(self, axes_to_index, obj, validate = True, min_itemsize = None):
except (Exception), detail:
raise Exception("cannot coerce data type -> [dtype->%s]" % b.dtype.name)
- dc = DataCol.create_for_block(i = i, values = list(b.items), kind = b.dtype.name, typ = atom, data = values, pos = j)
+ dc = DataCol.create_for_block(i = i, values = list(b.items), kind = kind, typ = atom, data = values, pos = j)
j += 1
self.values_axes.append(dc)
+ # validate the axes if we have an existing table
+ if existing_table is not None:
+ if self != existing_table:
+                raise Exception("trying to write axes [%s] that are invalid for the existing table [%s]!" % (axes,self.group))
+
def create_description(self, compression = None, complevel = None):
""" create the description of the table from the axes & values """
@@ -1392,10 +1535,11 @@ class LegacyTable(Table):
that can be easily searched
"""
- _indexables = [IndexCol(name = 'index', axis = 0, pos = 0),
- IndexCol(name = 'column', axis = 1, pos = 1, index_kind = 'columns_kind'),
+ _indexables = [IndexCol(name = 'index', axis = 1, pos = 0),
+ IndexCol(name = 'column', axis = 2, pos = 1, index_kind = 'columns_kind'),
DataCol( name = 'fields', cname = 'values', kind_attr = 'fields', pos = 2) ]
table_type = 'legacy'
+ ndim = 3
def write(self, **kwargs):
raise Exception("write operations are not allowed on legacy tables!")
@@ -1403,8 +1547,13 @@ def write(self, **kwargs):
def read(self, where=None):
""" we have n indexable columns, with an arbitrary number of data axes """
- self.read_axes(where)
+
+ _dm = create_debug_memory(self.parent)
+ _dm('start')
+
+ if not self.read_axes(where): return None
+ _dm('read_axes')
indicies = [ i.values for i in self.index_axes ]
factors = [ Factor.from_array(i) for i in indicies ]
levels = [ f.levels for f in factors ]
@@ -1424,14 +1573,23 @@ def read(self, where=None):
for c in self.values_axes:
# the data need to be sorted
- sorted_values = c.data.take(sorter, axis=0)
+ sorted_values = c.take_data().take(sorter, axis=0)
take_labels = [ l.take(sorter) for l in labels ]
items = Index(c.values)
+ _dm('pre block')
+ block = block2d_to_blocknd(sorted_values, items, tuple(N), take_labels)
+ _dm('block created done')
- block = block2d_to_blocknd(sorted_values, items, tuple(N), take_labels)
+ # create the object
mgr = BlockManager([block], [items] + levels)
- objs.append(self.obj_type(mgr))
+ obj = self.obj_type(mgr)
+
+ # permute if needed
+ if self.is_transposed:
+ obj = obj.transpose(*self.data_orientation)
+
+ objs.append(obj)
else:
if not self._quiet: # pragma: no cover
@@ -1459,16 +1617,28 @@ def read(self, where=None):
lp = DataFrame(new_values, index=new_index, columns=lp.columns)
objs.append(lp.to_panel())
+ _dm('pre-concat')
+
# create the composite object
- wp = concat(objs, axis = 0, verify_integrity = True)
+ if len(objs) == 1:
+ wp = objs[0]
+ else:
+ wp = concat(objs, axis = 0, verify_integrity = True)
+
+ _dm('post-concat')
# reorder by any non_index_axes
for axis,labels in self.non_index_axes:
wp = wp.reindex_axis(labels,axis=axis,copy=False)
+ # apply the selection filters (but keep in the same order)
if self.selection.filter:
- new_minor = sorted(set(wp.minor_axis) & self.selection.filter)
- wp = wp.reindex(minor=new_minor, copy = False)
+ filter_axis_name = wp._get_axis_name(self.non_index_axes[0][0])
+ ordered = getattr(wp,filter_axis_name)
+ new_axis = sorted(ordered & self.selection.filter)
+ wp = wp.reindex(**{ filter_axis_name : new_axis, 'copy' : False })
+
+ _dm('done')
return wp
@@ -1489,7 +1659,7 @@ class AppendableTable(LegacyTable):
_indexables = None
table_type = 'appendable'
- def write(self, axes_to_index, obj, append=False, compression=None,
+ def write(self, axes, obj, append=False, compression=None,
complevel=None, min_itemsize = None, **kwargs):
# create the table if it doesn't exist (or get it if it does)
@@ -1498,7 +1668,7 @@ def write(self, axes_to_index, obj, append=False, compression=None,
self.handle.removeNode(self.group, 'table')
# create the axes
- self.create_axes(axes_to_index = axes_to_index, obj = obj, validate = append, min_itemsize = min_itemsize)
+ self.create_axes(axes = axes, obj = obj, validate = append, min_itemsize = min_itemsize)
if 'table' not in self.group:
@@ -1530,9 +1700,8 @@ def write(self, axes_to_index, obj, append=False, compression=None,
def write_data(self):
""" fast writing of data: requires specific cython routines each axis shape """
+ # create the masks & values
masks = []
-
- # create the masks
for a in self.values_axes:
# figure the mask: only do if we can successfully process this column, otherwise ignore the mask
@@ -1549,7 +1718,7 @@ def write_data(self):
for m in masks[1:]:
m = mask & m
- # the arguments & values
+ # the arguments
args = [ a.cvalues for a in self.index_axes ]
values = [ a.data for a in self.values_axes ]
@@ -1565,14 +1734,18 @@ def write_data(self):
raise Exception("tables cannot write this data -> %s" % str(detail))
def delete(self, where = None):
- if where is None:
- return super(LegacyTable, self).delete()
+
+ # delete all rows (and return the nrows)
+ if where is None or not len(where):
+ nrows = self.nrows
+ self.handle.removeNode(self.group, recursive=True)
+ return nrows
# infer the data kind
- table = self.table
- self.infer_axes()
+ if not self.infer_axes(): return None
# create the selection
+ table = self.table
self.selection = Selection(self, where)
self.selection.select_coords()
@@ -1603,32 +1776,53 @@ class AppendableFrameTable(AppendableTable):
ndim = 2
obj_type = DataFrame
+ @property
+ def is_transposed(self):
+ return self.index_axes[0].axis == 1
+
+ def get_data_blocks(self, obj):
+ """ these are written transposed """
+ if self.is_transposed:
+ obj = obj.T
+ return obj._data.blocks
+
def read(self, where=None):
- self.read_axes(where)
+ if not self.read_axes(where): return None
index = Index(self.index_axes[0].values)
frames = []
for a in self.values_axes:
columns = Index(a.values)
- block = make_block(a.cvalues.T, columns, columns)
- mgr = BlockManager([ block ], [ columns, index ])
+
+ if self.is_transposed:
+ values = a.cvalues
+ index_ = columns
+ columns_ = index
+ else:
+ values = a.cvalues.T
+ index_ = index
+ columns_ = columns
+
+ block = make_block(values, columns_, columns_)
+ mgr = BlockManager([ block ], [ columns_, index_ ])
frames.append(DataFrame(mgr))
df = concat(frames, axis = 1, verify_integrity = True)
# sort the indicies & reorder the columns
for axis,labels in self.non_index_axes:
df = df.reindex_axis(labels,axis=axis,copy=False)
- columns_ordered = df.columns
- # apply the column filter (but keep columns in the same order)
- if self.selection.filter:
- columns = Index(set(columns_ordered) & self.selection.filter)
- columns = sorted(columns_ordered.get_indexer(columns))
- df = df.reindex(columns = columns_ordered.take(columns), copy = False)
+ # apply the selection filters (but keep in the same order)
+ filter_axis_name = df._get_axis_name(self.non_index_axes[0][0])
+ ordered = getattr(df,filter_axis_name)
+ if self.selection.filter:
+ ordd = ordered & self.selection.filter
+ ordd = sorted(ordered.get_indexer(ordd))
+ df = df.reindex(**{ filter_axis_name : ordered.take(ordd), 'copy' : False })
else:
- df = df.reindex(columns = columns_ordered, copy = False)
+ df = df.reindex(**{ filter_axis_name : ordered , 'copy' : False })
return df
@@ -1638,6 +1832,16 @@ class AppendablePanelTable(AppendableTable):
ndim = 3
obj_type = Panel
+ def get_data_blocks(self, obj):
+ """ these are written transposed """
+ if self.is_transposed:
+ obj = obj.transpose(*self.data_orientation)
+ return obj._data.blocks
+
+ @property
+ def is_transposed(self):
+ return self.data_orientation != tuple(range(self.ndim))
+
class AppendableNDimTable(AppendablePanelTable):
""" suppor the new appendable table formats """
table_type = 'appendable_ndim'
@@ -1659,7 +1863,7 @@ def create_table(parent, group, typ = None, **kwargs):
""" return a suitable Table class to operate """
pt = getattr(group._v_attrs,'pandas_type',None)
- tt = getattr(group._v_attrs,'table_type',None)
+ tt = getattr(group._v_attrs,'table_type',None) or typ
# a new node
if pt is None:
@@ -1672,7 +1876,8 @@ def create_table(parent, group, typ = None, **kwargs):
# distiguish between a frame/table
tt = 'legacy_panel'
try:
- if group.table.description.values.shape[0] == 1:
+ fields = group.table._v_attrs.fields
+ if len(fields) == 1 and fields[0] == 'value':
tt = 'legacy_frame'
except:
pass
@@ -1680,6 +1885,10 @@ def create_table(parent, group, typ = None, **kwargs):
return _TABLE_MAP.get(tt)(parent, group, **kwargs)
+def _itemsize_string_array(arr):
+    """ return the maximum size of elements in a string array """
+ return max([ str_len(arr[v]).max() for v in range(arr.shape[0]) ])
+
def _convert_index(index):
if isinstance(index, DatetimeIndex):
converted = index.asi8
@@ -1957,8 +2166,11 @@ def eval(self):
if not self.is_valid:
raise Exception("query term is not valid [%s]" % str(self))
- # convert values
- values = [ self.convert_value(v) for v in self.value ]
+ # convert values if we are in the table
+ if self.is_in_table:
+ values = [ self.convert_value(v) for v in self.value ]
+ else:
+ values = [ [v, v] for v in self.value ]
# equality conditions
if self.op in ['=','!=']:
@@ -1983,14 +2195,24 @@ def eval(self):
self.condition = '(%s %s %s)' % (self.field, self.op, values[0][0])
+ else:
+
+ raise Exception("passing a filterable condition to a non-table indexer [%s]" % str(self))
+
def convert_value(self, v):
#### a little hacky here, need to really figure out what we should convert ####x
if self.field == 'index' or self.field == 'major_axis':
if self.kind == 'datetime64' :
return [lib.Timestamp(v).value, None]
- elif isinstance(v, datetime):
+ elif isinstance(v, datetime) or hasattr(v,'timetuple') or self.kind == 'date':
return [time.mktime(v.timetuple()), None]
+ elif self.kind == 'integer':
+ v = int(float(v))
+ return [v, v]
+ elif self.kind == 'float':
+ v = float(v)
+ return [v, v]
elif not isinstance(v, basestring):
return [str(v), None]
@@ -2052,7 +2274,7 @@ def select_coords(self):
"""
generate the selection
"""
- self.values = self.table.table.getWhereList(self.condition)
+ self.values = self.table.table.getWhereList(self.condition, sort = True)
def _get_index_factory(klass):
@@ -2062,3 +2284,23 @@ def f(values, freq=None, tz=None):
tz=tz)
return f
return klass
+
+def create_debug_memory(parent):
+ _debug_memory = getattr(parent,'_debug_memory',False)
+ def get_memory(s):
+ pass
+
+ if not _debug_memory:
+ pass
+ else:
+ try:
+ import psutil, os
+ def get_memory(s):
+ p = psutil.Process(os.getpid())
+ (rss,vms) = p.get_memory_info()
+ mp = p.get_memory_percent()
+ print "[%s] cur_mem->%.2f (MB),per_mem->%.2f" % (s,rss/1000000.0,mp)
+        except ImportError:
+            pass
+
+ return get_memory
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 7ecb0bc2fd5ee..a047109e509a9 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -2,13 +2,14 @@
import unittest
import os
import sys
+import warnings
+import nose
+import tables
+from pandas import concat
from datetime import datetime
import numpy as np
from pandas import (Series, DataFrame, Panel, MultiIndex, bdate_range,
date_range, Index)
-from pandas.io.pytables import HDFStore, get_store, Term
+from pandas.io.pytables import HDFStore, get_store, Term, IncompatibilityWarning
import pandas.util.testing as tm
from pandas.tests.test_series import assert_series_equal
from pandas.tests.test_frame import assert_frame_equal
@@ -85,6 +86,50 @@ def test_contains(self):
self.assert_('/foo/b' not in self.store)
self.assert_('bar' not in self.store)
+ def test_versioning(self):
+ self.store['a'] = tm.makeTimeSeries()
+ self.store['b'] = tm.makeDataFrame()
+ df = tm.makeTimeDataFrame()
+ self.store.remove('df1')
+ self.store.append('df1', df[:10])
+ self.store.append('df1', df[10:])
+ self.assert_(self.store.root.a._v_attrs.pandas_version == '0.10')
+ self.assert_(self.store.root.b._v_attrs.pandas_version == '0.10')
+ self.assert_(self.store.root.df1._v_attrs.pandas_version == '0.10')
+
+ # write a file and wipe its versioning
+ self.store.remove('df2')
+ self.store.append('df2', df)
+ self.store.get_node('df2')._v_attrs.pandas_version = None
+ self.store.select('df2')
+ self.store.select('df2', [ Term('index','>',df.index[2]) ])
+
+ def test_meta(self):
+ raise nose.SkipTest('no meta')
+
+ meta = { 'foo' : [ 'I love pandas ' ] }
+ s = tm.makeTimeSeries()
+ s.meta = meta
+ self.store['a'] = s
+ self.assert_(self.store['a'].meta == meta)
+
+ df = tm.makeDataFrame()
+ df.meta = meta
+ self.store['b'] = df
+ self.assert_(self.store['b'].meta == meta)
+
+        # this should work, but because slicing doesn't propagate meta, it doesn't
+ self.store.remove('df1')
+ self.store.append('df1', df[:10])
+ self.store.append('df1', df[10:])
+ results = self.store['df1']
+ #self.assert_(getattr(results,'meta',None) == meta)
+
+ # no meta
+ df = tm.makeDataFrame()
+ self.store['b'] = df
+ self.assert_(hasattr(self.store['b'],'meta') == False)
+
def test_reopen_handle(self):
self.store['a'] = tm.makeTimeSeries()
self.store.open('w', warn=False)
@@ -131,6 +176,29 @@ def test_put(self):
self.store.put('c', df[:10], table=True, append=False)
tm.assert_frame_equal(df[:10], self.store['c'])
+ def test_put_string_index(self):
+
+ index = Index([ "I am a very long string index: %s" % i for i in range(20) ])
+ s = Series(np.arange(20), index = index)
+ df = DataFrame({ 'A' : s, 'B' : s })
+
+ self.store['a'] = s
+ tm.assert_series_equal(self.store['a'], s)
+
+ self.store['b'] = df
+ tm.assert_frame_equal(self.store['b'], df)
+
+ # mixed length
+ index = Index(['abcdefghijklmnopqrstuvwxyz1234567890'] + [ "I am a very long string index: %s" % i for i in range(20) ])
+ s = Series(np.arange(21), index = index)
+ df = DataFrame({ 'A' : s, 'B' : s })
+ self.store['a'] = s
+ tm.assert_series_equal(self.store['a'], s)
+
+ self.store['b'] = df
+ tm.assert_frame_equal(self.store['b'], df)
+
+
def test_put_compression(self):
df = tm.makeTimeDataFrame()
@@ -158,50 +226,106 @@ def test_put_integer(self):
self._check_roundtrip(df, tm.assert_frame_equal)
def test_append(self):
- pth = '__test_append__.h5'
- try:
- store = HDFStore(pth)
-
- df = tm.makeTimeDataFrame()
- store.append('df1', df[:10])
- store.append('df1', df[10:])
- tm.assert_frame_equal(store['df1'], df)
-
- store.put('df2', df[:10], table=True)
- store.append('df2', df[10:])
- tm.assert_frame_equal(store['df2'], df)
-
- store.append('/df3', df[:10])
- store.append('/df3', df[10:])
- tm.assert_frame_equal(store['df3'], df)
-
- # this is allowed by almost always don't want to do it
- import warnings
- import tables
- warnings.filterwarnings('ignore', category=tables.NaturalNameWarning)
- store.append('/df3 foo', df[:10])
- store.append('/df3 foo', df[10:])
- tm.assert_frame_equal(store['df3 foo'], df)
- warnings.filterwarnings('always', category=tables.NaturalNameWarning)
-
- # panel
- wp = tm.makePanel()
- store.append('wp1', wp.ix[:,:10,:])
- store.append('wp1', wp.ix[:,10:,:])
- tm.assert_panel_equal(store['wp1'], wp)
-
- # ndim
- p4d = tm.makePanel4D()
- store.append('p4d', p4d.ix[:,:,:10,:])
- store.append('p4d', p4d.ix[:,:,10:,:])
- tm.assert_panel4d_equal(store['p4d'], p4d)
-
- except:
- raise
- finally:
- store.close()
- os.remove(pth)
+ df = tm.makeTimeDataFrame()
+ self.store.remove('df1')
+ self.store.append('df1', df[:10])
+ self.store.append('df1', df[10:])
+ tm.assert_frame_equal(self.store['df1'], df)
+
+ self.store.remove('df2')
+ self.store.put('df2', df[:10], table=True)
+ self.store.append('df2', df[10:])
+ tm.assert_frame_equal(self.store['df2'], df)
+
+ self.store.remove('df3')
+ self.store.append('/df3', df[:10])
+ self.store.append('/df3', df[10:])
+ tm.assert_frame_equal(self.store['df3'], df)
+
+        # this is allowed, but you almost always don't want to do it
+ warnings.filterwarnings('ignore', category=tables.NaturalNameWarning)
+ self.store.remove('/df3 foo')
+ self.store.append('/df3 foo', df[:10])
+ self.store.append('/df3 foo', df[10:])
+ tm.assert_frame_equal(self.store['df3 foo'], df)
+ warnings.filterwarnings('always', category=tables.NaturalNameWarning)
+
+ # panel
+ wp = tm.makePanel()
+ self.store.remove('wp1')
+ self.store.append('wp1', wp.ix[:,:10,:])
+ self.store.append('wp1', wp.ix[:,10:,:])
+ tm.assert_panel_equal(self.store['wp1'], wp)
+
+ # ndim
+ p4d = tm.makePanel4D()
+ self.store.remove('p4d')
+ self.store.append('p4d', p4d.ix[:,:,:10,:])
+ self.store.append('p4d', p4d.ix[:,:,10:,:])
+ tm.assert_panel4d_equal(self.store['p4d'], p4d)
+
+ # test using axis labels
+ self.store.remove('p4d')
+ self.store.append('p4d', p4d.ix[:,:,:10,:], axes=['items','major_axis','minor_axis'])
+ self.store.append('p4d', p4d.ix[:,:,10:,:], axes=['items','major_axis','minor_axis'])
+ tm.assert_panel4d_equal(self.store['p4d'], p4d)
+
+ def test_append_frame_column_oriented(self):
+
+ # column oriented
+ df = tm.makeTimeDataFrame()
+ self.store.remove('df1')
+ self.store.append('df1', df.ix[:,:2], axes = ['columns'])
+ self.store.append('df1', df.ix[:,2:])
+ tm.assert_frame_equal(self.store['df1'], df)
+
+ result = self.store.select('df1', 'columns=A')
+ expected = df.reindex(columns=['A'])
+ tm.assert_frame_equal(expected, result)
+
+ # this isn't supported
+ self.assertRaises(Exception, self.store.select, 'df1', ('columns=A', Term('index','>',df.index[4])))
+
+ # selection on the non-indexable
+ result = self.store.select('df1', ('columns=A', Term('index','=',df.index[0:4])))
+ expected = df.reindex(columns=['A'],index=df.index[0:4])
+ tm.assert_frame_equal(expected, result)
+
+ def test_ndim_indexables(self):
+ """ test using ndim tables in new ways"""
+
+ p4d = tm.makePanel4D()
+
+ # append then change (will take existing schema)
+ self.store.remove('p4d')
+ self.store.append('p4d', p4d.ix[:,:,:10,:], axes=['items','major_axis','minor_axis'])
+ self.store.append('p4d', p4d.ix[:,:,10:,:], axes=['labels','items','major_axis'])
+
+ # pass incorrect number of axes
+ self.store.remove('p4d')
+ self.assertRaises(Exception, self.store.append, 'p4d', p4d.ix[:,:,:10,:], axes=['major_axis','minor_axis'])
+
+ # different than default indexables
+ self.store.remove('p4d')
+ self.store.append('p4d', p4d.ix[:,:,:10,:], axes=[0,2,3])
+ self.store.append('p4d', p4d.ix[:,:,10:,:], axes=[0,2,3])
+ tm.assert_panel4d_equal(self.store['p4d'], p4d)
+
+ # partial selection
+ result = self.store.select('p4d',['labels=l1'])
+ expected = p4d.reindex(labels = ['l1'])
+ tm.assert_panel4d_equal(result, expected)
+
+ # partial selection2
+ result = self.store.select('p4d',[Term('labels=l1'), Term('items=ItemA'), Term('minor_axis=B')])
+ expected = p4d.reindex(labels = ['l1'], items = ['ItemA'], minor_axis = ['B'])
+ tm.assert_panel4d_equal(result, expected)
+
+        # non-existent partial selection
+ result = self.store.select('p4d',[Term('labels=l1'), Term('items=Item1'), Term('minor_axis=B')])
+ expected = p4d.reindex(labels = ['l1'], items = [], minor_axis = ['B'])
+ tm.assert_panel4d_equal(result, expected)
def test_append_with_strings(self):
wp = tm.makePanel()
@@ -228,6 +352,27 @@ def test_append_with_strings(self):
self.store.append('s4', wp)
self.assertRaises(Exception, self.store.append, 's4', wp2)
+ # avoid truncation on elements
+ df = DataFrame([[123,'asdqwerty'], [345,'dggnhebbsdfbdfb']])
+ self.store.append('df_big',df, min_itemsize = { 'values' : 1024 })
+ tm.assert_frame_equal(self.store.select('df_big'), df)
+
+ # appending smaller string ok
+ df2 = DataFrame([[124,'asdqy'], [346,'dggnhefbdfb']])
+ self.store.append('df_big',df2)
+ expected = concat([ df, df2 ])
+ tm.assert_frame_equal(self.store.select('df_big'), expected)
+
+ # avoid truncation on elements
+ df = DataFrame([[123,'asdqwerty'], [345,'dggnhebbsdfbdfb']])
+ self.store.append('df_big2',df, min_itemsize = { 'values' : 10 })
+ tm.assert_frame_equal(self.store.select('df_big2'), df)
+
+ # bigger string on next append
+ self.store.append('df_new',df, min_itemsize = { 'values' : 16 })
+ df_new = DataFrame([[124,'abcdefqhij'], [346, 'abcdefghijklmnopqrtsuvwxyz']])
+ self.assertRaises(Exception, self.store.append, 'df_new',df_new)
+
def test_create_table_index(self):
wp = tm.makePanel()
self.store.append('p5', wp)
@@ -236,14 +381,29 @@ def test_create_table_index(self):
assert(self.store.handle.root.p5.table.cols.major_axis.is_indexed == True)
assert(self.store.handle.root.p5.table.cols.minor_axis.is_indexed == False)
+ # default optlevels
+ assert(self.store.handle.root.p5.table.cols.major_axis.index.optlevel == 6)
+ assert(self.store.handle.root.p5.table.cols.major_axis.index.kind == 'medium')
+
+ # let's change the indexing scheme
+ self.store.create_table_index('p5')
+ assert(self.store.handle.root.p5.table.cols.major_axis.index.optlevel == 6)
+ assert(self.store.handle.root.p5.table.cols.major_axis.index.kind == 'medium')
+ self.store.create_table_index('p5', optlevel=9)
+ assert(self.store.handle.root.p5.table.cols.major_axis.index.optlevel == 9)
+ assert(self.store.handle.root.p5.table.cols.major_axis.index.kind == 'medium')
+ self.store.create_table_index('p5', kind='full')
+ assert(self.store.handle.root.p5.table.cols.major_axis.index.optlevel == 9)
+ assert(self.store.handle.root.p5.table.cols.major_axis.index.kind == 'full')
+ self.store.create_table_index('p5', optlevel=1, kind='light')
+ assert(self.store.handle.root.p5.table.cols.major_axis.index.optlevel == 1)
+ assert(self.store.handle.root.p5.table.cols.major_axis.index.kind == 'light')
+
df = tm.makeTimeDataFrame()
self.store.append('f', df[:10])
self.store.append('f', df[10:])
self.store.create_table_index('f')
- # create twice
- self.store.create_table_index('f')
-
# try to index a non-table
self.store.put('f2', df)
self.assertRaises(Exception, self.store.create_table_index, 'f2')
@@ -253,6 +413,43 @@ def test_create_table_index(self):
pytables._table_supports_index = False
self.assertRaises(Exception, self.store.create_table_index, 'f')
+ # test out some versions
+ original = tables.__version__
+
+ for v in ['2.2','2.2b']:
+ pytables._table_mod = None
+ pytables._table_supports_index = False
+ tables.__version__ = v
+ self.assertRaises(Exception, self.store.create_table_index, 'f')
+
+ for v in ['2.3.1','2.3.1b','2.4dev','2.4',original]:
+ pytables._table_mod = None
+ pytables._table_supports_index = False
+ tables.__version__ = v
+ self.store.create_table_index('f')
+ pytables._table_mod = None
+ pytables._table_supports_index = False
+ tables.__version__ = original
+
+
+ def test_big_table(self):
+ raise nose.SkipTest('no big table')
+
+ # create and write a big table
+ wp = Panel(np.random.randn(20, 1000, 1000), items= [ 'Item%s' % i for i in xrange(20) ],
+ major_axis=date_range('1/1/2000', periods=1000), minor_axis = [ 'E%s' % i for i in xrange(1000) ])
+
+ wp.ix[:,100:200,300:400] = np.nan
+
+ try:
+ store = HDFStore(self.scratchpath)
+ store._debug_memory = True
+ store.append('wp',wp)
+ recons = store.select('wp')
+ finally:
+ store.close()
+ os.remove(self.scratchpath)
+
def test_append_diff_item_order(self):
wp = tm.makePanel()
wp1 = wp.ix[:, :10, :]
@@ -317,6 +514,21 @@ def _make_one_panel():
self.store.append('p1_mixed', p1)
tm.assert_panel_equal(self.store.select('p1_mixed'), p1)
+ # ndim
+ def _make_one_p4d():
+ wp = tm.makePanel4D()
+ wp['obj1'] = 'foo'
+ wp['obj2'] = 'bar'
+ wp['bool1'] = wp['l1'] > 0
+ wp['bool2'] = wp['l2'] > 0
+ wp['int1'] = 1
+ wp['int2'] = 2
+ return wp.consolidate()
+
+ p4d = _make_one_p4d()
+ self.store.append('p4d_mixed', p4d)
+ tm.assert_panel4d_equal(self.store.select('p4d_mixed'), p4d)
+
def test_remove(self):
ts = tm.makeTimeSeries()
df = tm.makeDataFrame()
@@ -366,7 +578,10 @@ def test_remove_where(self):
# empty where
self.store.remove('wp')
self.store.put('wp', wp, table=True)
- self.store.remove('wp', [])
+
+ # deleted number (entire table)
+ n = self.store.remove('wp', [])
+ assert(n == 120)
# non - empty where
self.store.remove('wp')
@@ -375,9 +590,9 @@ def test_remove_where(self):
'wp', ['foo'])
# selectin non-table with a where
- self.store.put('wp2', wp, table=False)
- self.assertRaises(Exception, self.store.remove,
- 'wp2', [('column', ['A', 'D'])])
+ #self.store.put('wp2', wp, table=False)
+ #self.assertRaises(Exception, self.store.remove,
+ # 'wp2', [('column', ['A', 'D'])])
def test_remove_crit(self):
@@ -387,8 +602,14 @@ def test_remove_crit(self):
crit1 = Term('major_axis','>',date)
crit2 = Term('minor_axis',['A', 'D'])
- self.store.remove('wp', where=[crit1])
- self.store.remove('wp', where=[crit2])
+ n = self.store.remove('wp', where=[crit1])
+
+ # deleted number
+ assert(n == 56)
+
+ n = self.store.remove('wp', where=[crit2])
+ assert(n == 32)
+
result = self.store['wp']
expected = wp.truncate(after=date).reindex(minor=['B', 'C'])
tm.assert_panel_equal(result, expected)
@@ -447,8 +668,8 @@ def test_terms(self):
tm.assert_panel_equal(result, expected)
# p4d
- result = self.store.select('p4d',[ Term('major_axis<20000108'), Term('minor_axis', '=', ['A','B']) ])
- expected = p4d.truncate(after='20000108').reindex(minor=['A', 'B'])
+ result = self.store.select('p4d',[ Term('major_axis<20000108'), Term('minor_axis', '=', ['A','B']), Term('items', '=', ['ItemA','ItemB']) ])
+ expected = p4d.truncate(after='20000108').reindex(minor=['A', 'B'],items=['ItemA','ItemB'])
tm.assert_panel4d_equal(result, expected)
# valid terms
@@ -464,12 +685,23 @@ def test_terms(self):
(('minor_axis', ['A','B']),),
(('minor_axis', ['A','B']),),
((('minor_axis', ['A','B']),),),
+ (('items', ['ItemA','ItemB']),),
+ ('items=ItemA'),
]
for t in terms:
self.store.select('wp', t)
self.store.select('p4d', t)
+ # valid for p4d only
+ terms = [
+ (('labels', '=', ['l1','l2']),),
+ Term('labels', '=', ['l1','l2']),
+ ]
+
+ for t in terms:
+ self.store.select('p4d', t)
+
def test_series(self):
s = tm.makeStringSeries()
self._check_roundtrip(s, tm.assert_series_equal)
@@ -797,8 +1029,8 @@ def test_select(self):
self.store.select('wp2')
# selectin non-table with a where
- self.assertRaises(Exception, self.store.select,
- 'wp2', ('column', ['A', 'D']))
+ #self.assertRaises(Exception, self.store.select,
+ # 'wp2', ('column', ['A', 'D']))
def test_panel_select(self):
wp = tm.makePanel()
@@ -833,10 +1065,21 @@ def test_frame_select(self):
expected = df.ix[:, ['A']]
tm.assert_frame_equal(result, expected)
+ # other indices for a frame
+
+ # integer
+ df = DataFrame(dict(A = np.random.rand(20), B = np.random.rand(20)))
+ self.store.append('df_int', df)
+ self.store.select('df_int', [ Term("index<10"), Term("columns", "=", ["A"]) ])
+
+ df = DataFrame(dict(A = np.random.rand(20), B = np.random.rand(20), index = np.arange(20,dtype='f8')))
+ self.store.append('df_float', df)
+ self.store.select('df_float', [ Term("index<10.0"), Term("columns", "=", ["A"]) ])
+
# can't select if not written as table
- self.store['frame'] = df
- self.assertRaises(Exception, self.store.select,
- 'frame', [crit1, crit2])
+ #self.store['frame'] = df
+ #self.assertRaises(Exception, self.store.select,
+ # 'frame', [crit1, crit2])
def test_select_filter_corner(self):
df = DataFrame(np.random.randn(50, 100))
@@ -911,6 +1154,16 @@ def test_legacy_table_read(self):
store.select('df1')
store.select('df2')
store.select('wp1')
+
+ # force the frame
+ store.select('df2', typ = 'legacy_frame')
+
+ # old version (this still throws an exception though)
+ import warnings
+ warnings.filterwarnings('ignore', category=IncompatibilityWarning)
+ self.assertRaises(Exception, store.select, 'wp1', Term('minor_axis','=','B'))
+ warnings.filterwarnings('always', category=IncompatibilityWarning)
+
store.close()
def test_legacy_table_write(self):
| - ENH: changes axes_to_index keyword in table creation to axes
- ENH: allow passing of numeric or named axes (e.g. 0 or 'minor_axis') in axes
- BUG: create_axes now checks for current table scheme; raises if this indexing scheme is violated
- TST: added many p4d tests for appending, selection, partial selection, and axis permutation
- TST: added additional Term tests to include p4d
- BUG: add `__eq__` operators to IndexCol/DataCol/Table for comparisons
- DOC: updated docs with Panel4D saving & issues relating to threading
- ENH: supporting non-regular indexables:
- can index a Panel4D on say [labels,major_axis,minor_axis], rather
than the default of [items,major_axis,minor_axis]
- support column oriented DataFrames (e.g. queryable by the columns)
- REMOVED: support metadata serializing via 'meta' attribute on an object
- ENH: supporting a data versioning scheme, warns if this is non-existent or old (only if trying a query)
as soon as the data is written again, this won't be warned again
- BUG: fixed PyTables version checking (and now correctly raises the error msg)
- BUG/DOC: added min_itemsize for strings in the values block (added TST and BUG fixes)
- BUG: handle int/float in indexes in the Term specification correctly
- ENH/DOC: allow changing of the table indexing scheme by calling create_table_index with new parameters
- TST: turned off memory testing; enable the big_table test to turn it on
| https://api.github.com/repos/pandas-dev/pandas/pulls/2497 | 2012-12-12T03:16:49Z | 2012-12-13T22:57:00Z | 2012-12-13T22:57:00Z | 2014-06-22T17:19:33Z |
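The min_itemsize tests above guard against silent truncation of strings stored in fixed-width table columns. A minimal NumPy-only sketch of the underlying issue (the `S9`/`S1024` itemsizes are illustrative choices, not HDFStore's actual defaults):

```python
import numpy as np

# Fixed-width byte-string columns silently truncate values longer
# than the itemsize -- the failure mode min_itemsize protects against.
small = np.array(["asdqwerty", "dggnhebbsdfbdfb"], dtype="S9")
assert small[1] == b"dggnhebbs"  # 15-char string cut to 9 bytes

# Reserving a larger itemsize up front (the idea behind
# min_itemsize={'values': 1024}) leaves room for longer strings
# appended later.
big = np.array(["asdqwerty", "dggnhebbsdfbdfb"], dtype="S1024")
assert big[1] == b"dggnhebbsdfbdfb"
```

This is why the tests expect a subsequent append of a *bigger* string than the reserved width to raise, while smaller strings append cleanly.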
DOC: broken display in 2 places in Updated PyTables Support section | diff --git a/RELEASE.rst b/RELEASE.rst
index 26f3265029696..4c18ea3af84c8 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -84,7 +84,6 @@ pandas 0.10.0
- Add support for Panel4D, a named 4 Dimensional stucture
- Add support for ndpanel factory functions, to create custom,
domain-specific N-Dimensional containers
- - 'density' property in `SparseSeries` (#2384)
**API Changes**
diff --git a/doc/source/io.rst b/doc/source/io.rst
index b8d310a920533..bbf473628cafb 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1158,6 +1158,13 @@ Notes & Caveats
store.append('wp_big_strings', wp)
store.select('wp_big_strings')
+Compatibility
+~~~~~~~~~~~~~
+
+0.10 of ``HDFStore`` is backwards compatible for reading tables created in a prior version of pandas;
+however, query terms using the prior (undocumented) methodology are unsupported. You must read in the entire
+file and write it out using the new format to take advantage of the updates.
+
Performance
~~~~~~~~~~~
diff --git a/doc/source/v0.10.0.txt b/doc/source/v0.10.0.txt
index c23aeb9e04406..4e044d41bc331 100644
--- a/doc/source/v0.10.0.txt
+++ b/doc/source/v0.10.0.txt
@@ -161,7 +161,6 @@ Updated PyTables Support
:ref:`Docs <io-hdf5>` for PyTables ``Table`` format & several enhancements to the api. Here is a taste of what to expect.
-
.. ipython:: python
:suppress:
:okexcept:
@@ -221,7 +220,7 @@ Updated PyTables Support
# remove all nodes under this level
store.remove('food')
- store
+ store
- added mixed-dtype support!
@@ -230,7 +229,8 @@ Updated PyTables Support
df['string'] = 'string'
df['int'] = 1
store.append('df',df)
- df1 = store.select('df') df1
+ df1 = store.select('df')
+ df1
df1.get_dtype_counts()
- performance improvments on table writing
@@ -258,6 +258,12 @@ Updated PyTables Support
import os
os.remove('store.h5')
+**Compatibility**
+
+0.10 of ``HDFStore`` is backwards compatible for reading tables created in a prior version of pandas;
+however, query terms using the prior (undocumented) methodology are unsupported. You must read in the entire
+file and write it out using the new format to take advantage of the updates.
+
N Dimensional Panels (Experimental)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Also, there is a hanging ref in the N Dimensional Panels (Experimental) section to the dsintro-panel4d docs which is not working. Thanks!
| https://api.github.com/repos/pandas-dev/pandas/pulls/2495 | 2012-12-11T22:06:51Z | 2012-12-13T14:58:37Z | null | 2014-06-27T13:41:32Z |
more pprint escaping | diff --git a/pandas/core/common.py b/pandas/core/common.py
index e087075196c78..d77d413d39571 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -1221,13 +1221,13 @@ def in_qtconsole():
# working with straight ascii.
-def _pprint_seq(seq, _nest_lvl=0):
+def _pprint_seq(seq, _nest_lvl=0, **kwds):
"""
internal. pprinter for iterables. you should probably use pprint_thing()
rather then calling this directly.
"""
fmt = u"[%s]" if hasattr(seq, '__setitem__') else u"(%s)"
- return fmt % ", ".join(pprint_thing(e, _nest_lvl + 1) for e in seq)
+ return fmt % ", ".join(pprint_thing(e, _nest_lvl + 1, **kwds) for e in seq)
def _pprint_dict(seq, _nest_lvl=0):
"""
@@ -1243,7 +1243,7 @@ def _pprint_dict(seq, _nest_lvl=0):
return fmt % ", ".join(pairs)
-def pprint_thing(thing, _nest_lvl=0):
+def pprint_thing(thing, _nest_lvl=0, escape_chars=None):
"""
This function is the sanctioned way of converting objects
to a unicode representation.
@@ -1274,7 +1274,7 @@ def pprint_thing(thing, _nest_lvl=0):
result = _pprint_dict(thing, _nest_lvl)
elif _is_sequence(thing) and _nest_lvl < \
get_option("print.pprint_nest_depth"):
- result = _pprint_seq(thing, _nest_lvl)
+ result = _pprint_seq(thing, _nest_lvl, escape_chars=escape_chars)
else:
# when used internally in the package, everything
# should be unicode text. However as an aid to transition
@@ -1290,17 +1290,23 @@ def pprint_thing(thing, _nest_lvl=0):
# either utf-8 or we replace errors
result = str(thing).decode('utf-8', "replace")
- result=result.replace("\t",r'\t') # escape tabs
+ translate = {'\t': r'\t',
+ '\n': r'\n',
+ '\r': r'\r',
+ }
+ escape_chars = escape_chars or tuple()
+ for c in escape_chars:
+ result=result.replace(c,translate[c])
return unicode(result) # always unicode
-def pprint_thing_encoded(object, encoding='utf-8', errors='replace'):
+def pprint_thing_encoded(object, encoding='utf-8', errors='replace', **kwds):
value = pprint_thing(object) # get unicode representation of object
- return value.encode(encoding, errors)
+ return value.encode(encoding, errors,**kwds)
-def console_encode(object):
+def console_encode(object, **kwds):
"""
this is the sanctioned way to prepare something for
sending *to the console*, it delegates to pprint_thing() to get
diff --git a/pandas/core/format.py b/pandas/core/format.py
index 152529716934b..771ad7aeeb661 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -86,7 +86,8 @@ def _get_footer(self):
if footer and self.series.name is not None:
footer += ', '
- series_name = com.pprint_thing(self.series.name)
+ series_name = com.pprint_thing(self.series.name,
+ escape_chars=('\t','\r','\n'))
footer += ("Name: %s" % series_name) if self.series.name is not None else ""
if self.length:
@@ -1029,7 +1030,8 @@ def _format_strings(self):
else:
float_format = self.float_format
- formatter = com.pprint_thing if self.formatter is None else self.formatter
+ formatter = (lambda x: com.pprint_thing(x,escape_chars=('\t','\r','\n'))) \
+ if self.formatter is None else self.formatter
def _format(x):
if self.na_rep is not None and lib.checknull(x):
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index f8a7f6ed90f7e..79027f304c008 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -642,7 +642,8 @@ def __bytes__(self):
Invoked by bytes(df) in py3 only.
Yields a bytestring in both py2/py3.
"""
- return com.console_encode(self.__unicode__())
+ encoding = com.get_option("print.encoding")
+ return self.__unicode__().encode(encoding , 'replace')
def __unicode__(self):
"""
diff --git a/pandas/core/index.py b/pandas/core/index.py
index b6eeccba9fe46..337ce3d127170 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -159,7 +159,8 @@ def __bytes__(self):
Invoked by bytes(df) in py3 only.
Yields a bytestring in both py2/py3.
"""
- return com.console_encode(self.__unicode__())
+ encoding = com.get_option("print.encoding")
+ return self.__unicode__().encode(encoding , 'replace')
def __unicode__(self):
"""
@@ -172,7 +173,7 @@ def __unicode__(self):
else:
data = self
- prepr = com.pprint_thing(data)
+ prepr = com.pprint_thing(data, escape_chars=('\t','\r','\n'))
return '%s(%s, dtype=%s)' % (type(self).__name__, prepr, self.dtype)
def __repr__(self):
@@ -422,7 +423,8 @@ def format(self, name=False, formatter=None):
header = []
if name:
- header.append(com.pprint_thing(self.name)
+ header.append(com.pprint_thing(self.name,
+ escape_chars=('\t','\r','\n'))
if self.name is not None else '')
if formatter is not None:
@@ -443,7 +445,8 @@ def format(self, name=False, formatter=None):
values = lib.maybe_convert_objects(values, safe=1)
if values.dtype == np.object_:
- result = [com.pprint_thing(x) for x in values]
+ result = [com.pprint_thing(x,escape_chars=('\t','\r','\n'))
+ for x in values]
else:
result = _trim_front(format_array(values, None, justify='left'))
return header + result
@@ -1377,7 +1380,8 @@ def __bytes__(self):
Invoked by bytes(df) in py3 only.
Yields a bytestring in both py2/py3.
"""
- return com.console_encode(self.__unicode__())
+ encoding = com.get_option("print.encoding")
+ return self.__unicode__().encode(encoding , 'replace')
def __unicode__(self):
"""
@@ -1396,7 +1400,7 @@ def __unicode__(self):
else:
values = self.values
- summary = com.pprint_thing(values)
+ summary = com.pprint_thing(values, escape_chars=('\t','\r','\n'))
np.set_printoptions(threshold=options['threshold'])
@@ -1566,7 +1570,8 @@ def format(self, space=2, sparsify=None, adjoin=True, names=False,
formatted = lev.take(lab).format(formatter=formatter)
else:
# weird all NA case
- formatted = [com.pprint_thing(x) for x in com.take_1d(lev.values, lab)]
+ formatted = [com.pprint_thing(x,escape_chars=('\t','\r','\n'))
+ for x in com.take_1d(lev.values, lab)]
stringified_levels.append(formatted)
result_levels = []
@@ -1574,7 +1579,8 @@ def format(self, space=2, sparsify=None, adjoin=True, names=False,
level = []
if names:
- level.append(com.pprint_thing(name) if name is not None else '')
+ level.append(com.pprint_thing(name,escape_chars=('\t','\r','\n'))
+ if name is not None else '')
level.extend(np.array(lev, dtype=object))
result_levels.append(level)
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index fa469242fa4e3..2b4af6d9a89d7 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -507,7 +507,8 @@ def __bytes__(self):
Invoked by bytes(df) in py3 only.
Yields a bytestring in both py2/py3.
"""
- return com.console_encode(self.__unicode__())
+ encoding = com.get_option("print.encoding")
+ return self.__unicode__().encode(encoding , 'replace')
def __unicode__(self):
"""
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 20c9e3962137e..18ccf4b481777 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -943,7 +943,8 @@ def __bytes__(self):
Invoked by bytes(df) in py3 only.
Yields a bytestring in both py2/py3.
"""
- return com.console_encode(self.__unicode__())
+ encoding = com.get_option("print.encoding")
+ return self.__unicode__().encode(encoding , 'replace')
def __unicode__(self):
"""
diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py
index 56991eba9881d..e8be2a5cc9c4c 100644
--- a/pandas/tests/test_common.py
+++ b/pandas/tests/test_common.py
@@ -252,7 +252,8 @@ def test_pprint_thing():
# escape embedded tabs in string
# GH #2038
- assert not "\t" in pp_t("a\tb")
+ assert not "\t" in pp_t("a\tb",escape_chars=("\t",))
+
class TestTake(unittest.TestCase):
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index ef5bc2973ceac..d7f6dbcc7de32 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -2940,6 +2940,11 @@ def test_repr(self):
# no columns or index
self.empty.info(buf=buf)
+ df = DataFrame(["a\n\r\tb"],columns=["a\n\r\td"],index=["a\n\r\tf"])
+ self.assertFalse("\t" in repr(df))
+ self.assertFalse("\r" in repr(df))
+ self.assertFalse("a\n" in repr(df))
+
@slow
def test_repr_big(self):
buf = StringIO()
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 8b82d45ce79e7..52f25e5c69042 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -1097,6 +1097,11 @@ def test_repr(self):
rep_str = repr(ser)
self.assert_("Name: 0" in rep_str)
+ ser = Series(["a\n\r\tb"],name=["a\n\r\td"],index=["a\n\r\tf"])
+ self.assertFalse("\t" in repr(ser))
+ self.assertFalse("\r" in repr(ser))
+ self.assertFalse("a\n" in repr(ser))
+
def test_tidy_repr(self):
a=Series([u"\u05d0"]*1000)
a.name= 'title1'
Escape embedded "\n"/"\r" as well as "\t" in labels/data when producing reprs
related #2368
| https://api.github.com/repos/pandas-dev/pandas/pulls/2492 | 2012-12-11T18:08:22Z | 2012-12-11T18:40:47Z | 2012-12-11T18:40:47Z | 2014-06-12T14:14:43Z |
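The escaping added to `pprint_thing` can be sketched in isolation; this standalone helper mirrors the translate-table approach in the patch (the function name `escape_repr` is hypothetical, not a pandas API):

```python
def escape_repr(text, escape_chars=None):
    """Replace selected control characters with their two-character
    escaped forms, as the patch does when building repr output."""
    translate = {'\t': r'\t', '\n': r'\n', '\r': r'\r'}
    for c in (escape_chars or ()):
        text = text.replace(c, translate[c])
    return text

# With no escape_chars the string passes through unchanged;
# callers opt in per character, exactly as the patched callers
# pass escape_chars=('\t', '\r', '\n').
assert escape_repr("a\tb") == "a\tb"
assert escape_repr("a\tb", ("\t",)) == "a\\tb"
assert escape_repr("a\n\r\tb", ("\t", "\r", "\n")) == "a\\n\\r\\tb"
```

Making the escape set an opt-in parameter (rather than always escaping, as the previous `result.replace("\t", r'\t')` did) lets reprs escape labels while leaving other pretty-printed output untouched.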
ENH: pytables support for Panel4D Tables (AppendableNDimTable) | diff --git a/doc/source/io.rst b/doc/source/io.rst
index d123e8fcbc8a1..653b01f1011ca 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -915,8 +915,8 @@ a subset of the data. This allows one to have a very large on-disk table and ret
A query is specified using the ``Term`` class under the hood.
- - 'index' and 'column' are supported indexers of a DataFrame
- - 'major_axis' and 'minor_axis' are supported indexers of the Panel
+ - 'index' and 'columns' are supported indexers of a DataFrame
+ - 'major_axis', 'minor_axis', and 'items' are supported indexers of the Panel
Valid terms can be created from ``dict, list, tuple, or string``. Objects can be embeded as values. Allowed operations are: ``<, <=, >, >=, =``. ``=`` will be inferred as an implicit set operation (e.g. if 2 or more values are provided). The following are all valid terms.
@@ -925,7 +925,7 @@ Valid terms can be created from ``dict, list, tuple, or string``. Objects can be
- ``'index>20121114'``
- ``('index', '>', datetime(2012,11,14))``
- ``('index', ['20121114','20121115'])``
- - ``('major', '=', Timestamp('2012/11/14'))``
+ - ``('major_axis', '=', Timestamp('2012/11/14'))``
- ``('minor_axis', ['A','B'])``
Queries are built up using a list of ``Terms`` (currently only **anding** of terms is supported). An example query for a panel might be specified as follows.
@@ -934,6 +934,7 @@ Queries are built up using a list of ``Terms`` (currently only **anding** of ter
.. ipython:: python
store.append('wp',wp)
+ store
store.select('wp',[ 'major_axis>20000102', ('minor_axis', '=', ['A','B']) ])
Delete from a Table
@@ -941,7 +942,7 @@ Delete from a Table
.. ipython:: python
- store.remove('wp', 'index>20000102' )
+ store.remove('wp', 'major_axis>20000102' )
store.select('wp')
Notes & Caveats
diff --git a/pandas/core/reshape.py b/pandas/core/reshape.py
index 7ad9147faa955..1848ec8e7a2c3 100644
--- a/pandas/core/reshape.py
+++ b/pandas/core/reshape.py
@@ -759,3 +759,37 @@ def block2d_to_block3d(values, items, shape, major_labels, minor_labels,
ref_items = items
return make_block(pvalues, items, ref_items)
+
+def block2d_to_blocknd(values, items, shape, labels, ref_items=None):
+ """ pivot to the labels shape """
+ from pandas.core.internals import make_block
+ panel_shape = (len(items),) + shape
+
+ # TODO: lexsort depth needs to be 2!!
+
+ # Create observation selection vector using major and minor
+ # labels, for converting to panel format.
+ selector = factor_indexer(shape[1:],labels)
+ mask = np.zeros(np.prod(shape), dtype=bool)
+ mask.put(selector, True)
+
+ pvalues = np.empty(panel_shape, dtype=values.dtype)
+ if not issubclass(pvalues.dtype.type, (np.integer, np.bool_)):
+ pvalues.fill(np.nan)
+ elif not mask.all():
+ pvalues = com._maybe_upcast(pvalues)
+ pvalues.fill(np.nan)
+
+ values = values
+ for i in xrange(len(items)):
+ pvalues[i].flat[mask] = values[:, i]
+
+ if ref_items is None:
+ ref_items = items
+
+ return make_block(pvalues, items, ref_items)
+
+def factor_indexer(shape, labels):
+ """ given a tuple of shape and a list of Factor lables, return the expanded label indexer """
+ mult = np.array(shape)[::-1].cumprod()[::-1]
+ return np.sum(np.array(labels).T * np.append(mult,[1]), axis=1).T
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 2e75d3f067b3a..86627563854b3 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -13,7 +13,7 @@
import numpy as np
from pandas import (
- Series, TimeSeries, DataFrame, Panel, Index, MultiIndex, Int64Index
+ Series, TimeSeries, DataFrame, Panel, Panel4D, Index, MultiIndex, Int64Index
)
from pandas.sparse.api import SparseSeries, SparseDataFrame, SparsePanel
from pandas.sparse.array import BlockIndex, IntIndex
@@ -24,7 +24,7 @@
from pandas.core.categorical import Factor
from pandas.core.common import _asarray_tuplesafe, _try_sort
from pandas.core.internals import BlockManager, make_block, form_blocks
-from pandas.core.reshape import block2d_to_block3d
+from pandas.core.reshape import block2d_to_block3d, block2d_to_blocknd, factor_indexer
import pandas.core.common as com
from pandas.tools.merge import concat
@@ -42,6 +42,7 @@
DataFrame: 'frame',
SparseDataFrame: 'sparse_frame',
Panel: 'wide',
+ Panel4D : 'ndim',
SparsePanel: 'sparse_panel'
}
@@ -632,10 +633,19 @@ def _write_wide_table(self, group, panel, append=False, comp=None, **kwargs):
t.write(axes_to_index=[1,2], obj=panel,
append=append, compression=comp, **kwargs)
+ def _write_ndim_table(self, group, obj, append=False, comp=None, axes_to_index=None, **kwargs):
+ if axes_to_index is None:
+ axes_to_index=[1,2,3]
+ t = create_table(self, group, typ = 'appendable_ndim')
+ t.write(axes_to_index=axes_to_index, obj=obj,
+ append=append, compression=comp, **kwargs)
+
def _read_wide_table(self, group, where=None):
t = create_table(self, group)
return t.read(where)
+ _read_ndim_table = _read_wide_table
+
def _write_index(self, group, key, index):
if isinstance(index, MultiIndex):
setattr(group._v_attrs, '%s_variety' % key, 'multi')
@@ -1098,6 +1108,7 @@ class Table(object):
"""
table_type = None
+ obj_type = None
ndim = None
def __init__(self, parent, group):
@@ -1108,13 +1119,17 @@ def __init__(self, parent, group):
self.values_axes = []
self.selection = None
+ @property
+ def table_type_short(self):
+ return self.table_type.split('_')[0]
+
@property
def pandas_type(self):
return getattr(self.group._v_attrs,'pandas_type',None)
def __repr__(self):
""" return a pretty representatgion of myself """
- return "%s (typ->%s,nrows->%s)" % (self.pandas_type,self.table_type,self.nrows)
+ return "%s (typ->%s,nrows->%s)" % (self.pandas_type,self.table_type_short,self.nrows)
__str__ = __repr__
@@ -1163,9 +1178,9 @@ def description(self):
def axes(self):
return itertools.chain(self.index_axes, self.values_axes)
- def kinds_map(self):
- """ return a list of the kinds for each columns """
- return [ (a.cname,a.kind) for a in self.index_axes ]
+ def queryables(self):
+ """ return a dict of the kinds allowable columns for this object """
+ return dict([ (a.cname,a.kind) for a in self.index_axes ] + [ (self.obj_type._AXIS_NAMES[axis],None) for axis, values in self.non_index_axes ])
def index_cols(self):
""" return a list of my index cols """
@@ -1386,38 +1401,37 @@ def write(self, **kwargs):
raise Exception("write operations are not allowed on legacy tables!")
def read(self, where=None):
- """ we have 2 indexable columns, with an arbitrary number of data axes """
+ """ we have n indexable columns, with an arbitrary number of data axes """
self.read_axes(where)
- index = self.index_axes[0].values
- column = self.index_axes[1].values
-
- major = Factor.from_array(index)
- minor = Factor.from_array(column)
+ indicies = [ i.values for i in self.index_axes ]
+ factors = [ Factor.from_array(i) for i in indicies ]
+ levels = [ f.levels for f in factors ]
+ N = [ len(f.levels) for f in factors ]
+ labels = [ f.labels for f in factors ]
- J, K = len(major.levels), len(minor.levels)
- key = major.labels * K + minor.labels
+ # compute the key
+ key = factor_indexer(N[1:], labels)
- panels = []
+ objs = []
if len(unique(key)) == len(key):
- sorter, _ = algos.groupsort_indexer(com._ensure_int64(key), J * K)
+
+ sorter, _ = algos.groupsort_indexer(com._ensure_int64(key), np.prod(N))
sorter = com._ensure_platform_int(sorter)
- # create the panels
+ # create the objs
for c in self.values_axes:
# the data need to be sorted
sorted_values = c.data.take(sorter, axis=0)
- major_labels = major.labels.take(sorter)
- minor_labels = minor.labels.take(sorter)
- items = Index(c.values)
-
- block = block2d_to_block3d(sorted_values, items, (J, K),
- major_labels, minor_labels)
+
+ take_labels = [ l.take(sorter) for l in labels ]
+ items = Index(c.values)
- mgr = BlockManager([block], [items, major.levels, minor.levels])
- panels.append(Panel(mgr))
+ block = block2d_to_blocknd(sorted_values, items, tuple(N), take_labels)
+ mgr = BlockManager([block], [items] + levels)
+ objs.append(self.obj_type(mgr))
else:
if not self._quiet: # pragma: no cover
@@ -1425,9 +1439,8 @@ def read(self, where=None):
'appended')
# reconstruct
- long_index = MultiIndex.from_arrays([index, column])
+ long_index = MultiIndex.from_arrays(indicies)
- panels = []
for c in self.values_axes:
lp = DataFrame(c.data, index=long_index, columns=c.values)
@@ -1444,10 +1457,10 @@ def read(self, where=None):
new_values = lp.values.take(indexer, axis=0)
lp = DataFrame(new_values, index=new_index, columns=lp.columns)
- panels.append(lp.to_panel())
+ objs.append(lp.to_panel())
- # append the panels
- wp = concat(panels, axis = 0, verify_integrity = True)
+ # create the composite object
+ wp = concat(objs, axis = 0, verify_integrity = True)
# reorder by any non_index_axes
for axis,labels in self.non_index_axes:
@@ -1462,12 +1475,14 @@ def read(self, where=None):
class LegacyFrameTable(LegacyTable):
""" support the legacy frame table """
table_type = 'legacy_frame'
+ obj_type = Panel
def read(self, *args, **kwargs):
return super(LegacyFrameTable, self).read(*args, **kwargs)['value']
class LegacyPanelTable(LegacyTable):
""" support the legacy panel table """
table_type = 'legacy_panel'
+ obj_type = Panel
class AppendableTable(LegacyTable):
""" suppor the new appendable table formats """
@@ -1586,6 +1601,7 @@ class AppendableFrameTable(AppendableTable):
""" suppor the new appendable table formats """
table_type = 'appendable_frame'
ndim = 2
+ obj_type = DataFrame
def read(self, where=None):
@@ -1620,11 +1636,19 @@ class AppendablePanelTable(AppendableTable):
""" suppor the new appendable table formats """
table_type = 'appendable_panel'
ndim = 3
+ obj_type = Panel
+
+class AppendableNDimTable(AppendablePanelTable):
+ """ suppor the new appendable table formats """
+ table_type = 'appendable_ndim'
+ ndim = 4
+ obj_type = Panel4D
# table maps
_TABLE_MAP = {
'appendable_frame' : AppendableFrameTable,
'appendable_panel' : AppendablePanelTable,
+ 'appendable_ndim' : AppendableNDimTable,
'worm' : WORMTable,
'legacy_frame' : LegacyFrameTable,
'legacy_panel' : LegacyPanelTable,
@@ -1818,7 +1842,7 @@ class Term(object):
op : a valid op (defaults to '=') (optional)
>, >=, <, <=, =, != (not equal) are allowed
value : a value or list of values (required)
- kinds : the kinds map (dict of column name -> kind)
+ queryables : a kinds map (dict of column name -> kind), or None if column is non-indexable
Returns
-------
@@ -1831,24 +1855,19 @@ class Term(object):
Term('index', '>', '20121114')
Term('index', ['20121114','20121114'])
Term('index', datetime(2012,11,14))
- Term('major>20121114')
- Term('minor', ['A','B'])
+ Term('major_axis>20121114')
+ Term('minor_axis', ['A','B'])
"""
_ops = ['<=','<','>=','>','!=','=']
_search = re.compile("^(?P<field>\w+)(?P<op>%s)(?P<value>.+)$" % '|'.join(_ops))
- _index = ['index','major_axis','major']
- _column = ['column','columns','minor_axis','minor']
- def __init__(self, field, op = None, value = None, kinds = None):
+ def __init__(self, field, op = None, value = None, queryables = None):
self.field = None
self.op = None
self.value = None
-
- if kinds is None:
- kinds = []
- self.kinds = dict(kinds)
+ self.q = queryables or dict()
self.filter = None
self.condition = None
@@ -1901,12 +1920,6 @@ def __init__(self, field, op = None, value = None, kinds = None):
if self.field is None or self.op is None or self.value is None:
raise Exception("Could not create this term [%s]" % str(self))
- # map alias for field names
- if self.field in self._index and len(kinds) > 0:
- self.field = kinds[0][0]
- elif self.field in self._column and len(kinds) > 1:
- self.field = kinds[1][0]
-
# we have valid conditions
if self.op in ['>','>=','<','<=']:
if hasattr(self.value,'__iter__') and len(self.value) > 1:
@@ -1915,26 +1928,35 @@ def __init__(self, field, op = None, value = None, kinds = None):
if not hasattr(self.value,'__iter__'):
self.value = [ self.value ]
- self.eval()
+ if len(self.q):
+ self.eval()
def __str__(self):
return "field->%s,op->%s,value->%s" % (self.field,self.op,self.value)
__repr__ = __str__
+ @property
+ def is_valid(self):
+ """ return True if this is a valid field """
+ return self.field in self.q
+
@property
def is_in_table(self):
""" return True if this is a valid column name for generation (e.g. an actual column in the table) """
- return self.field in self.kinds
+ return self.q.get(self.field) is not None
@property
def kind(self):
""" the kind of my field """
- return self.kinds.get(self.field)
+ return self.q.get(self.field)
def eval(self):
""" set the numexpr expression for this term """
+ if not self.is_valid:
+ raise Exception("query term is not valid [%s]" % str(self))
+
# convert values
values = [ self.convert_value(v) for v in self.value ]
@@ -2014,7 +2036,8 @@ def generate(self, where):
if not any([ isinstance(w, (list,tuple,Term)) for w in where ]):
where = [ where ]
- return [ Term(c, kinds = self.table.kinds_map()) for c in where ]
+ queryables = self.table.queryables()
+ return [ Term(c, queryables = queryables) for c in where ]
def select(self):
"""
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 6f7d348caa266..7ecb0bc2fd5ee 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -185,11 +185,18 @@ def test_append(self):
tm.assert_frame_equal(store['df3 foo'], df)
warnings.filterwarnings('always', category=tables.NaturalNameWarning)
+ # panel
wp = tm.makePanel()
store.append('wp1', wp.ix[:,:10,:])
store.append('wp1', wp.ix[:,10:,:])
tm.assert_panel_equal(store['wp1'], wp)
+ # ndim
+ p4d = tm.makePanel4D()
+ store.append('p4d', p4d.ix[:,:,:10,:])
+ store.append('p4d', p4d.ix[:,:,10:,:])
+ tm.assert_panel4d_equal(store['p4d'], p4d)
+
except:
raise
finally:
@@ -351,7 +358,7 @@ def test_remove_where(self):
# non-table ok (where = None)
wp = tm.makePanel()
self.store.put('wp', wp, table=True)
- self.store.remove('wp', [('column', ['A', 'D'])])
+ self.store.remove('wp', [('minor_axis', ['A', 'D'])])
rs = self.store.select('wp')
expected = wp.reindex(minor_axis = ['B','C'])
tm.assert_panel_equal(rs,expected)
@@ -378,8 +385,8 @@ def test_remove_crit(self):
self.store.put('wp', wp, table=True)
date = wp.major_axis[len(wp.major_axis) // 2]
- crit1 = Term('index','>',date)
- crit2 = Term('column',['A', 'D'])
+ crit1 = Term('major_axis','>',date)
+ crit2 = Term('minor_axis',['A', 'D'])
self.store.remove('wp', where=[crit1])
self.store.remove('wp', where=[crit2])
result = self.store['wp']
@@ -394,9 +401,9 @@ def test_remove_crit(self):
date2 = wp.major_axis[5]
date3 = [wp.major_axis[7],wp.major_axis[9]]
- crit1 = Term('index',date1)
- crit2 = Term('index',date2)
- crit3 = Term('index',date3)
+ crit1 = Term('major_axis',date1)
+ crit2 = Term('major_axis',date2)
+ crit3 = Term('major_axis',date3)
self.store.remove('wp2', where=[crit1])
self.store.remove('wp2', where=[crit2])
@@ -415,7 +422,9 @@ def test_remove_crit(self):
def test_terms(self):
wp = tm.makePanel()
+ p4d = tm.makePanel4D()
self.store.put('wp', wp, table=True)
+ self.store.put('p4d', p4d, table=True)
# some invalid terms
terms = [
@@ -432,28 +441,34 @@ def test_terms(self):
self.assertRaises(Exception, Term.__init__, 'index', '==')
self.assertRaises(Exception, Term.__init__, 'index', '>', 5)
+ # panel
result = self.store.select('wp',[ Term('major_axis<20000108'), Term('minor_axis', '=', ['A','B']) ])
expected = wp.truncate(after='20000108').reindex(minor=['A', 'B'])
tm.assert_panel_equal(result, expected)
+ # p4d
+ result = self.store.select('p4d',[ Term('major_axis<20000108'), Term('minor_axis', '=', ['A','B']) ])
+ expected = p4d.truncate(after='20000108').reindex(minor=['A', 'B'])
+ tm.assert_panel4d_equal(result, expected)
+
# valid terms
terms = [
- dict(field = 'index', op = '>', value = '20121114'),
- ('index', '20121114'),
- ('index', '>', '20121114'),
- (('index', ['20121114','20121114']),),
- ('index', datetime(2012,11,14)),
- 'index>20121114',
- 'major>20121114',
+ dict(field = 'major_axis', op = '>', value = '20121114'),
+ ('major_axis', '20121114'),
+ ('major_axis', '>', '20121114'),
+ (('major_axis', ['20121114','20121114']),),
+ ('major_axis', datetime(2012,11,14)),
+ 'major_axis>20121114',
'major_axis>20121114',
- (('minor', ['A','B']),),
+ 'major_axis>20121114',
+ (('minor_axis', ['A','B']),),
(('minor_axis', ['A','B']),),
((('minor_axis', ['A','B']),),),
- (('column', ['A','B']),),
]
for t in terms:
self.store.select('wp', t)
+ self.store.select('p4d', t)
def test_series(self):
s = tm.makeStringSeries()
@@ -790,8 +805,8 @@ def test_panel_select(self):
self.store.put('wp', wp, table=True)
date = wp.major_axis[len(wp.major_axis) // 2]
- crit1 = ('index','>=',date)
- crit2 = ('column', '=', ['A', 'D'])
+ crit1 = ('major_axis','>=',date)
+ crit2 = ('minor_axis', '=', ['A', 'D'])
result = self.store.select('wp', [crit1, crit2])
expected = wp.truncate(before=date).reindex(minor=['A', 'D'])
@@ -807,8 +822,8 @@ def test_frame_select(self):
date = df.index[len(df) // 2]
crit1 = ('index','>=',date)
- crit2 = ('column',['A', 'D'])
- crit3 = ('column','A')
+ crit2 = ('columns',['A', 'D'])
+ crit3 = ('columns','A')
result = self.store.select('frame', [crit1, crit2])
expected = df.ix[date:, ['A', 'D']]
@@ -829,7 +844,7 @@ def test_select_filter_corner(self):
df.columns = ['%.3d' % c for c in df.columns]
self.store.put('frame', df, table=True)
- crit = Term('column', df.columns[:75])
+ crit = Term('columns', df.columns[:75])
result = self.store.select('frame', [crit])
tm.assert_frame_equal(result, df.ix[:, df.columns[:75]])
diff --git a/pandas/lib.pyx b/pandas/lib.pyx
index e776eb9df7265..6570ce7abb0ae 100644
--- a/pandas/lib.pyx
+++ b/pandas/lib.pyx
@@ -825,6 +825,56 @@ def create_hdf_rows_3d(ndarray indexer0, ndarray indexer1,
return l
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def create_hdf_rows_4d(ndarray indexer0, ndarray indexer1, ndarray indexer2,
+ ndarray[np.uint8_t, ndim=3] mask, list values):
+ """ return a list of objects ready to be converted to rec-array format """
+
+ cdef:
+ unsigned int i, j, k, b, n_indexer0, n_indexer1, n_indexer2, n_blocks, tup_size
+ ndarray v
+ list l
+ object tup, val
+
+ n_indexer0 = indexer0.shape[0]
+ n_indexer1 = indexer1.shape[0]
+ n_indexer2 = indexer2.shape[0]
+ n_blocks = len(values)
+ tup_size = n_blocks+3
+ l = []
+ for i from 0 <= i < n_indexer0:
+
+ for j from 0 <= j < n_indexer1:
+
+ for k from 0 <= k < n_indexer2:
+
+ if not mask[i, j, k]:
+
+ tup = PyTuple_New(tup_size)
+
+ val = indexer0[i]
+ PyTuple_SET_ITEM(tup, 0, val)
+ Py_INCREF(val)
+
+ val = indexer1[j]
+ PyTuple_SET_ITEM(tup, 1, val)
+ Py_INCREF(val)
+
+ val = indexer2[k]
+ PyTuple_SET_ITEM(tup, 2, val)
+ Py_INCREF(val)
+
+ for b from 0 <= b < n_blocks:
+
+ v = values[b][:, i, j, k]
+ PyTuple_SET_ITEM(tup, b+3, v)
+ Py_INCREF(v)
+
+ l.append(tup)
+
+ return l
+
#-------------------------------------------------------------------------------
# Groupby-related functions
| ENH: AppendableNDimTable (supporting Panel4D)
Term: only allow term queries that have matching axes
e.g. index/columns supported for DataFrames; items, major_axis, minor_axis for Panels
| https://api.github.com/repos/pandas-dev/pandas/pulls/2484 | 2012-12-11T03:01:25Z | 2012-12-11T18:43:15Z | null | 2014-07-09T07:04:52Z |
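The Term changes in this PR replace the positional `index`/`column` aliases with validation against the table's queryables map, so only fields that actually exist on the stored object can be queried. A minimal pure-Python sketch of that validation pattern (class and method names are illustrative, not pandas' actual internals):

```python
class Term:
    """Sketch of a query term checked against a queryables map."""

    def __init__(self, field, op, value, queryables=None):
        self.field = field
        self.op = op
        self.value = value
        # map of column name -> kind; non-indexable columns map to None
        self.q = queryables or {}

    @property
    def is_valid(self):
        # only fields the table actually exposes may be queried
        return self.field in self.q

    def evaluate(self):
        if not self.is_valid:
            raise ValueError("query term is not valid [%s]" % self.field)
        return (self.field, self.op, self.value)


# a Panel-backed table would expose its real axis names
queryables = {'major_axis': 'datetime64', 'minor_axis': 'string'}
Term('major_axis', '>', '20121114', queryables).evaluate()  # ok
# Term('index', '>', '20121114', queryables) would not be valid here
```

This mirrors why `Term('major_axis>20121114')` works against a Panel table while the old `'major'` alias is dropped.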
BUG: merge tests for panel_4d - mixed_type is failing (others ok) | diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py
index 3964a04d21433..ec8212db5afcb 100644
--- a/pandas/tools/tests/test_merge.py
+++ b/pandas/tools/tests/test_merge.py
@@ -1427,6 +1427,36 @@ def df():
# it works!
concat([ panel1, panel3 ], axis = 1, verify_integrity = True)
+ def test_panel4d_concat(self):
+ p4d = tm.makePanel4D()
+
+ p1 = p4d.ix[:, :, :5, :]
+ p2 = p4d.ix[:, :, 5:, :]
+
+ result = concat([p1, p2], axis=2)
+ tm.assert_panel4d_equal(result, p4d)
+
+ p1 = p4d.ix[:, :, :, :2]
+ p2 = p4d.ix[:, :, :, 2:]
+
+ result = concat([p1, p2], axis=3)
+ tm.assert_panel4d_equal(result, p4d)
+
+ def test_panel4d_concat_mixed_type(self):
+ p4d = tm.makePanel4D()
+
+ # if things are a bit misbehaved
+ p1 = p4d.ix[:, :2, :, :2]
+ p2 = p4d.ix[:, :, :, 2:]
+ p1['L5'] = 'baz'
+
+ result = concat([p1, p2], axis=3)
+
+ expected = p4d.copy()
+ expected['L5'] = expected['L5'].astype('O')
+ expected.ix['L5', :, :, :2] = 'baz'
+ tm.assert_panel4d_equal(result, expected)
+
def test_concat_series(self):
ts = tm.makeTimeSeries()
ts.name = 'foo'
| wes...if you have a moment....mixed type concatenation (with p4d) is broken...not exactly sure how this works....
| https://api.github.com/repos/pandas-dev/pandas/pulls/2483 | 2012-12-11T03:00:26Z | 2012-12-11T20:15:53Z | null | 2014-06-12T14:50:46Z |
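The failing mixed-type case above concatenates a panel block that was cast to object (`'baz'`) with a numeric one. A rough pure-Python sketch of the dtype upcasting such a concat has to perform (the block layout here is invented for illustration and is not pandas' internal BlockManager):

```python
def upcast_dtype(d1, d2):
    # identical dtypes pass through; mixed blocks fall back to object
    return d1 if d1 == d2 else 'object'

def concat_blocks(left, right):
    """Concatenate two 1-D 'blocks', upcasting the result dtype."""
    dtype = upcast_dtype(left['dtype'], right['dtype'])
    return {'dtype': dtype, 'values': left['values'] + right['values']}

floats = {'dtype': 'float64', 'values': [1.0, 2.0]}
strs = {'dtype': 'object', 'values': ['baz', 'baz']}
mixed = concat_blocks(floats, strs)  # result dtype upcast to 'object'
```

This is the behavior the test asserts with `expected['L5'].astype('O')`.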
mysql support | diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 0caf83838fb2d..0c1e5ee1f7ca3 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -173,10 +173,10 @@ def write_frame(frame, name, con, flavor='sqlite', if_exists='fail', **kwargs):
replace: If table exists, drop it, recreate it, and insert data.
append: If table exists, insert data. Create if does not exist.
"""
-
+
if 'append' in kwargs:
import warnings
- warnings.warn("append is deprecated, use if_exists instead",
+ warnings.warn("append is deprecated, use if_exists instead",
FutureWarning)
if kwargs['append']:
if_exists='append'
@@ -195,7 +195,12 @@ def write_frame(frame, name, con, flavor='sqlite', if_exists='fail', **kwargs):
if create is not None:
cur = con.cursor()
+<<<<<<< HEAD
cur.execute(create)
+=======
+ create_table = get_schema(frame, name, flavor)
+ cur.execute(create_table)
+>>>>>>> Restored append keyword with a FutureWarning
cur.close()
cur = con.cursor()
@@ -241,17 +246,28 @@ def table_exists(name, con, flavor):
def get_sqltype(pytype, flavor):
sqltype = {'mysql': 'VARCHAR (63)',
- 'sqlite': 'TEXT'}
-
- if issubclass(pytype, np.floating):
+ 'oracle': 'VARCHAR2',
+ 'sqlite': 'TEXT',
+ 'postgresql': 'TEXT'}
+ if issubclass(pytype, np.number):
sqltype['mysql'] = 'FLOAT'
sqltype['sqlite'] = 'REAL'
+<<<<<<< HEAD
if issubclass(pytype, np.integer):
#TODO: Refine integer size.
sqltype['mysql'] = 'BIGINT'
sqltype['sqlite'] = 'INTEGER'
+=======
+ sqltype['postgresql'] = 'NUMBER'
+ if issubclass(pytype, np.integer):
+ #TODO: Refine integer size.
+ sqltype['mysql'] = 'BIGINT'
+ sqltype['oracle'] = 'PLS_INTEGER'
+ sqltype['sqlite'] = 'INTEGER'
+ sqltype['postgresql'] = 'INTEGER'
+>>>>>>> Restored append keyword with a FutureWarning
if issubclass(pytype, np.datetime64) or pytype is datetime:
# Caution: np.datetime64 is also a subclass of np.number.
sqltype['mysql'] = 'DATETIME'
@@ -276,7 +292,10 @@ def get_schema(frame, name, flavor, keys=None):
columns = ',\n '.join('[%s] %s' % x for x in column_types)
else:
columns = ',\n '.join('`%s` %s' % x for x in column_types)
+<<<<<<< HEAD
+=======
+>>>>>>> added test keyword_as_column_names
keystr = ''
if keys is not None:
if isinstance(keys, basestring):
diff --git a/pandas/io/tests/test_sql.py b/pandas/io/tests/test_sql.py
index b443c55f97b8d..4e4fd6141fc83 100644
--- a/pandas/io/tests/test_sql.py
+++ b/pandas/io/tests/test_sql.py
@@ -27,6 +27,40 @@
bool: lambda x: "'%s'" % x,
}
+def format_query(sql, *args):
+ """
+
+ """
+ processed_args = []
+ for arg in args:
+ if isinstance(arg, float) and isnull(arg):
+ arg = None
+
+ formatter = _formatters[type(arg)]
+ processed_args.append(formatter(arg))
+
+ return sql % tuple(processed_args)
+
+def _skip_if_no_MySQLdb():
+ try:
+ import MySQLdb
+ except ImportError:
+ raise nose.SkipTest('MySQLdb not installed, skipping')
+
+from datetime import datetime
+
+_formatters = {
+ datetime: lambda dt: "'%s'" % date_format(dt),
+ str: lambda x: "'%s'" % x,
+ np.str_: lambda x: "'%s'" % x,
+ unicode: lambda x: "'%s'" % x,
+ float: lambda x: "%.8f" % x,
+ int: lambda x: "%s" % x,
+ type(None): lambda x: "NULL",
+ np.float64: lambda x: "%.10f" % x,
+ bool: lambda x: "'%s'" % x,
+}
+
def format_query(sql, *args):
"""
@@ -219,19 +253,18 @@ def test_keyword_as_column_names(self):
df = DataFrame({'From':np.ones(5)})
sql.write_frame(df, con = self.db, name = 'testkeywords')
-
class TestMySQL(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def setUp(self):
try:
- import MySQLdb
+ import MySQLdb
except ImportError:
raise nose.SkipTest
try:
self.db = MySQLdb.connect(read_default_group='pandas')
except MySQLdb.Error, e:
raise nose.SkipTest(
- "Cannot connect to database. "
+ "Cannot connect to database. "
"Create a group of connection parameters under the heading "
"[pandas] in your system's mysql default file, "
"typically located at ~/.my.cnf or /etc/.my.cnf. ")
@@ -262,6 +295,7 @@ def test_write_row_by_row(self):
self.db.commit()
+>>>>>>> added mysql nosetests and made them optional
result = sql.read_frame("select * from test", con=self.db)
result.index = frame.index
tm.assert_frame_equal(result, frame)
@@ -388,7 +422,7 @@ def _check_roundtrip(self, frame):
def test_tquery(self):
try:
- import MySQLdb
+ import MySQLdb
except ImportError:
raise nose.SkipTest
frame = tm.makeTimeDataFrame()
@@ -413,7 +447,7 @@ def test_tquery(self):
def test_uquery(self):
try:
- import MySQLdb
+ import MySQLdb
except ImportError:
raise nose.SkipTest
frame = tm.makeTimeDataFrame()
@@ -441,7 +475,7 @@ def test_keyword_as_column_names(self):
'''
_skip_if_no_MySQLdb()
df = DataFrame({'From':np.ones(5)})
- sql.write_frame(df, con = self.db, name = 'testkeywords',
+ sql.write_frame(df, con = self.db, name = 'testkeywords',
if_exists='replace', flavor='mysql')
| I added mysql support and (untested!) support for oracle. Some parts of the code are ready to support other flavors -- it should be easy to extend.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2482 | 2012-12-10T21:46:53Z | 2013-07-08T20:07:15Z | null | 2022-10-13T00:14:45Z |
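The `get_sqltype` changes above dispatch on both the Python/numpy type and the target flavor. A simplified pure-Python sketch of that dispatch (numpy types and the oracle/postgresql branches omitted; type names follow the diff, but this is a sketch, not the exact pandas function):

```python
def get_sqltype(pytype, flavor):
    # default: store as a string column
    sqltype = {'mysql': 'VARCHAR (63)', 'sqlite': 'TEXT'}
    if issubclass(pytype, float):
        sqltype = {'mysql': 'FLOAT', 'sqlite': 'REAL'}
    elif issubclass(pytype, int) and not issubclass(pytype, bool):
        # bool subclasses int in Python, so exclude it explicitly
        sqltype = {'mysql': 'BIGINT', 'sqlite': 'INTEGER'}
    return sqltype[flavor]

get_sqltype(float, 'sqlite')  # 'REAL'
get_sqltype(int, 'mysql')     # 'BIGINT'
```

Extending to a new flavor then only means adding one more key per branch, which is the design the PR body refers to.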
BUG: create correctly named indexables in HDFStore Tables | diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 8af7151cb898c..2e75d3f067b3a 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -855,33 +855,40 @@ def _read_frame_table(self, group, where=None):
return t.read(where)
-class Col(object):
- """ a column description class
+class IndexCol(object):
+ """ an index column description class
Parameters
----------
+ axis : axis which I reference
values : the ndarray like converted values
kind : a string description of this type
typ : the pytables type
+ pos : the position in the pytables
"""
is_indexable = True
- def __init__(self, values = None, kind = None, typ = None, cname = None, itemsize = None, name = None, kind_attr = None, **kwargs):
+ def __init__(self, values = None, kind = None, typ = None, cname = None, itemsize = None, name = None, axis = None, kind_attr = None, pos = None, **kwargs):
self.values = values
self.kind = kind
self.typ = typ
self.itemsize = itemsize
- self.name = None
+ self.name = name
self.cname = cname
- self.kind_attr = None
+ self.kind_attr = kind_attr
+ self.axis = axis
+ self.pos = pos
self.table = None
if name is not None:
self.set_name(name, kind_attr)
+ if pos is not None:
+ self.set_pos(pos)
def set_name(self, name, kind_attr = None):
+ """ set the name of this indexer """
self.name = name
self.kind_attr = kind_attr or "%s_kind" % name
if self.cname is None:
@@ -889,12 +896,25 @@ def set_name(self, name, kind_attr = None):
return self
+ def set_axis(self, axis):
+ """ set the axis over which I index """
+ self.axis = axis
+
+ return self
+
+ def set_pos(self, pos):
+ """ set the position of this column in the Table """
+ self.pos = pos
+ if pos is not None and self.typ is not None:
+ self.typ._v_pos = pos
+ return self
+
def set_table(self, table):
self.table = table
return self
def __repr__(self):
- return "name->%s,cname->%s,kind->%s" % (self.name,self.cname,self.kind)
+ return "name->%s,cname->%s,axis->%s,pos->%s,kind->%s" % (self.name,self.cname,self.axis,self.pos,self.kind)
__str__ = __repr__
@@ -921,11 +941,6 @@ def attrs(self):
def description(self):
return self.table.description
- @property
- def pos(self):
- """ my column position """
- return getattr(self.col,'_v_pos',None)
-
@property
def col(self):
""" return my current col description """
@@ -948,7 +963,7 @@ def maybe_set_size(self, min_itemsize = None, **kwargs):
min_itemsize = min_itemsize.get(self.name)
if min_itemsize is not None and self.typ.itemsize < min_itemsize:
- self.typ = _tables().StringCol(itemsize = min_itemsize, pos = getattr(self.typ,'pos',None))
+ self.typ = _tables().StringCol(itemsize = min_itemsize, pos = self.pos)
def validate_and_set(self, table, append, **kwargs):
self.set_table(table)
@@ -984,7 +999,7 @@ def set_attr(self):
""" set the kind for this colummn """
setattr(self.attrs,self.kind_attr,self.kind)
-class DataCol(Col):
+class DataCol(IndexCol):
""" a data holding column, by definition this is not indexable
Parameters
@@ -1072,10 +1087,18 @@ class Table(object):
parent : my parent HDFStore
group : the group node where the table resides
+ Attrs in Table Node
+ -------------------
+ These are attributes that are store in the main table node, they are necessary
+ to recreate these tables when read back in.
+
+ index_axes: a list of tuples of the (original indexing axis and index column)
+ non_index_axes: a list of tuples of the (original index axis and columns on a non-indexing axis)
+ values_axes : a list of the columns which comprise the data of this table
+
"""
table_type = None
ndim = None
- axis_names = ['index','column']
def __init__(self, parent, group):
self.parent = parent
@@ -1083,7 +1106,7 @@ def __init__(self, parent, group):
self.index_axes = []
self.non_index_axes = []
self.values_axes = []
- self.selection = None
+ self.selection = None
@property
def pandas_type(self):
@@ -1136,22 +1159,17 @@ def attrs(self):
def description(self):
return self.table.description
- @property
- def is_transpose(self):
- """ does my data need transposition """
- return False
-
@property
def axes(self):
return itertools.chain(self.index_axes, self.values_axes)
def kinds_map(self):
- """ return a diction of columns -> kinds """
- return dict([ (a.cname,a.kind) for a in self.axes ])
+ """ return a list of the kinds for each columns """
+ return [ (a.cname,a.kind) for a in self.index_axes ]
def index_cols(self):
""" return a list of my index cols """
- return [ i.cname for i in self.index_axes ]
+ return [ (i.axis,i.cname) for i in self.index_axes ]
def values_cols(self):
""" return a list of my values cols """
@@ -1184,10 +1202,11 @@ def indexables(self):
self._indexables = []
# index columns
- self._indexables.extend([ Col(name = i) for i in self.attrs.index_cols ])
+ self._indexables.extend([ IndexCol(name = name, axis = axis, pos = i) for i, (axis, name) in enumerate(self.attrs.index_cols) ])
# data columns
- self._indexables.extend([ DataCol.create_for_block(i = i) for i, c in enumerate(self.attrs.values_cols) ])
+ base_pos = len(self._indexables)
+ self._indexables.extend([ DataCol.create_for_block(i = i, pos = base_pos + i ) for i, c in enumerate(self.attrs.values_cols) ])
return self._indexables
@@ -1199,7 +1218,7 @@ def create_index(self, columns = None, optlevel = None, kind = None):
Parameters
----------
- columns : None or list_like (the columns to index - currently supports index/column)
+ columns : None or list_like (the indexers to index)
optlevel: optimization level (defaults to 6)
kind : kind of index (defaults to 'medium')
@@ -1212,8 +1231,10 @@ def create_index(self, columns = None, optlevel = None, kind = None):
table = self.table
if table is None: return
+ self.infer_axes()
+
if columns is None:
- columns = ['index']
+ columns = [ self.index_axes[0].name ]
if not isinstance(columns, (tuple,list)):
columns = [ columns ]
@@ -1253,15 +1274,18 @@ def create_axes(self, axes_to_index, obj, validate = True, min_itemsize = None):
"""
- self.index_axes = []
+ self.index_axes = []
self.non_index_axes = []
# create axes to index and non_index
j = 0
for i, a in enumerate(obj.axes):
+
if i in axes_to_index:
- self.index_axes.append(_convert_index(a).set_name(self.axis_names[j]))
+ name = obj._AXIS_NAMES[i]
+ self.index_axes.append(_convert_index(a).set_name(name).set_axis(i).set_pos(j))
j += 1
+
else:
self.non_index_axes.append((i,list(a)))
@@ -1289,7 +1313,8 @@ def create_axes(self, axes_to_index, obj, validate = True, min_itemsize = None):
except (Exception), detail:
raise Exception("cannot coerce data type -> [dtype->%s]" % b.dtype.name)
- dc = DataCol.create_for_block(i = i, values = list(b.items), kind = b.dtype.name, typ = atom, data = values)
+ dc = DataCol.create_for_block(i = i, values = list(b.items), kind = b.dtype.name, typ = atom, data = values, pos = j)
+ j += 1
self.values_axes.append(dc)
def create_description(self, compression = None, complevel = None):
@@ -1352,7 +1377,9 @@ class LegacyTable(Table):
that can be easily searched
"""
- _indexables = [Col(name = 'index'),Col(name = 'column', index_kind = 'columns_kind'), DataCol(name = 'fields', cname = 'values', kind_attr = 'fields') ]
+ _indexables = [IndexCol(name = 'index', axis = 0, pos = 0),
+ IndexCol(name = 'column', axis = 1, pos = 1, index_kind = 'columns_kind'),
+ DataCol( name = 'fields', cname = 'values', kind_attr = 'fields', pos = 2) ]
table_type = 'legacy'
def write(self, **kwargs):
@@ -1482,10 +1509,10 @@ def write(self, axes_to_index, obj, append=False, compression=None,
a.validate_and_set(table, append)
# add the rows
- self._write_data()
+ self.write_data()
self.handle.flush()
- def _write_data(self):
+ def write_data(self):
""" fast writing of data: requires specific cython routines each axis shape """
masks = []
@@ -1632,10 +1659,10 @@ def create_table(parent, group, typ = None, **kwargs):
def _convert_index(index):
if isinstance(index, DatetimeIndex):
converted = index.asi8
- return Col(converted, 'datetime64', _tables().Int64Col())
+ return IndexCol(converted, 'datetime64', _tables().Int64Col())
elif isinstance(index, (Int64Index, PeriodIndex)):
atom = _tables().Int64Col()
- return Col(index.values, 'integer', atom)
+ return IndexCol(index.values, 'integer', atom)
if isinstance(index, MultiIndex):
raise Exception('MultiIndex not supported here!')
@@ -1646,36 +1673,36 @@ def _convert_index(index):
if inferred_type == 'datetime64':
converted = values.view('i8')
- return Col(converted, 'datetime64', _tables().Int64Col())
+ return IndexCol(converted, 'datetime64', _tables().Int64Col())
elif inferred_type == 'datetime':
converted = np.array([(time.mktime(v.timetuple()) +
v.microsecond / 1E6) for v in values],
dtype=np.float64)
- return Col(converted, 'datetime', _tables().Time64Col())
+ return IndexCol(converted, 'datetime', _tables().Time64Col())
elif inferred_type == 'date':
converted = np.array([time.mktime(v.timetuple()) for v in values],
dtype=np.int32)
- return Col(converted, 'date', _tables().Time32Col())
+ return IndexCol(converted, 'date', _tables().Time32Col())
elif inferred_type == 'string':
# atom = _tables().ObjectAtom()
# return np.asarray(values, dtype='O'), 'object', atom
converted = np.array(list(values), dtype=np.str_)
itemsize = converted.dtype.itemsize
- return Col(converted, 'string', _tables().StringCol(itemsize), itemsize = itemsize)
+ return IndexCol(converted, 'string', _tables().StringCol(itemsize), itemsize = itemsize)
elif inferred_type == 'unicode':
atom = _tables().ObjectAtom()
- return Col(np.asarray(values, dtype='O'), 'object', atom)
+ return IndexCol(np.asarray(values, dtype='O'), 'object', atom)
elif inferred_type == 'integer':
# take a guess for now, hope the values fit
atom = _tables().Int64Col()
- return Col(np.asarray(values, dtype=np.int64), 'integer', atom)
+ return IndexCol(np.asarray(values, dtype=np.int64), 'integer', atom)
elif inferred_type == 'floating':
atom = _tables().Float64Col()
- return Col(np.asarray(values, dtype=np.float64), 'float', atom)
+ return IndexCol(np.asarray(values, dtype=np.float64), 'float', atom)
else: # pragma: no cover
atom = _tables().ObjectAtom()
- return Col(np.asarray(values, dtype='O'), 'object', atom)
+ return IndexCol(np.asarray(values, dtype='O'), 'object', atom)
def _read_array(group, key):
@@ -1812,13 +1839,16 @@ class Term(object):
_ops = ['<=','<','>=','>','!=','=']
_search = re.compile("^(?P<field>\w+)(?P<op>%s)(?P<value>.+)$" % '|'.join(_ops))
_index = ['index','major_axis','major']
- _column = ['column','minor_axis','minor']
+ _column = ['column','columns','minor_axis','minor']
def __init__(self, field, op = None, value = None, kinds = None):
self.field = None
self.op = None
self.value = None
- self.kinds = kinds or dict()
+
+ if kinds is None:
+ kinds = []
+ self.kinds = dict(kinds)
self.filter = None
self.condition = None
@@ -1871,13 +1901,11 @@ def __init__(self, field, op = None, value = None, kinds = None):
if self.field is None or self.op is None or self.value is None:
raise Exception("Could not create this term [%s]" % str(self))
- # valid field name
- if self.field in self._index:
- self.field = 'index'
- elif self.field in self._column:
- self.field = 'column'
- else:
- raise Exception("field is not a valid index/column for this term [%s]" % str(self))
+ # map alias for field names
+ if self.field in self._index and len(kinds) > 0:
+ self.field = kinds[0][0]
+ elif self.field in self._column and len(kinds) > 1:
+ self.field = kinds[1][0]
# we have valid conditions
if self.op in ['>','>=','<','<=']:
@@ -1935,7 +1963,8 @@ def eval(self):
def convert_value(self, v):
- if self.field == 'index':
+ #### a little hacky here, need to really figure out what we should convert ####
+ if self.field == 'index' or self.field == 'major_axis':
if self.kind == 'datetime64' :
return [lib.Timestamp(v).value, None]
elif isinstance(v, datetime):
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 2e37e48406e0a..6f7d348caa266 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -207,14 +207,14 @@ def test_append_with_strings(self):
tm.assert_panel_equal(self.store['s1'], expected)
# test dict format
- self.store.append('s2', wp, min_itemsize = { 'column' : 20 })
+ self.store.append('s2', wp, min_itemsize = { 'minor_axis' : 20 })
self.store.append('s2', wp2)
expected = concat([ wp, wp2], axis = 2)
expected = expected.reindex(minor_axis = sorted(expected.minor_axis))
tm.assert_panel_equal(self.store['s2'], expected)
# apply the wrong field (similar to #1)
- self.store.append('s3', wp, min_itemsize = { 'index' : 20 })
+ self.store.append('s3', wp, min_itemsize = { 'major_axis' : 20 })
self.assertRaises(Exception, self.store.append, 's3')
# test truncation of bigger strings
@@ -226,8 +226,8 @@ def test_create_table_index(self):
self.store.append('p5', wp)
self.store.create_table_index('p5')
- assert(self.store.handle.root.p5.table.cols.index.is_indexed == True)
- assert(self.store.handle.root.p5.table.cols.column.is_indexed == False)
+ assert(self.store.handle.root.p5.table.cols.major_axis.is_indexed == True)
+ assert(self.store.handle.root.p5.table.cols.minor_axis.is_indexed == False)
df = tm.makeTimeDataFrame()
self.store.append('f', df[:10])
diff --git a/pandas/lib.pyx b/pandas/lib.pyx
index a9d9f76f27358..e776eb9df7265 100644
--- a/pandas/lib.pyx
+++ b/pandas/lib.pyx
@@ -749,26 +749,26 @@ from cpython cimport (PyDict_New, PyDict_GetItem, PyDict_SetItem,
@cython.boundscheck(False)
@cython.wraparound(False)
-def create_hdf_rows_2d(ndarray index, ndarray[np.uint8_t, ndim=1] mask,
+def create_hdf_rows_2d(ndarray indexer0, ndarray[np.uint8_t, ndim=1] mask,
list values):
""" return a list of objects ready to be converted to rec-array format """
cdef:
- unsigned int i, b, n_index, n_blocks, tup_size
+ unsigned int i, b, n_indexer0, n_blocks, tup_size
ndarray v
list l
object tup, val
- n_index = index.shape[0]
- n_blocks = len(values)
- tup_size = n_blocks+1
+ n_indexer0 = indexer0.shape[0]
+ n_blocks = len(values)
+ tup_size = n_blocks+1
l = []
- for i from 0 <= i < n_index:
+ for i from 0 <= i < n_indexer0:
if not mask[i]:
tup = PyTuple_New(tup_size)
- val = index[i]
+ val = indexer0[i]
PyTuple_SET_ITEM(tup, 0, val)
Py_INCREF(val)
@@ -784,40 +784,40 @@ def create_hdf_rows_2d(ndarray index, ndarray[np.uint8_t, ndim=1] mask,
@cython.boundscheck(False)
@cython.wraparound(False)
-def create_hdf_rows_3d(ndarray index, ndarray columns,
+def create_hdf_rows_3d(ndarray indexer0, ndarray indexer1,
ndarray[np.uint8_t, ndim=2] mask, list values):
""" return a list of objects ready to be converted to rec-array format """
cdef:
- unsigned int i, j, n_columns, n_index, n_blocks, tup_size
+ unsigned int i, j, b, n_indexer0, n_indexer1, n_blocks, tup_size
ndarray v
list l
object tup, val
- n_index = index.shape[0]
- n_columns = columns.shape[0]
- n_blocks = len(values)
- tup_size = n_blocks+2
+ n_indexer0 = indexer0.shape[0]
+ n_indexer1 = indexer1.shape[0]
+ n_blocks = len(values)
+ tup_size = n_blocks+2
l = []
- for i from 0 <= i < n_index:
+ for i from 0 <= i < n_indexer0:
- for c from 0 <= c < n_columns:
+ for j from 0 <= j < n_indexer1:
- if not mask[i, c]:
+ if not mask[i, j]:
tup = PyTuple_New(tup_size)
- val = columns[c]
+ val = indexer0[i]
PyTuple_SET_ITEM(tup, 0, val)
Py_INCREF(val)
- val = index[i]
+ val = indexer1[j]
PyTuple_SET_ITEM(tup, 1, val)
Py_INCREF(val)
for b from 0 <= b < n_blocks:
- v = values[b][:, i, c]
+ v = values[b][:, i, j]
PyTuple_SET_ITEM(tup, b+2, v)
Py_INCREF(v)
| - indexable columns were created and named in a legacy format, now named like the
indexable in the object, e.g. 'index' for DataFrame, or 'major_axis'/'minor_axis' for Panel
- fixes an issue if we want to support, say, a 'column'-oriented Table
(e.g. instead of storing the transpose,
store with 'column' indexables) - not implemented yet, but supported now
| https://api.github.com/repos/pandas-dev/pandas/pulls/2481 | 2012-12-10T19:17:46Z | 2012-12-10T21:28:11Z | null | 2012-12-10T21:28:11Z |
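The IndexCol class in this diff names each indexable after the object's own axis and configures it through chained setters (`set_name`/`set_axis`/`set_pos` each return `self`), so `create_axes` can build a column in one expression. A stripped-down sketch of that pattern, with the pytables details omitted:

```python
class IndexCol:
    """Sketch of IndexCol's chained-setter style."""

    def __init__(self, name=None, axis=None, pos=None):
        self.name, self.axis, self.pos = name, axis, pos
        self.cname = None  # column name in the table, defaults to name

    def set_name(self, name):
        self.name = name
        if self.cname is None:
            self.cname = name
        return self  # returning self lets the calls chain

    def set_axis(self, axis):
        self.axis = axis
        return self

    def set_pos(self, pos):
        self.pos = pos
        return self

# as in create_axes: name the indexer after the object's real axis
col = IndexCol().set_name('major_axis').set_axis(1).set_pos(0)
```

Because the column is named `major_axis` rather than the legacy `index`, the on-disk table exposes the same axis names the in-memory object does.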
ENH: minor code refactoring to support sub-classes better (in Panel) | diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index fa469242fa4e3..424074de2ca7c 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -96,35 +96,14 @@ def _arith_method(func, name):
def f(self, other):
if not np.isscalar(other):
- raise ValueError('Simple arithmetic with Panel can only be '
- 'done with scalar values')
+ raise ValueError('Simple arithmetic with %s can only be '
+ 'done with scalar values' % self._constructor.__name__)
return self._combine(other, func)
f.__name__ = name
return f
-def _panel_arith_method(op, name):
- @Substitution(op)
- def f(self, other, axis = 0):
- """
- Wrapper method for %s
-
- Parameters
- ----------
- other : DataFrame or Panel class
- axis : {'items', 'major', 'minor'}
- Axis to broadcast over
-
- Returns
- -------
- Panel
- """
- return self._combine(other, op, axis=axis)
-
- f.__name__ = name
- return f
-
def _comp_method(func, name):
def na_op(x, y):
@@ -164,27 +143,6 @@ def f(self, other):
return f
-_agg_doc = """
-Return %(desc)s over requested axis
-
-Parameters
-----------
-axis : {'items', 'major', 'minor'} or {0, 1, 2}
-skipna : boolean, default True
- Exclude NA/null values. If an entire row/column is NA, the result
- will be NA
-
-Returns
--------
-%(outname)s : DataFrame
-"""
-
-_na_info = """
-
-NA/null values are %s.
-If all values are NA, result will be NA"""
-
-
class Panel(NDFrame):
_AXIS_ORDERS = ['items','major_axis','minor_axis']
_AXIS_NUMBERS = dict([ (a,i) for i, a in enumerate(_AXIS_ORDERS) ])
@@ -217,13 +175,24 @@ def _constructor(self):
# return the type of the slice constructor
_constructor_sliced = DataFrame
- def _construct_axes_dict(self, axes = None):
+ def _construct_axes_dict(self, axes = None, **kwargs):
""" return an axes dictionary for myself """
- return dict([ (a,getattr(self,a)) for a in (axes or self._AXIS_ORDERS) ])
+ d = dict([ (a,getattr(self,a)) for a in (axes or self._AXIS_ORDERS) ])
+ d.update(kwargs)
+ return d
- def _construct_axes_dict_for_slice(self, axes = None):
+ @staticmethod
+ def _construct_axes_dict_from(self, axes, **kwargs):
+ """ return an axes dictionary for the passed axes """
+ d = dict([ (a,ax) for a,ax in zip(self._AXIS_ORDERS,axes) ])
+ d.update(kwargs)
+ return d
+
+ def _construct_axes_dict_for_slice(self, axes = None, **kwargs):
""" return an axes dictionary for myself """
- return dict([ (self._AXIS_SLICEMAP[a],getattr(self,a)) for a in (axes or self._AXIS_ORDERS) ])
+ d = dict([ (self._AXIS_SLICEMAP[a],getattr(self,a)) for a in (axes or self._AXIS_ORDERS) ])
+ d.update(kwargs)
+ return d
__add__ = _arith_method(operator.add, '__add__')
__sub__ = _arith_method(operator.sub, '__sub__')
@@ -296,8 +265,7 @@ def _from_axes(cls, data, axes):
if isinstance(data, BlockManager):
return cls(data)
else:
- d = dict([ (i, a) for i, a in zip(cls._AXIS_ORDERS,axes) ])
- d['copy'] = False
+ d = cls._construct_axes_dict_from(cls, axes, copy = False)
return cls(data, **d)
def _init_dict(self, data, axes, dtype=None):
@@ -436,8 +404,7 @@ def __array__(self, dtype=None):
return self.values
def __array_wrap__(self, result):
- d = self._construct_axes_dict(self._AXIS_ORDERS)
- d['copy'] = False
+ d = self._construct_axes_dict(self._AXIS_ORDERS, copy = False)
return self._constructor(result, **d)
#----------------------------------------------------------------------
@@ -455,8 +422,7 @@ def _compare_constructor(self, other, func):
for col in getattr(self,self._info_axis):
new_data[col] = func(self[col], other[col])
- d = self._construct_axes_dict()
- d['copy'] = False
+ d = self._construct_axes_dict(copy = False)
return self._constructor(data=new_data, **d)
# boolean operators
@@ -552,9 +518,7 @@ def iteritems(self):
iterkv = iteritems
def _get_plane_axes(self, axis):
- """
-
- """
+ """ get my plane axes: these are already (as compared with higher level planes), as we are returning a DataFrame axes """
axis = self._get_axis_name(axis)
if axis == 'major_axis':
@@ -580,8 +544,7 @@ def ix(self):
return self._ix
def _wrap_array(self, arr, axes, copy=False):
- d = dict([ (a,ax) for a,ax in zip(self._AXIS_ORDERS,axes) ])
- d['copy'] = False
+ d = self._construct_axes_dict_from(self, axes, copy = copy)
return self._constructor(arr, **d)
fromDict = from_dict
@@ -627,7 +590,7 @@ def to_excel(self, path, na_rep=''):
# TODO: needed?
def keys(self):
- return list(self.items)
+ return list(getattr(self,self._info_axis))
def _get_values(self):
self._consolidate_inplace()
@@ -684,9 +647,8 @@ def set_value(self, *args):
frame.set_value(*args[1:])
return self
except KeyError:
- axes = self._expand_axes(args)
- d = dict([ (a,ax) for a,ax in zip(self._AXIS_ORDERS,axes) ])
- d['copy'] = False
+ axes = self._expand_axes(args)
+ d = self._construct_axes_dict_from(self, axes, copy = False)
result = self.reindex(**d)
likely_dtype = com._infer_dtype(args[-1])
@@ -916,8 +878,7 @@ def reindex_like(self, other, method=None):
-------
reindexed : Panel
"""
- d = other._construct_axes_dict()
- d['method'] = method
+ d = other._construct_axes_dict(method = method)
return self.reindex(**d)
def dropna(self, axis=0, how='any'):
@@ -1046,16 +1007,6 @@ def bfill(self):
return self.fillna(method='bfill')
- add = _panel_arith_method(operator.add, 'add')
- subtract = sub = _panel_arith_method(operator.sub, 'subtract')
- multiply = mul = _panel_arith_method(operator.mul, 'multiply')
-
- try:
- divide = div = _panel_arith_method(operator.div, 'divide')
- except AttributeError: # pragma: no cover
- # Python 3
- divide = div = _panel_arith_method(operator.truediv, 'divide')
-
def major_xs(self, key, copy=True):
"""
Return slice of panel along major axis
@@ -1203,7 +1154,7 @@ def transpose(self, *args, **kwargs):
if len(axes) != len(set(axes)):
raise ValueError('Must specify %s unique axes' % self._AXIS_LEN)
- new_axes = dict([ (a,self._get_axis(x)) for a, x in zip(self._AXIS_ORDERS,axes)])
+ new_axes = self._construct_axes_dict_from(self, [ self._get_axis(x) for x in axes])
new_values = self.values.transpose(tuple(axes))
if kwargs.get('copy') or (len(args) and args[-1]):
new_values = new_values.copy()
@@ -1337,56 +1288,6 @@ def count(self, axis='major'):
return self._wrap_result(result, axis)
- @Substitution(desc='sum', outname='sum')
- @Appender(_agg_doc)
- def sum(self, axis='major', skipna=True):
- return self._reduce(nanops.nansum, axis=axis, skipna=skipna)
-
- @Substitution(desc='mean', outname='mean')
- @Appender(_agg_doc)
- def mean(self, axis='major', skipna=True):
- return self._reduce(nanops.nanmean, axis=axis, skipna=skipna)
-
- @Substitution(desc='unbiased variance', outname='variance')
- @Appender(_agg_doc)
- def var(self, axis='major', skipna=True):
- return self._reduce(nanops.nanvar, axis=axis, skipna=skipna)
-
- @Substitution(desc='unbiased standard deviation', outname='stdev')
- @Appender(_agg_doc)
- def std(self, axis='major', skipna=True):
- return self.var(axis=axis, skipna=skipna).apply(np.sqrt)
-
- @Substitution(desc='unbiased skewness', outname='skew')
- @Appender(_agg_doc)
- def skew(self, axis='major', skipna=True):
- return self._reduce(nanops.nanskew, axis=axis, skipna=skipna)
-
- @Substitution(desc='product', outname='prod')
- @Appender(_agg_doc)
- def prod(self, axis='major', skipna=True):
- return self._reduce(nanops.nanprod, axis=axis, skipna=skipna)
-
- @Substitution(desc='compounded percentage', outname='compounded')
- @Appender(_agg_doc)
- def compound(self, axis='major', skipna=True):
- return (1 + self).prod(axis=axis, skipna=skipna) - 1
-
- @Substitution(desc='median', outname='median')
- @Appender(_agg_doc)
- def median(self, axis='major', skipna=True):
- return self._reduce(nanops.nanmedian, axis=axis, skipna=skipna)
-
- @Substitution(desc='maximum', outname='maximum')
- @Appender(_agg_doc)
- def max(self, axis='major', skipna=True):
- return self._reduce(nanops.nanmax, axis=axis, skipna=skipna)
-
- @Substitution(desc='minimum', outname='minimum')
- @Appender(_agg_doc)
- def min(self, axis='major', skipna=True):
- return self._reduce(nanops.nanmin, axis=axis, skipna=skipna)
-
def shift(self, lags, axis='major'):
"""
Shift major or minor axis by specified number of leads/lags. Drops
@@ -1524,9 +1425,11 @@ def update(self, other, join='left', overwrite=True, filter_func=None,
if not isinstance(other, self._constructor):
other = self._constructor(other)
- other = other.reindex(items=self.items)
+ axis = self._info_axis
+ axis_values = getattr(self,axis)
+ other = other.reindex(**{ axis : axis_values })
- for frame in self.items:
+ for frame in axis_values:
self[frame].update(other[frame], join, overwrite, filter_func,
raise_conflict)
@@ -1645,6 +1548,124 @@ def _extract_axis(self, data, axis=0, intersect=False):
return _ensure_index(index)
+ @classmethod
+ def _add_aggregate_operations(cls):
+ """ add the operations to the cls; evaluate the doc strings again """
+
+ # doc strings substitors
+ _agg_doc = """
+Wrapper method for %s
+
+Parameters
+----------
+other : """ + "%s or %s" % (cls._constructor_sliced.__name__,cls.__name__) + """
+axis : {""" + ', '.join(cls._AXIS_ORDERS) + "}" + """
+Axis to broadcast over
+
+Returns
+-------
+""" + cls.__name__ + "\n"
+
+ def _panel_arith_method(op, name):
+ @Substitution(op)
+ @Appender(_agg_doc)
+ def f(self, other, axis = 0):
+ return self._combine(other, op, axis=axis)
+ f.__name__ = name
+ return f
+
+ cls.add = _panel_arith_method(operator.add, 'add')
+ cls.subtract = cls.sub = _panel_arith_method(operator.sub, 'subtract')
+ cls.multiply = cls.mul = _panel_arith_method(operator.mul, 'multiply')
+
+ try:
+ cls.divide = cls.div = _panel_arith_method(operator.div, 'divide')
+ except AttributeError: # pragma: no cover
+ # Python 3
+ cls.divide = cls.div = _panel_arith_method(operator.truediv, 'divide')
+
+
+ _agg_doc = """
+Return %(desc)s over requested axis
+
+Parameters
+----------
+axis : {""" + ', '.join(cls._AXIS_ORDERS) + "} or {" + ', '.join([ str(i) for i in range(cls._AXIS_LEN) ]) + """}
+skipna : boolean, default True
+ Exclude NA/null values. If an entire row/column is NA, the result
+ will be NA
+
+Returns
+-------
+%(outname)s : """ + cls._constructor_sliced.__name__ + "\n"
+
+ _na_info = """
+
+NA/null values are %s.
+If all values are NA, result will be NA"""
+
+ @Substitution(desc='sum', outname='sum')
+ @Appender(_agg_doc)
+ def sum(self, axis='major', skipna=True):
+ return self._reduce(nanops.nansum, axis=axis, skipna=skipna)
+ cls.sum = sum
+
+ @Substitution(desc='mean', outname='mean')
+ @Appender(_agg_doc)
+ def mean(self, axis='major', skipna=True):
+ return self._reduce(nanops.nanmean, axis=axis, skipna=skipna)
+ cls.mean = mean
+
+ @Substitution(desc='unbiased variance', outname='variance')
+ @Appender(_agg_doc)
+ def var(self, axis='major', skipna=True):
+ return self._reduce(nanops.nanvar, axis=axis, skipna=skipna)
+ cls.var = var
+
+ @Substitution(desc='unbiased standard deviation', outname='stdev')
+ @Appender(_agg_doc)
+ def std(self, axis='major', skipna=True):
+ return self.var(axis=axis, skipna=skipna).apply(np.sqrt)
+ cls.std = std
+
+ @Substitution(desc='unbiased skewness', outname='skew')
+ @Appender(_agg_doc)
+ def skew(self, axis='major', skipna=True):
+ return self._reduce(nanops.nanskew, axis=axis, skipna=skipna)
+ cls.skew = skew
+
+ @Substitution(desc='product', outname='prod')
+ @Appender(_agg_doc)
+ def prod(self, axis='major', skipna=True):
+ return self._reduce(nanops.nanprod, axis=axis, skipna=skipna)
+ cls.prod = prod
+
+ @Substitution(desc='compounded percentage', outname='compounded')
+ @Appender(_agg_doc)
+ def compound(self, axis='major', skipna=True):
+ return (1 + self).prod(axis=axis, skipna=skipna) - 1
+ cls.compound = compound
+
+ @Substitution(desc='median', outname='median')
+ @Appender(_agg_doc)
+ def median(self, axis='major', skipna=True):
+ return self._reduce(nanops.nanmedian, axis=axis, skipna=skipna)
+ cls.median = median
+
+ @Substitution(desc='maximum', outname='maximum')
+ @Appender(_agg_doc)
+ def max(self, axis='major', skipna=True):
+ return self._reduce(nanops.nanmax, axis=axis, skipna=skipna)
+ cls.max = max
+
+ @Substitution(desc='minimum', outname='minimum')
+ @Appender(_agg_doc)
+ def min(self, axis='major', skipna=True):
+ return self._reduce(nanops.nanmin, axis=axis, skipna=skipna)
+ cls.min = min
+
+Panel._add_aggregate_operations()
+
WidePanel = Panel
LongPanel = DataFrame
@@ -1660,7 +1681,7 @@ def install_ipython_completers(): # pragma: no cover
@complete_object.when_type(Panel)
def complete_dataframe(obj, prev_completions):
- return prev_completions + [c for c in obj.items \
+ return prev_completions + [c for c in obj.keys() \
if isinstance(c, basestring) and py3compat.isidentifier(c)]
# Importing IPython brings in about 200 modules, so we want to avoid it unless
diff --git a/pandas/core/panel4d.py b/pandas/core/panel4d.py
index 649739f3f60ce..71e3e7d68e252 100644
--- a/pandas/core/panel4d.py
+++ b/pandas/core/panel4d.py
@@ -1,9 +1,9 @@
""" Panel4D: a 4-d dict like collection of panels """
+from pandas.core.panelnd import create_nd_panel_factory
from pandas.core.panel import Panel
-from pandas.core import panelnd
-Panel4D = panelnd.create_nd_panel_factory(
+Panel4D = create_nd_panel_factory(
klass_name = 'Panel4D',
axis_orders = [ 'labels','items','major_axis','minor_axis'],
axis_slices = { 'labels' : 'labels', 'items' : 'items',
diff --git a/pandas/core/panelnd.py b/pandas/core/panelnd.py
index 22f6dac6b640c..b7d2e29a3c79d 100644
--- a/pandas/core/panelnd.py
+++ b/pandas/core/panelnd.py
@@ -1,7 +1,5 @@
""" Factory methods to create N-D panels """
-import pandas
-from pandas.core.panel import Panel
import pandas.lib as lib
def create_nd_panel_factory(klass_name, axis_orders, axis_slices, slicer, axis_aliases = None, stat_axis = 2):
@@ -27,6 +25,14 @@ def create_nd_panel_factory(klass_name, axis_orders, axis_slices, slicer, axis_a
"""
+ # if slicer is a name, get the object
+ if isinstance(slicer,basestring):
+ import pandas
+ try:
+ slicer = getattr(pandas,slicer)
+ except:
+ raise Exception("cannot create this slicer [%s]" % slicer)
+
# build the klass
klass = type(klass_name, (slicer,),{})
@@ -40,6 +46,7 @@ def create_nd_panel_factory(klass_name, axis_orders, axis_slices, slicer, axis_a
klass._default_stat_axis = stat_axis
klass._het_axis = 0
klass._info_axis = axis_orders[klass._het_axis]
+
klass._constructor_sliced = slicer
# add the axes
@@ -96,49 +103,13 @@ def _combine_with_constructor(self, other, func):
klass._combine_with_constructor = _combine_with_constructor
# set as NonImplemented operations which we don't support
- for f in ['to_frame','to_excel','to_sparse','groupby','join','_get_join_index']:
+ for f in ['to_frame','to_excel','to_sparse','groupby','join','filter','dropna','shift','take']:
def func(self, *args, **kwargs):
raise NotImplementedError
setattr(klass,f,func)
- return klass
-
-
-if __name__ == '__main__':
-
- # create a sample
- from pandas.util import testing
- print pandas.__version__
+ # add the aggregate operations
+ klass._add_aggregate_operations()
- # create a 4D
- Panel4DNew = create_nd_panel_factory(
- klass_name = 'Panel4DNew',
- axis_orders = ['labels1','items1','major_axis','minor_axis'],
- axis_slices = { 'items1' : 'items', 'major_axis' : 'major_axis', 'minor_axis' : 'minor_axis' },
- slicer = Panel,
- axis_aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
- stat_axis = 2)
-
- p4dn = Panel4DNew(dict(L1 = testing.makePanel(), L2 = testing.makePanel()))
- print "creating a 4-D Panel"
- print p4dn, "\n"
-
- # create a 5D
- Panel5DNew = create_nd_panel_factory(
- klass_name = 'Panel5DNew',
- axis_orders = [ 'cool1', 'labels1','items1','major_axis','minor_axis'],
- axis_slices = { 'labels1' : 'labels1', 'items1' : 'items', 'major_axis' : 'major_axis', 'minor_axis' : 'minor_axis' },
- slicer = Panel4DNew,
- axis_aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
- stat_axis = 2)
-
- p5dn = Panel5DNew(dict(C1 = p4dn))
-
- print "creating a 5-D Panel"
- print p5dn, "\n"
-
- print "Slicing p5dn"
- print p5dn.ix['C1',:,:,0:3,:], "\n"
+ return klass
- print "Transposing p5dn"
- print p5dn.transpose(1,2,3,4,0), "\n"
diff --git a/pandas/tests/test_panel4d.py b/pandas/tests/test_panel4d.py
index bdfc4933f31ab..39d7d5fd08245 100644
--- a/pandas/tests/test_panel4d.py
+++ b/pandas/tests/test_panel4d.py
@@ -890,11 +890,38 @@ def test_to_frame_mixed(self):
# self.assertEqual(wp['bool'].values.dtype, np.bool_)
# assert_frame_equal(wp['bool'], panel['bool'])
+ def test_update(self):
+
+ p4d = Panel4D([[[[1.5, np.nan, 3.],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.]],
+ [[1.5, np.nan, 3.],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.]]]])
+
+ other = Panel4D([[[[3.6, 2., np.nan]],
+ [[np.nan, np.nan, 7]]]])
+
+ p4d.update(other)
+
+ expected = Panel4D([[[[3.6, 2, 3.],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.]],
+ [[1.5, np.nan, 7],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.],
+ [1.5, np.nan, 3.]]]])
+
+ assert_panel4d_equal(p4d, expected)
+
def test_filter(self):
- pass
+ raise nose.SkipTest
def test_apply(self):
- pass
+ raise nose.SkipTest
def test_compound(self):
raise nose.SkipTest
diff --git a/pandas/tests/test_panelnd.py b/pandas/tests/test_panelnd.py
index 0d8a8c2023014..6cf0afe11c7e2 100644
--- a/pandas/tests/test_panelnd.py
+++ b/pandas/tests/test_panelnd.py
@@ -36,6 +36,32 @@ def test_4d_construction(self):
p4d = Panel4D(dict(L1 = tm.makePanel(), L2 = tm.makePanel()))
+ def test_4d_construction_alt(self):
+
+ # create a 4D
+ Panel4D = panelnd.create_nd_panel_factory(
+ klass_name = 'Panel4D',
+ axis_orders = ['labels','items','major_axis','minor_axis'],
+ axis_slices = { 'items' : 'items', 'major_axis' : 'major_axis', 'minor_axis' : 'minor_axis' },
+ slicer = 'Panel',
+ axis_aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
+ stat_axis = 2)
+
+ p4d = Panel4D(dict(L1 = tm.makePanel(), L2 = tm.makePanel()))
+
+ def test_4d_construction_error(self):
+
+ # create a 4D
+ self.assertRaises(Exception,
+ panelnd.create_nd_panel_factory,
+ klass_name = 'Panel4D',
+ axis_orders = ['labels','items','major_axis','minor_axis'],
+ axis_slices = { 'items' : 'items', 'major_axis' : 'major_axis', 'minor_axis' : 'minor_axis' },
+ slicer = 'foo',
+ axis_aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
+ stat_axis = 2)
+
+
def test_5d_construction(self):
# create a 4D
| added generic docstrings for sub-classes so that they show the correct object type/axes for that class
update/keys methods corrected for sub-classes
more tests
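The `_add_aggregate_operations` pattern in the diff — rebuilding the docstring from class attributes and attaching the generated methods at class-setup time — can be sketched in plain Python. The minimal classes below are hypothetical stand-ins for Panel and its sub-classes:

```python
# Minimal sketch of the _add_aggregate_operations pattern: the docstring is
# regenerated per class, so each sub-class reports its own axis order.
class Base(object):
    _AXIS_ORDERS = ["items", "major_axis", "minor_axis"]

    @classmethod
    def _add_aggregate_operations(cls):
        doc = ("Return %(desc)s over requested axis\n"
               "axis : {" + ", ".join(cls._AXIS_ORDERS) + "}")

        def make_stat(name, desc):
            def f(self, axis=0):
                return (name, axis)  # stand-in for the real reduction
            f.__name__ = name
            f.__doc__ = doc % {"desc": desc}
            return f

        cls.sum = make_stat("sum", "sum")
        cls.mean = make_stat("mean", "mean")

class Sub(Base):
    _AXIS_ORDERS = ["labels", "items", "major_axis", "minor_axis"]

Base._add_aggregate_operations()
Sub._add_aggregate_operations()
```

Because the hook runs once per class, `Sub.sum.__doc__` lists four axes while `Base.sum.__doc__` lists three — the same effect the PR achieves for Panel4D via `klass._add_aggregate_operations()` in the factory.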
| https://api.github.com/repos/pandas-dev/pandas/pulls/2473 | 2012-12-10T15:49:57Z | 2012-12-11T20:20:50Z | null | 2014-06-29T00:33:46Z |
BUG: DatetimeIndex.append does not preserve timezone #2260 | diff --git a/pandas/io/tests/test_excel.py b/pandas/io/tests/test_excel.py
index 50a03950cdd20..44801f6aae8a2 100644
--- a/pandas/io/tests/test_excel.py
+++ b/pandas/io/tests/test_excel.py
@@ -476,18 +476,41 @@ def _check_extension_indexlabels(self, ext):
self.assertEqual(frame.index.names, recons.index.names)
#test index_labels in same row as column names
- self.frame.to_excel('/tmp/tests.xls', 'test1',
+ path = '%s.xls' % tm.rands(10)
+ self.frame.to_excel(path, 'test1',
cols=['A', 'B', 'C', 'D'], index=False)
#take 'A' and 'B' as indexes (they are in same row as cols 'C', 'D')
df = self.frame.copy()
df = df.set_index(['A', 'B'])
- reader = ExcelFile('/tmp/tests.xls')
+ reader = ExcelFile(path)
recons = reader.parse('test1', index_col=[0, 1])
tm.assert_frame_equal(df, recons)
os.remove(path)
+ def test_excel_roundtrip_indexname(self):
+ _skip_if_no_xlrd()
+ _skip_if_no_xlwt()
+
+ path = '%s.xls' % tm.rands(10)
+
+ df = DataFrame(np.random.randn(10, 4))
+ df.index.name = 'foo'
+
+ df.to_excel(path)
+
+ xf = ExcelFile(path)
+ result = xf.parse(xf.sheet_names[0], index_col=0)
+
+ tm.assert_frame_equal(result, df)
+ self.assertEqual(result.index.name, 'foo')
+
+ try:
+ os.remove(path)
+ except os.error:
+ pass
+
def test_excel_roundtrip_datetime(self):
_skip_if_no_xlrd()
_skip_if_no_xlwt()
@@ -787,4 +810,3 @@ def test_to_excel_header_styling_xlsx(self):
if __name__ == '__main__':
nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
exit=False)
-
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index dcbcf79ceb479..13833b459fa40 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -605,37 +605,6 @@ def summary(self, name=None):
return result
- def append(self, other):
- """
- Append a collection of Index options together
-
- Parameters
- ----------
- other : Index or list/tuple of indices
-
- Returns
- -------
- appended : Index
- """
- name = self.name
- to_concat = [self]
-
- if isinstance(other, (list, tuple)):
- to_concat = to_concat + list(other)
- else:
- to_concat.append(other)
-
- for obj in to_concat:
- if isinstance(obj, Index) and obj.name != name:
- name = None
- break
-
- to_concat = self._ensure_compat_concat(to_concat)
- to_concat = [x.values if isinstance(x, Index) else x
- for x in to_concat]
-
- return Index(com._concat_compat(to_concat), name=name)
-
def get_duplicates(self):
values = Index.get_duplicates(self)
return DatetimeIndex(values)
@@ -864,6 +833,36 @@ def union_many(self, others):
this.offset = to_offset(this.inferred_freq)
return this
+ def append(self, other):
+ """
+ Append a collection of Index options together
+
+ Parameters
+ ----------
+ other : Index or list/tuple of indices
+
+ Returns
+ -------
+ appended : Index
+ """
+ name = self.name
+ to_concat = [self]
+
+ if isinstance(other, (list, tuple)):
+ to_concat = to_concat + list(other)
+ else:
+ to_concat.append(other)
+
+ for obj in to_concat:
+ if isinstance(obj, Index) and obj.name != name:
+ name = None
+ break
+
+ to_concat = self._ensure_compat_concat(to_concat)
+ to_concat, factory = _process_concat_data(to_concat, name)
+
+ return factory(com._concat_compat(to_concat))
+
def join(self, other, how='left', level=None, return_indexers=False):
"""
See Index.join
@@ -1633,3 +1632,33 @@ def _in_range(start, end, rng_start, rng_end):
def _time_to_micros(time):
seconds = time.hour * 60 * 60 + 60 * time.minute + time.second
return 1000000 * seconds + time.microsecond
+
+def _process_concat_data(to_concat, name):
+ klass = Index
+ kwargs = {}
+
+ all_dti = True
+ need_utc_convert = False
+ tz = None
+ for x in to_concat:
+ if not isinstance(x, DatetimeIndex):
+ all_dti = False
+ else:
+ if tz is None:
+ tz = x.tz
+ elif x.tz != tz:
+ need_utc_convert = True
+ tz = 'UTC'
+
+ if need_utc_convert:
+ to_concat = [x.tz_convert('UTC') for x in to_concat]
+
+ if all_dti:
+ klass = DatetimeIndex
+ kwargs = {'tz' : tz}
+
+ to_concat = [x.values if isinstance(x, Index) else x
+ for x in to_concat]
+
+ factory_func = lambda x: klass(x, name=name, **kwargs)
+ return to_concat, factory_func
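The policy in `_process_concat_data` above — keep the shared timezone when every input agrees, otherwise convert the whole result to UTC — can be sketched with stdlib datetimes. Fixed-offset `timezone` objects stand in for the pytz zones; this illustrates the policy, not the pandas implementation:

```python
from datetime import datetime, timedelta, timezone

# Fixed offsets as stand-ins for US/Eastern and US/Central (sketch only).
EASTERN = timezone(timedelta(hours=-5), "EST")
CENTRAL = timezone(timedelta(hours=-6), "CST")

def concat_aware(datetime_lists):
    """Concatenate lists of tz-aware datetimes.

    If all inputs share one tzinfo, keep it; on any mismatch, convert
    everything to UTC -- mirroring the DatetimeIndex.append policy above.
    """
    tz = None
    mixed = False
    for lst in datetime_lists:
        for dt in lst:
            if tz is None:
                tz = dt.tzinfo
            elif dt.tzinfo != tz:
                mixed = True
    out = [dt for lst in datetime_lists for dt in lst]
    if mixed:
        out = [dt.astimezone(timezone.utc) for dt in out]
    return out

same = concat_aware([[datetime(2011, 1, 1, 1, tzinfo=EASTERN)],
                     [datetime(2011, 1, 1, 2, tzinfo=EASTERN)]])
mixed = concat_aware([[datetime(2011, 1, 1, 1, tzinfo=EASTERN)],
                      [datetime(2011, 1, 1, 2, tzinfo=CENTRAL)]])
```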
diff --git a/pandas/tseries/tests/test_timezones.py b/pandas/tseries/tests/test_timezones.py
index 120a1414b9851..5c113b843cae0 100644
--- a/pandas/tseries/tests/test_timezones.py
+++ b/pandas/tseries/tests/test_timezones.py
@@ -668,6 +668,35 @@ def test_align_aware(self):
self.assertEqual(df1.index.tz, new1.index.tz)
self.assertEqual(df2.index.tz, new2.index.tz)
+ def test_append_aware(self):
+ rng1 = date_range('1/1/2011 01:00', periods=1, freq='H',
+ tz='US/Eastern')
+ rng2 = date_range('1/1/2011 02:00', periods=1, freq='H',
+ tz='US/Eastern')
+ ts1 = Series(np.random.randn(len(rng1)), index=rng1)
+ ts2 = Series(np.random.randn(len(rng2)), index=rng2)
+ ts_result = ts1.append(ts2)
+ self.assertEqual(ts_result.index.tz, rng1.tz)
+
+ rng1 = date_range('1/1/2011 01:00', periods=1, freq='H',
+ tz='UTC')
+ rng2 = date_range('1/1/2011 02:00', periods=1, freq='H',
+ tz='UTC')
+ ts1 = Series(np.random.randn(len(rng1)), index=rng1)
+ ts2 = Series(np.random.randn(len(rng2)), index=rng2)
+ ts_result = ts1.append(ts2)
+ utc = rng1.tz
+ self.assertEqual(utc, ts_result.index.tz)
+
+ rng1 = date_range('1/1/2011 01:00', periods=1, freq='H',
+ tz='US/Eastern')
+ rng2 = date_range('1/1/2011 02:00', periods=1, freq='H',
+ tz='US/Central')
+ ts1 = Series(np.random.randn(len(rng1)), index=rng1)
+ ts2 = Series(np.random.randn(len(rng2)), index=rng2)
+ ts_result = ts1.append(ts2)
+ self.assertEqual(utc, ts_result.index.tz)
+
def test_equal_join_ensure_utc(self):
rng = date_range('1/1/2011', periods=10, freq='H', tz='US/Eastern')
ts = Series(np.random.randn(len(rng)), index=rng)
diff --git a/vb_suite/timeseries.py b/vb_suite/timeseries.py
index 7f3c022f45f63..a501e0e86331e 100644
--- a/vb_suite/timeseries.py
+++ b/vb_suite/timeseries.py
@@ -175,3 +175,16 @@ def date_range(start=None, end=None, periods=None, freq=None):
datetimeindex_normalize = \
Benchmark('rng.normalize()', setup,
start_date=datetime(2012, 9, 1))
+
+setup = common_setup + """
+from pandas.tseries.offsets import Second
+s1 = date_range('1/1/2000', periods=100, freq='S')
+curr = s1[-1]
+slst = []
+for i in range(100):
+    slst.append(date_range(curr + Second(), periods=100, freq='S'))
+ curr = slst[-1][-1]
+"""
+
+dti_append_tz = \
+ Benchmark('s1.append(slst)', setup, start_date=datetime(2012, 9 ,1))
| https://api.github.com/repos/pandas-dev/pandas/pulls/2470 | 2012-12-10T15:23:16Z | 2012-12-10T15:32:57Z | null | 2012-12-10T15:32:57Z | |
BUG: DataFrame.reset_index loses timezone information | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index f09f9cb6fc72b..392fad952d5a7 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -35,6 +35,7 @@
from pandas.util.decorators import deprecate, Appender, Substitution
from pandas.tseries.period import PeriodIndex
+from pandas.tseries.index import DatetimeIndex
import pandas.core.algorithms as algos
import pandas.core.datetools as datetools
@@ -2813,6 +2814,9 @@ def _maybe_cast(values):
name = tuple(name_lst)
if isinstance(self.index, PeriodIndex):
values = self.index.asobject
+ elif (isinstance(self.index, DatetimeIndex) and
+ self.index.tz is not None):
+ values = self.index.asobject
else:
values = self.index.values
new_obj.insert(0, name, _maybe_cast(values))
diff --git a/pandas/tseries/tests/test_timezones.py b/pandas/tseries/tests/test_timezones.py
index 120a1414b9851..432122f372e45 100644
--- a/pandas/tseries/tests/test_timezones.py
+++ b/pandas/tseries/tests/test_timezones.py
@@ -529,6 +529,14 @@ def test_frame_from_records_utc(self):
# it works
DataFrame.from_records([rec], index='begin_time')
+ def test_frame_reset_index(self):
+ dr = date_range('2012-06-02', periods=10, tz='US/Eastern')
+ df = DataFrame(np.random.randn(len(dr)), dr)
+ roundtripped = df.reset_index().set_index('index')
+ xp = df.index.tz
+ rs = roundtripped.index.tz
+ self.assertEquals(xp, rs)
+
def test_dateutil_tzoffset_support(self):
from dateutil.tz import tzoffset
values = [188.5, 328.25]
diff --git a/vb_suite/timeseries.py b/vb_suite/timeseries.py
index 7f3c022f45f63..863fdf47e4f22 100644
--- a/vb_suite/timeseries.py
+++ b/vb_suite/timeseries.py
@@ -175,3 +175,19 @@ def date_range(start=None, end=None, periods=None, freq=None):
datetimeindex_normalize = \
Benchmark('rng.normalize()', setup,
start_date=datetime(2012, 9, 1))
+
+setup = common_setup + """
+rng = date_range('1/1/2000', periods=100000, freq='H')
+df = DataFrame(np.random.randn(len(rng), 2), rng)
+"""
+
+dti_reset_index = \
+ Benchmark('df.reset_index()', setup, start_date=datetime(2012,9,1))
+
+setup = common_setup + """
+rng = date_range('1/1/2000', periods=100000, freq='H', tz='US/Eastern')
+df = DataFrame(np.random.randn(len(rng), 2), rng)
+"""
+
+dti_reset_index_tz = \
+ Benchmark('df.reset_index()', setup, start_date=datetime(2012,9,1))
| #2262
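The fix above routes tz-aware indexes through `asobject` (boxed datetime objects) rather than `.values` (raw datetime64 data), because dropping to the raw array discards the timezone. The principle can be illustrated with stdlib datetimes — a sketch of the idea, not the pandas code path:

```python
from datetime import datetime, timedelta, timezone

EASTERN = timezone(timedelta(hours=-5), "EST")  # fixed offset for the sketch
index = [datetime(2012, 6, 2, h, tzinfo=EASTERN) for h in range(3)]

# ".values"-style path: coerce to raw naive UTC timestamps -- tz is lost
# and cannot be recovered from the numbers alone.
raw_column = [dt.astimezone(timezone.utc).replace(tzinfo=None) for dt in index]

# "asobject"-style path: keep the boxed aware objects -- tz survives the
# reset_index/set_index round trip.
object_column = list(index)
```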
| https://api.github.com/repos/pandas-dev/pandas/pulls/2468 | 2012-12-10T07:09:50Z | 2012-12-10T15:37:20Z | null | 2014-06-25T00:23:20Z |
BUG: DataFrame.filter(like=*) should treat non-string values as strings | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index f09f9cb6fc72b..d9def9387eacf 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2886,7 +2886,7 @@ def filter(self, items=None, like=None, regex=None):
if items is not None:
return self.reindex(columns=[r for r in items if r in self])
elif like:
- return self.select(lambda x: like in x, axis=1)
+ return self.select(lambda x: like in str(x), axis=1)
elif regex:
matcher = re.compile(regex)
return self.select(lambda x: matcher.search(x) is not None, axis=1)
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 931d559b51595..a0136129f06df 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -5888,6 +5888,11 @@ def test_filter(self):
self.assertEqual(len(filtered.columns), 2)
self.assert_('AA' in filtered)
+ # like with ints in column names
+ df = DataFrame(0., index=[0,1,2], columns=[0,1,'_A','_B'])
+ filtered = df.filter(like='_')
+ self.assertEqual(len(filtered.columns), 2)
+
# pass in None
self.assertRaises(Exception, self.frame.filter, items=None)
| ...(issue #2464)
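The one-line fix above coerces each column label with `str()` before the substring test, so non-string labels (e.g. integers) no longer break `filter(like=...)`. The behavior can be sketched without pandas:

```python
def filter_like(columns, like):
    """Select labels whose string form contains `like` -- a plain-Python
    sketch of DataFrame.filter(like=...) after the fix above."""
    return [c for c in columns if like in str(c)]

# Mixed int/str labels, as in the new test case in the diff.
selected = filter_like([0, 1, "_A", "_B"], like="_")
```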
| https://api.github.com/repos/pandas-dev/pandas/pulls/2467 | 2012-12-09T16:58:30Z | 2012-12-10T15:47:38Z | null | 2014-06-15T21:22:44Z |
DataFrame ctor should respect col ordering in OrderedDict | diff --git a/RELEASE.rst b/RELEASE.rst
index d90b54fa4cf64..b3cb6cba50fd1 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -106,6 +106,8 @@ pandas 0.10.0
(#2412)
- Added boolean comparison operators to Panel
- Enable ``Series.str.strip/lstrip/rstrip`` methods to take an argument (#2411)
+ - The DataFrame ctor now respects column ordering when given
+ an OrderedDict (#2455)
**Bug fixes**
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 19a3e87e8dca0..f09f9cb6fc72b 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -467,6 +467,7 @@ def _init_dict(self, data, index, columns, dtype=None):
Segregate Series based on type and coerce into matrices.
Needs to handle a lot of exceptional cases.
"""
+ from pandas.util.compat import OrderedDict
if dtype is not None:
dtype = np.dtype(dtype)
@@ -503,7 +504,10 @@ def _init_dict(self, data, index, columns, dtype=None):
data_names.append(k)
arrays.append(v)
else:
- columns = data_names = Index(_try_sort(data.keys()))
+ keys = data.keys()
+ if not isinstance(data, OrderedDict):
+ keys = _try_sort(data.keys())
+ columns = data_names = Index(keys)
arrays = [data[k] for k in columns]
return _arrays_to_mgr(arrays, data_names, index, columns,
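The constructor change above keeps insertion order for an `OrderedDict` and falls back to sorted keys for any other mapping. The key-selection logic can be sketched on its own (`sorted` stands in for pandas' `_try_sort`):

```python
from collections import OrderedDict

def column_order(data):
    """Mirror the _init_dict key handling above: an OrderedDict keeps its
    insertion order, any other mapping gets sorted keys."""
    keys = list(data.keys())
    if not isinstance(data, OrderedDict):
        keys = sorted(keys)
    return keys

ordered = column_order(OrderedDict([("B", 1), ("A", 2)]))
plain = column_order({"B": 1, "A": 2})
```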
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 439f15b9906e5..931d559b51595 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -1798,6 +1798,16 @@ def test_is_mixed_type(self):
self.assert_(not self.frame._is_mixed_type)
self.assert_(self.mixed_frame._is_mixed_type)
+ def test_constructor_ordereddict(self):
+ from pandas.util.compat import OrderedDict
+ import random
+ nitems = 100
+ nums = range(nitems)
+ random.shuffle(nums)
+ expected=['A%d' %i for i in nums]
+ df=DataFrame(OrderedDict(zip(expected,[[0]]*nitems)))
+ self.assertEqual(expected,list(df.columns))
+
def test_constructor_dict(self):
frame = DataFrame({'col1' : self.ts1,
'col2' : self.ts2})
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index 3a3d833e282a6..d252b8eb640b8 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -317,6 +317,7 @@ def test_groups(self):
self.assert_((self.df.ix[v]['B'] == k[1]).all())
def test_aggregate_str_func(self):
+ from pandas.util.compat import OrderedDict
def _check_results(grouped):
# single series
result = grouped['A'].agg('std')
@@ -329,10 +330,10 @@ def _check_results(grouped):
assert_frame_equal(result, expected)
# group frame by function dict
- result = grouped.agg({'A' : 'var', 'B' : 'std', 'C' : 'mean'})
- expected = DataFrame({'A' : grouped['A'].var(),
- 'B' : grouped['B'].std(),
- 'C' : grouped['C'].mean()})
+ result = grouped.agg(OrderedDict([['A' , 'var'], ['B' , 'std'], ['C' , 'mean']]))
+ expected = DataFrame(OrderedDict([['A', grouped['A'].var()],
+ ['B', grouped['B'].std()],
+ ['C', grouped['C'].mean()]]))
assert_frame_equal(result, expected)
by_weekday = self.tsframe.groupby(lambda x: x.weekday())
@@ -810,6 +811,7 @@ def _check_op(op):
assert_series_equal(result, expected)
def test_groupby_as_index_agg(self):
+ from pandas.util.compat import OrderedDict
grouped = self.df.groupby('A', as_index=False)
# single-key
@@ -818,7 +820,7 @@ def test_groupby_as_index_agg(self):
expected = grouped.mean()
assert_frame_equal(result, expected)
- result2 = grouped.agg({'C' : np.mean, 'D' : np.sum})
+ result2 = grouped.agg(OrderedDict([['C' , np.mean], ['D' , np.sum]]))
expected2 = grouped.mean()
expected2['D'] = grouped.sum()['D']
assert_frame_equal(result2, expected2)
@@ -837,7 +839,7 @@ def test_groupby_as_index_agg(self):
expected = grouped.mean()
assert_frame_equal(result, expected)
- result2 = grouped.agg({'C' : np.mean, 'D' : np.sum})
+ result2 = grouped.agg(OrderedDict([['C' , np.mean], ['D' , np.sum]]))
expected2 = grouped.mean()
expected2['D'] = grouped.sum()['D']
assert_frame_equal(result2, expected2)
@@ -1883,8 +1885,8 @@ def test_more_flexible_frame_multi_function(self):
grouped = self.df.groupby('A')
- exmean = grouped.agg({'C' : np.mean, 'D' : np.mean})
- exstd = grouped.agg({'C' : np.std, 'D' : np.std})
+ exmean = grouped.agg(OrderedDict([['C' , np.mean], ['D' , np.mean]]))
+ exstd = grouped.agg(OrderedDict([['C' , np.std], ['D' , np.std]]))
expected = concat([exmean, exstd], keys=['mean', 'std'], axis=1)
expected = expected.swaplevel(0, 1, axis=1).sortlevel(0, axis=1)
@@ -1895,39 +1897,43 @@ def test_more_flexible_frame_multi_function(self):
assert_frame_equal(result, expected)
# be careful
- result = grouped.aggregate({'C' : np.mean,
- 'D' : [np.mean, np.std]})
- expected = grouped.aggregate({'C' : [np.mean],
- 'D' : [np.mean, np.std]})
+ result = grouped.aggregate(OrderedDict([['C' , np.mean],
+ [ 'D' , [np.mean, np.std]]]))
+ expected = grouped.aggregate(OrderedDict([['C' , np.mean],
+ [ 'D' , [np.mean, np.std]]]))
assert_frame_equal(result, expected)
def foo(x): return np.mean(x)
def bar(x): return np.std(x, ddof=1)
- result = grouped.aggregate({'C' : np.mean,
- 'D' : OrderedDict([['foo', np.mean],
- ['bar', np.std]])})
+ d=OrderedDict([['C' , np.mean],
+ ['D', OrderedDict([['foo', np.mean],
+ ['bar', np.std]])]])
+ result = grouped.aggregate(d)
- expected = grouped.aggregate({'C' : [np.mean],
- 'D' : [foo, bar]})
+ d = OrderedDict([['C' , [np.mean]],['D' , [foo, bar]]])
+ expected = grouped.aggregate(d)
assert_frame_equal(result, expected)
def test_multi_function_flexible_mix(self):
# GH #1268
-
+ from pandas.util.compat import OrderedDict
grouped = self.df.groupby('A')
- result = grouped.aggregate({'C' : {'foo' : 'mean',
- 'bar' : 'std'},
- 'D' : 'sum'})
- result2 = grouped.aggregate({'C' : {'foo' : 'mean',
- 'bar' : 'std'},
- 'D' : ['sum']})
-
- expected = grouped.aggregate({'C' : {'foo' : 'mean',
- 'bar' : 'std'},
- 'D' : {'sum' : 'sum'}})
+ d = OrderedDict([['C' , OrderedDict([['foo' , 'mean'],
+ ['bar' , 'std']])],
+ ['D' , 'sum']])
+ result = grouped.aggregate(d)
+ d2 = OrderedDict([['C' , OrderedDict([['foo' , 'mean'],
+ ['bar' , 'std']])],
+ ['D' ,[ 'sum']]])
+ result2 = grouped.aggregate(d2)
+
+ d3 = OrderedDict([['C' , OrderedDict([['foo' , 'mean'],
+ ['bar' , 'std']])],
+ ['D' , {'sum':'sum'}]])
+ expected = grouped.aggregate(d3)
assert_frame_equal(result, expected)
assert_frame_equal(result2, expected)
| https://api.github.com/repos/pandas-dev/pandas/pulls/2456 | 2012-12-08T18:03:54Z | 2012-12-08T22:35:54Z | 2012-12-08T22:35:54Z | 2012-12-08T22:36:04Z | |
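The tests in the diff above replace plain-dict aggregation specs with `OrderedDict` because dict iteration order was arbitrary before Python 3.7, so the resulting column order (and hence `assert_frame_equal`) was nondeterministic. A minimal sketch of the difference:

```python
from collections import OrderedDict

# Before Python 3.7, plain dict iteration order was arbitrary, so an
# aggregation spec like {'C': 'mean', 'D': 'sum'} could produce result
# columns in either order. OrderedDict pins the order explicitly.
spec = OrderedDict([('C', 'mean'), ('D', 'sum')])
assert list(spec.keys()) == ['C', 'D']       # insertion order preserved
assert list(spec.values()) == ['mean', 'sum']
```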
DOC: revised whatsnew 0.10 docs - only tiny change | diff --git a/doc/source/io.rst b/doc/source/io.rst
index a81899078f3ae..f81037975591f 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -795,7 +795,7 @@ Objects can be written to the file just like adding key-value pairs to a dict:
major_axis=date_range('1/1/2000', periods=5),
minor_axis=['A', 'B', 'C', 'D'])
- # store.put('s', s') is an equivalent method
+ # store.put('s', s) is an equivalent method
store['s'] = s
store['df'] = df
@@ -853,15 +853,19 @@ after data is already in the table (this may become automatic in the future or a
store = HDFStore('store.h5')
df1 = df[0:4]
df2 = df[4:]
+
+ # append data (creates a table automatically)
store.append('df', df1)
store.append('df', df2)
store
+ # select the entire object
store.select('df')
# the type of stored data
store.handle.root.df._v_attrs.pandas_type
+ # create an index
store.create_table_index('df')
store.handle.root.df.table
diff --git a/doc/source/v0.10.0.txt b/doc/source/v0.10.0.txt
index 2c909a1b26f2d..c72a70534a3a2 100644
--- a/doc/source/v0.10.0.txt
+++ b/doc/source/v0.10.0.txt
@@ -90,7 +90,6 @@ Updated PyTables Support
- performance improvments on table writing
- support for arbitrarily indexed dimensions
- - ``SparseSeries`` now has a ``density`` property (#2384)
**Bug Fixes**
@@ -111,8 +110,6 @@ Updated PyTables Support
import os
os.remove('store.h5')
- - Implement ``value_vars`` in ``melt`` and add ``melt`` to pandas namespace (GH2412_)
-
N Dimensional Panels (Experimental)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -130,8 +127,6 @@ Adding experimental support for Panel4D and factory functions to create n-dimens
-- Enable ``Series.str.strip/lstrip/rstrip`` methods to take an argument (GH2411_)
-
API changes
~~~~~~~~~~~
@@ -142,14 +137,14 @@ API changes
def f(x):
return Series([ x, x**2 ], index = ['x', 'x^s'])
s = Series(np.random.rand(5))
- s
+ s
s.apply(f)
- This is conceptually similar to the following.
+ - Enable ``Series.str.strip/lstrip/rstrip`` methods to take an argument (GH2411_)
- .. ipython:: python
+ - Implement ``value_vars`` in ``melt`` and add ``melt`` to pandas namespace (GH2412_)
- concat([ f(y) for x, y in s.iteritems() ], axis=1).T
+ - ``SparseSeries`` now has a ``density`` property (#2384)
- New API functions for working with pandas options (GH2097_):
| typos in io.rst
moved some api changes docs to the proper place
| https://api.github.com/repos/pandas-dev/pandas/pulls/2454 | 2012-12-08T02:49:18Z | 2012-12-09T18:18:41Z | null | 2012-12-09T18:18:41Z |
BLD: update setup.py numpy version deps for python3 | diff --git a/setup.py b/setup.py
index a29f24ee4cf2b..e5234a177e046 100755
--- a/setup.py
+++ b/setup.py
@@ -51,11 +51,15 @@
setuptools_kwargs = {}
if sys.version_info[0] >= 3:
+ min_numpy_ver = 1.6
+ if sys.version_info[1] >= 3: # 3.3 needs numpy 1.7+
+ min_numpy_ver = 1.7
+
setuptools_kwargs = {'use_2to3': True,
'zip_safe': False,
'install_requires': ['python-dateutil >= 2',
'pytz',
- 'numpy >= 1.4'],
+ 'numpy >= %s' % min_numpy_ver],
'use_2to3_exclude_fixers': ['lib2to3.fixes.fix_next',
],
}
| https://api.github.com/repos/pandas-dev/pandas/pulls/2451 | 2012-12-07T23:09:32Z | 2012-12-07T23:44:57Z | null | 2012-12-07T23:44:57Z | |
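The version gate in the patch can be sketched standalone. The version numbers come from the diff itself; the helper name and the testable shape are ours:

```python
import sys

# Mirror of the setup.py logic above: Python 3 needs NumPy >= 1.6,
# and Python 3.3+ needs NumPy >= 1.7 (the patch keeps 1.4 for Python 2).
def min_numpy_version(version_info=sys.version_info):
    if version_info[0] >= 3:
        if version_info[1] >= 3:
            return '1.7'
        return '1.6'
    return '1.4'  # the pre-patch floor, unchanged for Python 2

assert min_numpy_version((3, 3, 0)) == '1.7'
assert min_numpy_version((3, 2, 0)) == '1.6'
assert min_numpy_version((2, 7, 0)) == '1.4'
```

Note the original uses bare floats (`min_numpy_ver = 1.6`) and relies on `'numpy >= %s' % min_numpy_ver` to render them; strings avoid that indirection here.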
ENH: Add batteries-included index and DataFrame generators to tm | diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index e00573c61485e..66d8f98781a19 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -26,6 +26,7 @@
from pandas.tseries.period import PeriodIndex
Index = index.Index
+MultiIndex = index.MultiIndex
Series = series.Series
DataFrame = frame.DataFrame
Panel = panel.Panel
@@ -334,6 +335,165 @@ def makePanel():
def makePanel4D():
return Panel4D(dict(l1 = makePanel(), l2 = makePanel(), l3 = makePanel()))
+def makeCustomIndex(nentries, nlevels, prefix='#', names=False, ndupe_l=None,
+ idx_type=None):
+ """Create an index/multindex with given dimensions, levels, names, etc'
+
+ nentries - number of entries in index
+ nlevels - number of levels (> 1 produces multindex)
+ prefix - a string prefix for labels
+ names - (Optional), bool or list of strings. if True will use default names,
+ if false will use no names, if a list is given, the name of each level
+ in the index will be taken from the list.
+ ndupe_l - (Optional), list of ints, the number of rows for which the
+ label will repeated at the corresponding level, you can specify just
+ the first few, the rest will use the default ndupe_l of 1.
+ idx_type - "i"/"f"/"s"/"u"/"dt".
+ If idx_type is not None, `idx_nlevels` must be 1.
+ "i"/"f" creates an integer/float index,
+ "s"/"u" creates a string/unicode index
+ "dt" create a datetime index.
+
+ if unspecified, string labels will be generated.
+ """
+
+ from collections import Counter
+ if ndupe_l is None:
+ ndupe_l = [1] * nentries
+ assert len(ndupe_l) <= nentries
+ assert names is None or names == False or names == True or len(names) \
+ == nlevels
+ assert idx_type is None or \
+ (idx_type in ('i', 'f', 's', 'u', 'dt') and nlevels == 1)
+
+ if names == True:
+ # build default names
+ names = [prefix + str(i) for i in range(nlevels)]
+ if names == False:
+ # pass None to index constructor for no name
+ names = None
+
+ # make singelton case uniform
+ if isinstance(names, basestring) and nlevels == 1:
+ names = [names]
+
+ # specific 1D index type requested?
+ idx_func = dict(i=makeIntIndex, f=makeFloatIndex, s=makeStringIndex,
+ u=makeUnicodeIndex, dt=makeDateIndex).get(idx_type)
+ if idx_func:
+ idx = idx_func(nentries)
+ # but we need to fill in the name
+ if names:
+ idx.name = names[0]
+ return idx
+ elif idx_type is not None:
+ raise ValueError('"%s" is not a legal value for `idx_type`, use '
+ '"i"/"f"/"s"/"u"/"dt".' % idx_type)
+
+ if len(ndupe_l) < nentries:
+ ndupe_l.extend([1] * (nentries - len(ndupe_l)))
+ assert len(ndupe_l) == nentries
+
+ assert all([x > 0 for x in ndupe_l])
+
+ tuples = []
+ for i in range(nlevels):
+ #build a list of lists to create the index from
+ div_factor = nentries // ndupe_l[i] + 1
+ cnt = Counter()
+ for j in range(div_factor):
+ label = prefix + '_l%d_g' % i + str(j)
+ cnt[label] = ndupe_l[i]
+ # cute Counter trick
+ result = list(sorted(cnt.elements()))[:nentries]
+ tuples.append(result)
+
+ tuples=zip(*tuples)
+
+ # convert tuples to index
+ if nentries == 1:
+ index = Index.from_tuples(tuples[0], name=names[0])
+ else:
+ index = MultiIndex.from_tuples(tuples, names=names)
+ return index
+
+def makeCustomDataframe(nrows, ncols, c_idx_names=True, r_idx_names=True,
+ c_idx_nlevels=1, r_idx_nlevels=1, data_gen_f=None,
+ c_ndupe_l=None, r_ndupe_l=None, dtype=None,
+ c_idx_type=None, r_idx_type=None):
+ """
+ nrows, ncols - number of data rows/cols
+ c_idx_names, idx_names - False/True/list of strings, yields No names ,
+ default names or uses the provided names for the levels of the
+ corresponding index. You can provide a single string when
+ c_idx_nlevels ==1.
+ c_idx_nlevels - number of levels in columns index. > 1 will yield MultiIndex
+ r_idx_nlevels - number of levels in rows index. > 1 will yield MultiIndex
+ data_gen_f - a function f(row,col) which return the data value at that position,
+ the default generator used yields values of the form "RxCy" based on position.
+ c_ndupe_l, r_ndupe_l - list of integers, determines the number
+ of duplicates for each label at a given level of the corresponding index.
+ The default `None` value produces a multiplicity of 1 across
+ all levels, i.e. a unique index. Will accept a partial list of
+ length N < idx_nlevels, for just the first N levels. If ndupe
+ doesn't divide nrows/ncol, the last label might have lower multiplicity.
+ dtype - passed to the DataFrame constructor as is, in case you wish to
+ have more control in conjuncion with a custom `data_gen_f`
+ r_idx_type, c_idx_type - "i"/"f"/"s"/"u"/"dt".
+ If idx_type is not None, `idx_nlevels` must be 1.
+ "i"/"f" creates an integer/float index,
+ "s"/"u" creates a string/unicode index
+ "dt" create a datetime index.
+
+ if unspecified, string labels will be generated.
+
+ Examples:
+
+ # 5 row, 3 columns, default names on both, single index on both axis
+ >> makeCustomDataframe(5,3)
+
+ # make the data a random int between 1 and 100
+ >> mkdf(5,3,data_gen_f=lambda r,c:randint(1,100))
+
+ # 2-level multiindex on rows with each label duplicated twice on first level,
+ # default names on both axis, single index on both axis
+ >> a=makeCustomDataframe(5,3,r_idx_nlevels=2,r_ndupe_l=[2])
+
+ # DatetimeIndex on row, index with unicode labels on columns
+ # no names on either axis
+ >> a=makeCustomDataframe(5,3,c_idx_names=False,r_idx_names=False,
+ r_idx_type="dt",c_idx_type="u")
+
+ # 4-level multindex on rows with names provided, 2-level multindex
+ # on columns with default labels and default names.
+ >> a=makeCustomDataframe(5,3,r_idx_nlevels=4,
+ r_idx_names=["FEE","FI","FO","FAM"],
+ c_idx_nlevels=2)
+
+ """
+
+ assert c_idx_nlevels > 0
+ assert r_idx_nlevels > 0
+ assert r_idx_type is None or \
+ (r_idx_type in ('i', 'f', 's', 'u', 'dt') and r_idx_nlevels == 1)
+ assert c_idx_type is None or \
+ (c_idx_type in ('i', 'f', 's', 'u', 'dt') and c_idx_nlevels == 1)
+
+ columns = makeCustomIndex(ncols, nlevels=c_idx_nlevels, prefix='C',
+ names=c_idx_names, ndupe_l=c_ndupe_l,
+ idx_type=c_idx_type)
+ index = makeCustomIndex(nrows, nlevels=r_idx_nlevels, prefix='R',
+ names=r_idx_names, ndupe_l=r_ndupe_l,
+ idx_type=r_idx_type)
+
+ # by default, generate data based on location
+ if data_gen_f is None:
+ data_gen_f = lambda r, c: "R%dC%d" % (r,c)
+
+ data = [[data_gen_f(r, c) for c in range(ncols)] for r in range(nrows)]
+
+ return DataFrame(data, index, columns, dtype=dtype)
+
def add_nans(panel):
I, J, N = panel.shape
for i, item in enumerate(panel.items):
| https://api.github.com/repos/pandas-dev/pandas/pulls/2446 | 2012-12-07T00:10:41Z | 2012-12-07T15:40:29Z | null | 2012-12-07T15:40:34Z | |
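The "cute Counter trick" inside `makeCustomIndex` above works because `Counter.elements()` yields each key repeated by its count, which is exactly how `ndupe_l` duplicates are produced per level. A standalone sketch with the same label scheme:

```python
from collections import Counter

# Build 6 labels for level 0 with each label duplicated twice,
# following the makeCustomIndex loop above.
nentries, ndupe = 6, 2
cnt = Counter()
for j in range(nentries // ndupe + 1):
    cnt['R_l0_g%d' % j] = ndupe

# elements() repeats each label by its count; sort and truncate.
labels = sorted(cnt.elements())[:nentries]
assert labels == ['R_l0_g0', 'R_l0_g0',
                  'R_l0_g1', 'R_l0_g1',
                  'R_l0_g2', 'R_l0_g2']
```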
to_execl xlwt fails on large files | diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 44c7be853b6a0..292602feb01dc 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -21,6 +21,8 @@
import pandas.lib as lib
import pandas._parser as _parser
from pandas.tseries.period import Period
+import json
+
class DateConversionError(Exception):
pass
@@ -1929,7 +1931,7 @@ class CellStyleConverter(object):
"""
@staticmethod
- def to_xls(style_dict):
+ def to_xls(style_dict, num_format_str=None):
"""
converts a style_dict to an xlwt style object
Parameters
@@ -1971,9 +1973,13 @@ def style_to_xlwt(item, firstlevel=True, field_sep=',', line_sep=';'):
if style_dict:
xlwt_stylestr = style_to_xlwt(style_dict)
- return xlwt.easyxf(xlwt_stylestr, field_sep=',', line_sep=';')
+ style = xlwt.easyxf(xlwt_stylestr, field_sep=',', line_sep=';')
else:
- return xlwt.XFStyle()
+ style = xlwt.XFStyle()
+ if num_format_str is not None:
+ style.num_format_str = num_format_str
+
+ return style
@staticmethod
def to_xlsx(style_dict):
@@ -2111,13 +2117,26 @@ def _writecells_xls(self, cells, sheet_name, startrow, startcol):
wks = self.book.add_sheet(sheet_name)
self.sheets[sheet_name] = wks
+ style_dict = {}
+
for cell in cells:
val = _conv_value(cell.val)
- style = CellStyleConverter.to_xls(cell.style)
- if isinstance(val, datetime.datetime):
- style.num_format_str = "YYYY-MM-DD HH:MM:SS"
- elif isinstance(val, datetime.date):
- style.num_format_str = "YYYY-MM-DD"
+
+ num_format_str = None
+ if isinstance(cell.val, datetime.datetime):
+ num_format_str = "YYYY-MM-DD HH:MM:SS"
+ if isinstance(cell.val, datetime.date):
+ num_format_str = "YYYY-MM-DD"
+
+ stylekey = json.dumps(cell.style)
+ if num_format_str:
+ stylekey += num_format_str
+
+ if stylekey in style_dict:
+ style = style_dict[stylekey]
+ else:
+ style = CellStyleConverter.to_xls(cell.style, num_format_str)
+ style_dict[stylekey] = style
if cell.mergestart is not None and cell.mergeend is not None:
wks.write_merge(startrow + cell.row,
| ``` python
import pandas as pd
df = pd.DataFrame([tuple([i for i in range(200)]) for i in range(10000)], columns=["%s" % i for i in range(200)])
df.to_excel('test.xls')
python testlarge.py
Traceback (most recent call last):
File "testlarge.py", line 9, in <module>
df.to_excel('test.xls')
File "/Volumes/Locodrive/Dev/.virtualenvs/pandasdev/lib/python2.7/site-packages/pandas-0.10.0.dev_67ef67f-py2.7-macosx-10.8-x86_64.egg/pandas/core/frame.py", line 1445, in to_excel
startrow=startrow, startcol=startcol)
File "/Volumes/Locodrive/Dev/.virtualenvs/pandasdev/lib/python2.7/site-packages/pandas-0.10.0.dev_67ef67f-py2.7-macosx-10.8-x86_64.egg/pandas/io/parsers.py", line 2068, in write_cells
self._writecells_xls(cells, sheet_name, startrow, startcol)
File "/Volumes/Locodrive/Dev/.virtualenvs/pandasdev/lib/python2.7/site-packages/pandas-0.10.0.dev_67ef67f-py2.7-macosx-10.8-x86_64.egg/pandas/io/parsers.py", line 2131, in _writecells_xls
val, style)
File "/Volumes/Locodrive/Dev/.virtualenvs/pandasdev/lib/python2.7/site-packages/xlwt/Worksheet.py", line 1032, in write
self.row(r).write(c, label, style)
File "/Volumes/Locodrive/Dev/.virtualenvs/pandasdev/lib/python2.7/site-packages/xlwt/Row.py", line 236, in write
style_index = self.__parent_wb.add_style(style)
File "/Volumes/Locodrive/Dev/.virtualenvs/pandasdev/lib/python2.7/site-packages/xlwt/Workbook.py", line 303, in add_style
return self.__styles.add(style)
File "/Volumes/Locodrive/Dev/.virtualenvs/pandasdev/lib/python2.7/site-packages/xlwt/Style.py", line 90, in add
return self._add_style(style)[1]
File "/Volumes/Locodrive/Dev/.virtualenvs/pandasdev/lib/python2.7/site-packages/xlwt/Style.py", line 149, in _add_style
raise ValueError("More than 4094 XFs (styles)")
ValueError: More than 4094 XFs (styles)
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/2445 | 2012-12-06T23:47:23Z | 2012-12-09T18:31:02Z | null | 2014-06-18T17:17:37Z |
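The patch above works around xlwt's hard cap of 4094 registered XF styles (the `ValueError` in the traceback) by reusing one style object per distinct style/number-format pair instead of building a fresh one for every cell. The caching pattern, sketched with a stand-in for `CellStyleConverter.to_xls` (any hashable token works for the demo):

```python
import json

def make_style(style_dict, num_format_str):
    # Stand-in for CellStyleConverter.to_xls; the real code builds an
    # xlwt style object here.
    return ('style', json.dumps(style_dict), num_format_str)

cache = {}

def cached_style(style_dict, num_format_str=None):
    # Same key scheme as the patch: serialized style dict, plus the
    # number format when present.
    key = json.dumps(style_dict) + (num_format_str or '')
    if key not in cache:
        cache[key] = make_style(style_dict, num_format_str)
    return cache[key]

a = cached_style({'font': {'bold': True}}, 'YYYY-MM-DD')
b = cached_style({'font': {'bold': True}}, 'YYYY-MM-DD')
assert a is b          # the same object is reused across cells
assert len(cache) == 1  # so xlwt registers only one XF per unique style
```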
repr for wider DataFrames | diff --git a/pandas/core/common.py b/pandas/core/common.py
index d63029b447705..87308b20377b3 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -1173,7 +1173,7 @@ def in_interactive_session():
returns True if running under python/ipython interactive shell
"""
import __main__ as main
- return not hasattr(main, '__file__')
+ return not hasattr(main, '__file__') or get_option('test.interactive')
def in_qtconsole():
"""
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index a6739a1c450e9..e38d75a6e0a7b 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -77,6 +77,20 @@
these are generally strings meant to be displayed on the console.
"""
+pc_expand_repr_doc="""
+: boolean
+ Default False
+ Whether to print out the full DataFrame repr for wide DataFrames
+ across multiple lines.
+ If False, the summary representation is shown.
+"""
+
+pc_line_width_doc="""
+: int
+ Default 80
+ When printing wide DataFrames, this is the width of each line.
+"""
+
with cf.config_prefix('print'):
cf.register_option('precision', 7, pc_precision_doc, validator=is_int)
cf.register_option('digits', 7, validator=is_int)
@@ -99,3 +113,13 @@
validator=is_bool)
cf.register_option('encoding', detect_console_encoding(), pc_encoding_doc,
validator=is_text)
+ cf.register_option('expand_frame_repr', False, pc_expand_repr_doc)
+ cf.register_option('line_width', 80, pc_line_width_doc)
+
+tc_interactive_doc="""
+: boolean
+ Default False
+ Whether to simulate interactive mode for purposes of testing
+"""
+with cf.config_prefix('test'):
+ cf.register_option('interactive', False, tc_interactive_doc)
diff --git a/pandas/core/format.py b/pandas/core/format.py
index 554ac41d7bf97..97aa23191adaa 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -186,7 +186,7 @@ class DataFrameFormatter(object):
def __init__(self, frame, buf=None, columns=None, col_space=None,
header=True, index=True, na_rep='NaN', formatters=None,
justify=None, float_format=None, sparsify=None,
- index_names=True, **kwds):
+ index_names=True, line_width=None, **kwds):
self.frame = frame
self.buf = buf if buf is not None else StringIO()
self.show_index_names = index_names
@@ -202,6 +202,7 @@ def __init__(self, frame, buf=None, columns=None, col_space=None,
self.col_space = col_space
self.header = header
self.index = index
+ self.line_width = line_width
if justify is None:
self.justify = get_option("print.colheader_justify")
@@ -282,10 +283,36 @@ def to_string(self, force_unicode=None):
text = info_line
else:
strcols = self._to_str_columns()
- text = adjoin(1, *strcols)
+ if self.line_width is None:
+ text = adjoin(1, *strcols)
+ else:
+ text = self._join_multiline(*strcols)
self.buf.writelines(text)
+ def _join_multiline(self, *strcols):
+ lwidth = self.line_width
+ strcols = list(strcols)
+ if self.index:
+ idx = strcols.pop(0)
+ lwidth -= np.array([len(x) for x in idx]).max()
+
+ col_widths = [np.array([len(x) for x in col]).max()
+ if len(col) > 0 else 0
+ for col in strcols]
+ col_bins = _binify(col_widths, lwidth)
+
+ str_lst = []
+ st = 0
+ for ed in col_bins:
+ row = strcols[st:ed]
+ row.insert(0, idx)
+ if ed <= len(strcols):
+ row.append([' \\'] + [' '] * (len(self.frame) - 1))
+ str_lst.append(adjoin(1, *row))
+ st = ed
+ return '\n\n'.join(str_lst)
+
def to_latex(self, force_unicode=None, column_format=None):
"""
Render a DataFrame to a LaTeX tabular environment output.
@@ -1376,6 +1403,17 @@ def _put_lines(buf, lines):
lines = [unicode(x) for x in lines]
buf.write('\n'.join(lines))
+def _binify(cols, width):
+ bins = []
+ curr_width = 0
+ for i, w in enumerate(cols):
+ curr_width += w
+ if curr_width + 2 > width:
+ bins.append(i)
+ curr_width = w
+ elif i + 1== len(cols):
+ bins.append(i + 1)
+ return bins
if __name__ == '__main__':
arr = np.array([746.03, 0.00, 5620.00, 1592.36])
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 1a5f582ebc142..c61c721bf07ef 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -592,10 +592,11 @@ def _need_info_repr_(self):
max_rows = (terminal_height if get_option("print.max_rows") == 0
else get_option("print.max_rows"))
max_columns = get_option("print.max_columns")
+ expand_repr = get_option("print.expand_frame_repr")
if max_columns > 0:
if len(self.index) <= max_rows and \
- len(self.columns) <= max_columns:
+ (len(self.columns) <= max_columns or expand_repr):
return False
else:
return True
@@ -603,14 +604,16 @@ def _need_info_repr_(self):
# save us
if (len(self.index) > max_rows or
(com.in_interactive_session() and
- len(self.columns) > terminal_width // 2)):
+ len(self.columns) > terminal_width // 2 and
+ not expand_repr)):
return True
else:
buf = StringIO()
self.to_string(buf=buf)
value = buf.getvalue()
- if (max([len(l) for l in value.split('\n')]) > terminal_width and
- com.in_interactive_session()):
+ if (max([len(l) for l in value.split('\n')]) > terminal_width
+ and com.in_interactive_session()
+ and not expand_repr):
return True
else:
return False
@@ -646,13 +649,45 @@ def __unicode__(self):
if self._need_info_repr_():
self.info(buf=buf, verbose=self._verbose_info)
else:
- self.to_string(buf=buf)
+ is_wide = self._need_wide_repr()
+ line_width = None
+ if is_wide:
+ line_width = get_option('print.line_width')
+ self.to_string(buf=buf, line_width=line_width)
value = buf.getvalue()
assert type(value) == unicode
return value
+ def _need_wide_repr(self):
+ if com.in_qtconsole():
+ terminal_width, terminal_height = 100, 100
+ else:
+ terminal_width, terminal_height = get_terminal_size()
+ max_columns = get_option("print.max_columns")
+ expand_repr = get_option("print.expand_frame_repr")
+
+ if max_columns > 0:
+ if len(self.columns) > max_columns and expand_repr:
+ return True
+ else:
+ # save us
+ if (com.in_interactive_session() and
+ len(self.columns) > terminal_width // 2 and
+ expand_repr):
+ return True
+ else:
+ buf = StringIO()
+ self.to_string(buf=buf)
+ value = buf.getvalue()
+ if (max([len(l) for l in value.split('\n')]) > terminal_width
+ and com.in_interactive_session()
+ and expand_repr):
+ return True
+
+ return False
+
def __repr__(self):
"""
Return a string representation for a particular DataFrame
@@ -1450,7 +1485,8 @@ def to_excel(self, excel_writer, sheet_name='sheet1', na_rep='',
def to_string(self, buf=None, columns=None, col_space=None, colSpace=None,
header=True, index=True, na_rep='NaN', formatters=None,
float_format=None, sparsify=None, nanRep=None,
- index_names=True, justify=None, force_unicode=None):
+ index_names=True, justify=None, force_unicode=None,
+ line_width=None):
"""
Render a DataFrame to a console-friendly tabular output.
"""
@@ -1476,7 +1512,8 @@ def to_string(self, buf=None, columns=None, col_space=None, colSpace=None,
sparsify=sparsify,
justify=justify,
index_names=index_names,
- header=header, index=index)
+ header=header, index=index,
+ line_width=line_width)
formatter.to_string()
if buf is None:
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 54ad50573b216..e85eaefc7867a 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -400,6 +400,110 @@ def test_frame_info_encoding(self):
repr(df.T)
fmt.set_printoptions(max_rows=200)
+ def test_wide_repr(self):
+ set_option('test.interactive', True)
+ col = lambda l, k: [tm.rands(k) for _ in xrange(l)]
+ df = DataFrame([col(20, 25) for _ in range(10)])
+ rep_str = repr(df)
+ set_option('print.expand_frame_repr', True)
+ wide_repr = repr(df)
+ self.assert_(rep_str != wide_repr)
+
+ set_option('print.line_width', 120)
+ wider_repr = repr(df)
+ self.assert_(len(wider_repr) < len(wide_repr))
+
+ set_option('print.expand_frame_repr', False)
+ set_option('test.interactive', False)
+ set_option('print.line_width', 80)
+
+ def test_wide_repr_named(self):
+ set_option('test.interactive', True)
+ col = lambda l, k: [tm.rands(k) for _ in xrange(l)]
+ df = DataFrame([col(20, 25) for _ in range(10)])
+ df.index.name = 'DataFrame Index'
+ rep_str = repr(df)
+ set_option('print.expand_frame_repr', True)
+ wide_repr = repr(df)
+ self.assert_(rep_str != wide_repr)
+
+ set_option('print.line_width', 120)
+ wider_repr = repr(df)
+ self.assert_(len(wider_repr) < len(wide_repr))
+
+ for line in wide_repr.splitlines()[1::13]:
+ self.assert_('DataFrame Index' in line)
+
+ set_option('print.expand_frame_repr', False)
+ set_option('test.interactive', False)
+ set_option('print.line_width', 80)
+
+ def test_wide_repr_multiindex(self):
+ set_option('test.interactive', True)
+ col = lambda l, k: [tm.rands(k) for _ in xrange(l)]
+ midx = pandas.MultiIndex.from_arrays([np.array(col(10, 5)),
+ np.array(col(10, 5))])
+ df = DataFrame([col(20, 25) for _ in range(10)],
+ index=midx)
+ df.index.names = ['Level 0', 'Level 1']
+ rep_str = repr(df)
+ set_option('print.expand_frame_repr', True)
+ wide_repr = repr(df)
+ self.assert_(rep_str != wide_repr)
+
+ set_option('print.line_width', 120)
+ wider_repr = repr(df)
+ self.assert_(len(wider_repr) < len(wide_repr))
+
+ for line in wide_repr.splitlines()[1::13]:
+ self.assert_('Level 0 Level 1' in line)
+
+ set_option('print.expand_frame_repr', False)
+ set_option('test.interactive', False)
+ set_option('print.line_width', 80)
+
+ def test_wide_repr_multiindex_cols(self):
+ set_option('test.interactive', True)
+ col = lambda l, k: [tm.rands(k) for _ in xrange(l)]
+ midx = pandas.MultiIndex.from_arrays([np.array(col(10, 5)),
+ np.array(col(10, 5))])
+ mcols = pandas.MultiIndex.from_arrays([np.array(col(20, 3)),
+ np.array(col(20, 3))])
+ df = DataFrame([col(20, 25) for _ in range(10)],
+ index=midx, columns=mcols)
+ df.index.names = ['Level 0', 'Level 1']
+ rep_str = repr(df)
+ set_option('print.expand_frame_repr', True)
+ wide_repr = repr(df)
+ self.assert_(rep_str != wide_repr)
+
+ set_option('print.line_width', 120)
+ wider_repr = repr(df)
+ self.assert_(len(wider_repr) < len(wide_repr))
+
+ self.assert_(len(wide_repr.splitlines()) == 14 * 10 - 1)
+
+ set_option('print.expand_frame_repr', False)
+ set_option('test.interactive', False)
+ set_option('print.line_width', 80)
+
+ def test_wide_repr_unicode(self):
+ set_option('test.interactive', True)
+ col = lambda l, k: [tm.randu(k) for _ in xrange(l)]
+ df = DataFrame([col(20, 25) for _ in range(10)])
+ rep_str = repr(df)
+ set_option('print.expand_frame_repr', True)
+ wide_repr = repr(df)
+ self.assert_(rep_str != wide_repr)
+
+ set_option('print.line_width', 120)
+ wider_repr = repr(df)
+ self.assert_(len(wider_repr) < len(wide_repr))
+
+ set_option('print.expand_frame_repr', False)
+ set_option('test.interactive', False)
+ set_option('print.line_width', 80)
+
def test_to_string(self):
from pandas import read_table
import re
| By default this has no effect;
set_option('print.expand_frame_repr', True) makes wide DataFrames print in full across multiple lines instead of falling back to the info repr.
The line width defaults to 80 characters and can be changed with set_option('print.line_width', 120).
Also adds a set_option('test.interactive', False) option to simulate interactive mode for testing purposes.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2436 | 2012-12-06T01:22:47Z | 2012-12-07T23:09:01Z | 2012-12-07T23:09:01Z | 2014-06-14T05:45:22Z |
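The `_binify` helper added to `format.py` is the core of the multi-line layout: given per-column string widths, it returns the end index of each group of columns that fits within the configured line width. Reproduced standalone (same logic as the diff):

```python
def binify(cols, width):
    # cols: rendered width of each column; width: available line width.
    # Returns the exclusive end index of each chunk of columns.
    bins = []
    curr_width = 0
    for i, w in enumerate(cols):
        curr_width += w
        if curr_width + 2 > width:   # +2 leaves room for separators
            bins.append(i)
            curr_width = w           # current column starts the next chunk
        elif i + 1 == len(cols):
            bins.append(i + 1)       # close the final chunk
    return bins

# Four 10-wide columns on 25-character lines: two columns per chunk.
assert binify([10, 10, 10, 10], 25) == [2, 4]
# Everything fits on one line: a single chunk covering all columns.
assert binify([5, 5, 5], 80) == [3]
```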
Fix issue compiling with MSVC9 | diff --git a/pandas/src/parser/tokenizer.h b/pandas/src/parser/tokenizer.h
index a68909abb6c89..f3a8fcae4cdbd 100644
--- a/pandas/src/parser/tokenizer.h
+++ b/pandas/src/parser/tokenizer.h
@@ -24,7 +24,7 @@ See LICENSE for the license
#define ERROR_MINUS_SIGN 4
#if defined(_MSC_VER)
-#include "../ms_stdint.h"
+#include "../headers/ms_stdint.h"
#else
#include <stdint.h>
#endif
diff --git a/pandas/src/period.h b/pandas/src/period.h
index 1c58eaed530d2..4ba92bf8fde41 100644
--- a/pandas/src/period.h
+++ b/pandas/src/period.h
@@ -9,7 +9,7 @@
#include <Python.h>
#include "numpy/ndarraytypes.h"
-#include "stdint.h"
+#include "headers/stdint.h"
#include "limits.h"
/*
| https://api.github.com/repos/pandas-dev/pandas/pulls/2432 | 2012-12-05T02:06:18Z | 2012-12-06T22:20:11Z | null | 2012-12-06T22:20:11Z | |
Update to gotchas page for indexing | diff --git a/doc/source/gotchas.rst b/doc/source/gotchas.rst
index 0f7be37c5e83c..c951e5f18a112 100644
--- a/doc/source/gotchas.rst
+++ b/doc/source/gotchas.rst
@@ -238,6 +238,26 @@ be possible to insert some logic to check whether a passed sequence is all
contained in the index, that logic would exact a very high cost in large data
sets.
+Reindex potentially changes underlying Series dtype
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The use of ``reindex_like`` can potentially change the dtype of a ``Series``.
+
+.. code-block:: python
+
+ series = pandas.Series([1, 2, 3])
+ x = pandas.Series([True])
+ x.dtype
+ x = pandas.Series([True]).reindex_like(series)
+ x.dtype
+
+This is because ``reindex_like`` silently inserts ``NaNs`` and the ``dtype``
+changes accordingly. This can cause some issues when using ``numpy`` ``ufuncs``
+such as ``numpy.logical_and``.
+
+See the `this old issue <https://github.com/pydata/pandas/issues/2388>`__ for a more
+detailed discussion.
+
Timestamp limitations
---------------------
| Just a small update to the gotchas page documenting a potential
gotcha when using the reindex_like functionality.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2429 | 2012-12-04T22:32:26Z | 2012-12-07T15:41:51Z | null | 2012-12-07T15:41:51Z |
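The gotcha being documented is directly reproducible; a runnable version of the snippet added to `gotchas.rst` (the behavior still holds in later pandas releases):

```python
import pandas as pd

series = pd.Series([1, 2, 3])
x = pd.Series([True])
assert x.dtype == bool

# reindex_like silently inserts NaN for the missing labels, and since
# a bool array cannot hold NaN, the Series is upcast to object.
y = x.reindex_like(series)
assert y.dtype == object
assert y.isna().sum() == 2   # rows 1 and 2 were filled with NaN
```

This upcast is why `numpy` ufuncs such as `np.logical_and` can misbehave on the reindexed result.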
ENH: value_vars for melt | diff --git a/RELEASE.rst b/RELEASE.rst
index 9d62d9bd978ee..704eeebc7cdc8 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -82,6 +82,7 @@ pandas 0.10.0
- ``min_itemsize`` parameter can be specified in ``HDFStore table`` creation
- Indexing support in ``HDFStore tables`` (#698)
- Add `line_terminator` option to DataFrame.to_csv (#2383)
+ - Implement ``value_vars`` in ``melt`` and add ``melt`` to pandas namespace (#2412)
**Bug fixes**
diff --git a/doc/source/v0.10.0.txt b/doc/source/v0.10.0.txt
index c4da3d1f6b3cc..d5b710a1aefe3 100644
--- a/doc/source/v0.10.0.txt
+++ b/doc/source/v0.10.0.txt
@@ -74,7 +74,7 @@ Updated PyTables Support
# remove all nodes under this level
store.remove('food')
- store
+ store
- added mixed-dtype support!
@@ -110,6 +110,8 @@ Updated PyTables Support
import os
os.remove('store.h5')
+ - Implement ``value_vars`` in ``melt`` and add ``melt`` to pandas namespace (GH2412_)
+
API changes
~~~~~~~~~~~
@@ -156,3 +158,4 @@ on GitHub for a complete list.
.. _GH2316: https://github.com/pydata/pandas/issues/2316
.. _GH2097: https://github.com/pydata/pandas/issues/2097
.. _GH2224: https://github.com/pydata/pandas/issues/2224
+.. _GH2412: https://github.com/pydata/pandas/issues/2412
diff --git a/pandas/__init__.py b/pandas/__init__.py
index e868516fcac4d..dcecc1ccd408d 100644
--- a/pandas/__init__.py
+++ b/pandas/__init__.py
@@ -40,3 +40,4 @@
from pandas.tools.pivot import pivot_table, crosstab
from pandas.tools.plotting import scatter_matrix, plot_params
from pandas.tools.tile import cut, qcut
+from pandas.core.reshape import melt
diff --git a/pandas/core/reshape.py b/pandas/core/reshape.py
index 5b82c13d9c7fa..99c5034dd03b9 100644
--- a/pandas/core/reshape.py
+++ b/pandas/core/reshape.py
@@ -528,29 +528,34 @@ def melt(frame, id_vars=None, value_vars=None):
b 3 4
c 5 6
- >>> melt(df, id_vars=['A'])
+ >>> melt(df, id_vars=['A'], value_vars=['B'])
A variable value
a B 1
b B 3
c B 5
- a C 2
- b C 4
- c C 6
"""
# TODO: what about the existing index?
+ if id_vars is not None:
+ if not isinstance(id_vars, (tuple, list, np.ndarray)):
+ id_vars = [id_vars]
+ else:
+ id_vars = list(id_vars)
+ else:
+ id_vars = []
+
+ if value_vars is not None:
+ if not isinstance(value_vars, (tuple, list, np.ndarray)):
+ value_vars = [value_vars]
+ frame = frame.ix[:, id_vars + value_vars]
+ else:
+ frame = frame.copy()
N, K = frame.shape
+ K -= len(id_vars)
mdata = {}
-
- if id_vars is not None:
- id_vars = list(id_vars)
- frame = frame.copy()
- K -= len(id_vars)
- for col in id_vars:
- mdata[col] = np.tile(frame.pop(col).values, K)
- else:
- id_vars = []
+ for col in id_vars:
+ mdata[col] = np.tile(frame.pop(col).values, K)
mcolumns = id_vars + ['variable', 'value']
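The `id_vars`/`value_vars` normalization above is what lets `melt` restrict output to selected value columns. A minimal sketch with modern `pandas.melt` (this API landed in this PR; column names mirror the docstring example):

```python
import pandas as pd

# Illustrative frame matching the docstring example in the diff
df = pd.DataFrame({'A': ['a', 'b', 'c'], 'B': [1, 3, 5], 'C': [2, 4, 6]})

# value_vars restricts the molten output to the listed columns,
# so column 'C' is dropped entirely
molten = pd.melt(df, id_vars=['A'], value_vars=['B'])
print(molten)
```

Passing a scalar (`value_vars='B'`) is normalized to a one-element list by the code above, so both spellings produce the same frame.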
diff --git a/pandas/tests/test_reshape.py b/pandas/tests/test_reshape.py
index ee6bf7bb078fb..a921518c25c95 100644
--- a/pandas/tests/test_reshape.py
+++ b/pandas/tests/test_reshape.py
@@ -27,6 +27,10 @@ def test_melt():
molten1 = melt(df)
molten2 = melt(df, id_vars=['id1'])
molten3 = melt(df, id_vars=['id1', 'id2'])
+ molten4 = melt(df, id_vars=['id1', 'id2'],
+ value_vars='A')
+ molten5 = melt(df, id_vars=['id1', 'id2'],
+ value_vars=['A', 'B'])
def test_convert_dummies():
df = DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
diff --git a/vb_suite/reshape.py b/vb_suite/reshape.py
index 7c94bcfe44247..a27073c392efc 100644
--- a/vb_suite/reshape.py
+++ b/vb_suite/reshape.py
@@ -49,3 +49,14 @@ def unpivot(frame):
unstack_sparse_keyspace = Benchmark('idf.unstack()', setup,
start_date=datetime(2011, 10, 1))
+# Melt
+
+setup = common_setup + """
+from pandas.core.reshape import melt
+df = DataFrame(np.random.randn(10000, 3), columns=['A', 'B', 'C'])
+df['id1'] = np.random.randint(0, 10, 10000)
+df['id2'] = np.random.randint(100, 1000, 10000)
+"""
+
+melt_dataframe = Benchmark("melt(df, id_vars=['id1', 'id2'])", setup,
+ start_date=datetime(2012, 8, 1))
| #2412
| https://api.github.com/repos/pandas-dev/pandas/pulls/2415 | 2012-12-03T18:31:30Z | 2012-12-07T15:59:46Z | null | 2014-06-27T17:54:46Z |
ENH: args for strip/lstrip/rstrip #2411 | diff --git a/.gitignore b/.gitignore
index eb26b3cedc724..fbf28ba50f2c5 100644
--- a/.gitignore
+++ b/.gitignore
@@ -25,4 +25,5 @@ pandas.egg-info
*\#*\#
.tox
pandas/io/*.dat
-pandas/io/*.json
\ No newline at end of file
+pandas/io/*.json
+*.log
\ No newline at end of file
diff --git a/RELEASE.rst b/RELEASE.rst
index 9d62d9bd978ee..aa1cbfbaecf6a 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -82,6 +82,7 @@ pandas 0.10.0
- ``min_itemsize`` parameter can be specified in ``HDFStore table`` creation
- Indexing support in ``HDFStore tables`` (#698)
- Add `line_terminator` option to DataFrame.to_csv (#2383)
+ - Enable ``Series.str.strip/lstrip/rstrip`` methods to take an argument (#2411)
**Bug fixes**
diff --git a/doc/source/v0.10.0.txt b/doc/source/v0.10.0.txt
index c4da3d1f6b3cc..0448e2acb6f78 100644
--- a/doc/source/v0.10.0.txt
+++ b/doc/source/v0.10.0.txt
@@ -74,7 +74,7 @@ Updated PyTables Support
# remove all nodes under this level
store.remove('food')
- store
+ store
- added mixed-dtype support!
@@ -110,6 +110,8 @@ Updated PyTables Support
import os
os.remove('store.h5')
+- Enable ``Series.str.strip/lstrip/rstrip`` methods to take an argument (GH2411_)
+
API changes
~~~~~~~~~~~
@@ -156,3 +158,4 @@ on GitHub for a complete list.
.. _GH2316: https://github.com/pydata/pandas/issues/2316
.. _GH2097: https://github.com/pydata/pandas/issues/2097
.. _GH2224: https://github.com/pydata/pandas/issues/2224
+.. _GH2411: https://github.com/pydata/pandas/issues/2411
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 469804ecca67e..1493f41358ea9 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -451,39 +451,51 @@ def str_slice_replace(arr, start=None, stop=None, repl=None):
raise NotImplementedError
-def str_strip(arr):
+def str_strip(arr, to_strip=None):
"""
Strip whitespace (including newlines) from each string in the array
+ Parameters
+ ----------
+ to_strip : str or unicode
+
Returns
-------
stripped : array
"""
- return _na_map(lambda x: x.strip(), arr)
+ return _na_map(lambda x: x.strip(to_strip), arr)
-def str_lstrip(arr):
+def str_lstrip(arr, to_strip=None):
"""
Strip whitespace (including newlines) from left side of each string in the
array
+ Parameters
+ ----------
+ to_strip : str or unicode
+
Returns
-------
stripped : array
"""
- return _na_map(lambda x: x.lstrip(), arr)
+ return _na_map(lambda x: x.lstrip(to_strip), arr)
-def str_rstrip(arr):
+def str_rstrip(arr, to_strip=None):
"""
Strip whitespace (including newlines) from right side of each string in the
array
+ Parameters
+ ----------
+ to_strip : str or unicode
+
Returns
-------
stripped : array
"""
- return _na_map(lambda x: x.rstrip(), arr)
+ return _na_map(lambda x: x.rstrip(to_strip), arr)
def str_wrap(arr, width=80):
@@ -686,6 +698,21 @@ def encode(self, encoding, errors="strict"):
result = str_encode(self.series, encoding, errors)
return self._wrap_result(result)
+ @copy(str_strip)
+ def strip(self, to_strip=None):
+ result = str_strip(self.series, to_strip)
+ return self._wrap_result(result)
+
+ @copy(str_lstrip)
+ def lstrip(self, to_strip=None):
+ result = str_lstrip(self.series, to_strip)
+ return self._wrap_result(result)
+
+ @copy(str_rstrip)
+ def rstrip(self, to_strip=None):
+ result = str_rstrip(self.series, to_strip)
+ return self._wrap_result(result)
+
count = _pat_wrapper(str_count, flags=True)
startswith = _pat_wrapper(str_startswith, na=True)
endswith = _pat_wrapper(str_endswith, na=True)
@@ -693,8 +720,5 @@ def encode(self, encoding, errors="strict"):
match = _pat_wrapper(str_match, flags=True)
len = _noarg_wrapper(str_len)
- strip = _noarg_wrapper(str_strip)
- rstrip = _noarg_wrapper(str_rstrip)
- lstrip = _noarg_wrapper(str_lstrip)
lower = _noarg_wrapper(str_lower)
upper = _noarg_wrapper(str_upper)
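With the wrappers above, the strip family accepts an optional character set, matching Python's built-in `str.strip`. A short sketch of the resulting behavior (values taken from the tests in this PR):

```python
import pandas as pd

s = pd.Series(['xxABCxx', 'xx BNSD', 'LDFJH xx'])

# With an argument, strip removes the given characters instead of whitespace
print(s.str.strip('x'))    # both ends
print(s.str.lstrip('x'))   # left end only
print(s.str.rstrip('x'))   # right end only
```

With no argument the methods still strip whitespace, so existing callers are unaffected.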
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index beebb22c18b86..a9b5b908dd946 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -539,6 +539,7 @@ def test_strip_lstrip_rstrip(self):
exp = Series([' aa', ' bb', NA, 'cc'])
tm.assert_series_equal(result, exp)
+ def test_strip_lstrip_rstrip_mixed(self):
#mixed
mixed = Series([' aa ', NA, ' bb \t\n', True, datetime.today(),
None, 1, 2.])
@@ -564,6 +565,7 @@ def test_strip_lstrip_rstrip(self):
self.assert_(isinstance(rs, Series))
tm.assert_almost_equal(rs, xp)
+ def test_strip_lstrip_rstrip_unicode(self):
#unicode
values = Series([u' aa ', u' bb \n', NA, u'cc '])
@@ -579,6 +581,36 @@ def test_strip_lstrip_rstrip(self):
exp = Series([u' aa', u' bb', NA, u'cc'])
tm.assert_series_equal(result, exp)
+ def test_strip_lstrip_rstrip_args(self):
+ values = Series(['xxABCxx', 'xx BNSD', 'LDFJH xx'])
+
+ rs = values.str.strip('x')
+ xp = Series(['ABC', ' BNSD', 'LDFJH '])
+ assert_series_equal(rs, xp)
+
+ rs = values.str.lstrip('x')
+ xp = Series(['ABCxx', ' BNSD', 'LDFJH xx'])
+ assert_series_equal(rs, xp)
+
+ rs = values.str.rstrip('x')
+ xp = Series(['xxABC', 'xx BNSD', 'LDFJH '])
+ assert_series_equal(rs, xp)
+
+ def test_strip_lstrip_rstrip_args_unicode(self):
+ values = Series([u'xxABCxx', u'xx BNSD', u'LDFJH xx'])
+
+ rs = values.str.strip(u'x')
+ xp = Series(['ABC', ' BNSD', 'LDFJH '])
+ assert_series_equal(rs, xp)
+
+ rs = values.str.lstrip(u'x')
+ xp = Series(['ABCxx', ' BNSD', 'LDFJH xx'])
+ assert_series_equal(rs, xp)
+
+ rs = values.str.rstrip(u'x')
+ xp = Series(['xxABC', 'xx BNSD', 'LDFJH '])
+ assert_series_equal(rs, xp)
+
def test_wrap(self):
pass
| https://api.github.com/repos/pandas-dev/pandas/pulls/2413 | 2012-12-03T17:51:45Z | 2012-12-07T16:28:55Z | null | 2012-12-07T16:28:55Z | |
Resample interval | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 79493b26c0ba6..57914aff31826 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -203,7 +203,7 @@ def between_time(self, start_time, end_time, include_start=True,
raise TypeError('Index must be DatetimeIndex')
def resample(self, rule, how=None, axis=0, fill_method=None,
- closed='right', label='right', convention='start',
+ closed=None, label=None, convention='start',
kind=None, loffset=None, limit=None, base=0):
"""
Convenience method for frequency conversion and resampling of regular
@@ -216,9 +216,9 @@ def resample(self, rule, how=None, axis=0, fill_method=None,
downsampling
fill_method : string, fill_method for upsampling, default None
axis : int, optional, default 0
- closed : {'right', 'left'}, default 'right'
+ closed : {'right', 'left'}, default None
Which side of bin interval is closed
- label : {'right', 'left'}, default 'right'
+ label : {'right', 'left'}, default None
Which bin edge label to label bucket with
convention : {'start', 'end', 's', 'e'}
loffset : timedelta
diff --git a/pandas/tseries/resample.py b/pandas/tseries/resample.py
index d1ce5d8aa1758..39cbb26f4f255 100644
--- a/pandas/tseries/resample.py
+++ b/pandas/tseries/resample.py
@@ -35,11 +35,26 @@ class TimeGrouper(CustomGrouper):
Use begin, end, nperiods to generate intervals that cannot be derived
directly from the associated object
"""
- def __init__(self, freq='Min', closed='right', label='right', how='mean',
+ def __init__(self, freq='Min', closed=None, label=None, how='mean',
nperiods=None, axis=0,
fill_method=None, limit=None, loffset=None, kind=None,
convention=None, base=0):
self.freq = to_offset(freq)
+
+ end_types = set(['M', 'A', 'Q', 'BM', 'BA', 'BQ', 'W'])
+ rule = self.freq.rule_code
+ if (rule in end_types or
+ ('-' in rule and rule[:rule.find('-')] in end_types)):
+ if closed is None:
+ closed = 'right'
+ if label is None:
+ label = 'right'
+ else:
+ if closed is None:
+ closed = 'left'
+ if label is None:
+ label = 'left'
+
self.closed = closed
self.label = label
self.nperiods = nperiods
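The default inference above keys off the rule code: end-anchored frequencies (month end, year end, etc.) default to right-closed/right-labeled bins, everything else to left. A standalone sketch of that classification logic, mirroring the diff:

```python
def default_closed_label(rule_code):
    """Mirror of the TimeGrouper default inference sketched in the diff."""
    end_types = {'M', 'A', 'Q', 'BM', 'BA', 'BQ', 'W'}
    # Anchored variants like 'W-SUN' are classified by their base code
    if (rule_code in end_types or
            ('-' in rule_code and rule_code[:rule_code.find('-')] in end_types)):
        return 'right', 'right'
    return 'left', 'left'

print(default_closed_label('M'))      # end-anchored: ('right', 'right')
print(default_closed_label('W-SUN'))  # anchored weekly: ('right', 'right')
print(default_closed_label('5Min'))   # start-anchored: ('left', 'left')
```

Explicit `closed=`/`label=` arguments still win, since the inference only runs when the caller passes `None`.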
diff --git a/pandas/tseries/tests/test_resample.py b/pandas/tseries/tests/test_resample.py
index 5e954179127bf..dee9a9ed75d75 100644
--- a/pandas/tseries/tests/test_resample.py
+++ b/pandas/tseries/tests/test_resample.py
@@ -16,7 +16,8 @@
import unittest
import nose
-from pandas.util.testing import assert_series_equal, assert_almost_equal
+from pandas.util.testing import (assert_series_equal, assert_almost_equal,
+ assert_frame_equal)
import pandas.util.testing as tm
bday = BDay()
@@ -99,10 +100,11 @@ def test_resample_basic(self):
s = self.series
result = s.resample('5Min', how='last')
- grouper = TimeGrouper(Minute(5), closed='right', label='right')
+ grouper = TimeGrouper(Minute(5), closed='left', label='left')
expect = s.groupby(grouper).agg(lambda x: x[-1])
assert_series_equal(result, expect)
+ def test_resample_basic_from_daily(self):
# from daily
dti = DatetimeIndex(start=datetime(2005,1,1), end=datetime(2005,1,10),
freq='D', name='index')
@@ -150,11 +152,11 @@ def test_resample_basic(self):
# to biz day
result = s.resample('B', how='last')
- self.assertEquals(len(result), 6)
- self.assert_((result.index.dayofweek == [0,1,2,3,4,0]).all())
- self.assertEquals(result.irow(0), s['1/3/2005'])
- self.assertEquals(result.irow(1), s['1/4/2005'])
- self.assertEquals(result.irow(5), s['1/10/2005'])
+ self.assertEquals(len(result), 7)
+ self.assert_((result.index.dayofweek == [4,0,1,2,3,4,0]).all())
+ self.assertEquals(result.irow(0), s['1/2/2005'])
+ self.assertEquals(result.irow(1), s['1/3/2005'])
+ self.assertEquals(result.irow(5), s['1/9/2005'])
self.assert_(result.index.name == 'index')
def test_resample_frame_basic(self):
@@ -234,30 +236,30 @@ def test_upsample_with_limit(self):
def test_resample_ohlc(self):
s = self.series
- grouper = TimeGrouper(Minute(5), closed='right', label='right')
+ grouper = TimeGrouper(Minute(5))
expect = s.groupby(grouper).agg(lambda x: x[-1])
result = s.resample('5Min', how='ohlc')
self.assertEquals(len(result), len(expect))
self.assertEquals(len(result.columns), 4)
- xs = result.irow(-1)
- self.assertEquals(xs['open'], s[-5])
- self.assertEquals(xs['high'], s[-5:].max())
- self.assertEquals(xs['low'], s[-5:].min())
- self.assertEquals(xs['close'], s[-1])
+ xs = result.irow(-2)
+ self.assertEquals(xs['open'], s[-6])
+ self.assertEquals(xs['high'], s[-6:-1].max())
+ self.assertEquals(xs['low'], s[-6:-1].min())
+ self.assertEquals(xs['close'], s[-2])
- xs = result.irow(1)
- self.assertEquals(xs['open'], s[1])
- self.assertEquals(xs['high'], s[1:6].max())
- self.assertEquals(xs['low'], s[1:6].min())
- self.assertEquals(xs['close'], s[5])
+ xs = result.irow(0)
+ self.assertEquals(xs['open'], s[0])
+ self.assertEquals(xs['high'], s[:5].max())
+ self.assertEquals(xs['low'], s[:5].min())
+ self.assertEquals(xs['close'], s[4])
def test_resample_reresample(self):
dti = DatetimeIndex(start=datetime(2005,1,1), end=datetime(2005,1,10),
freq='D')
s = Series(np.random.rand(len(dti)), dti)
- bs = s.resample('B')
+ bs = s.resample('B', closed='right', label='right')
result = bs.resample('8H')
self.assertEquals(len(result), 22)
self.assert_(isinstance(result.index.freq, offsets.DateOffset))
@@ -296,7 +298,8 @@ def _ohlc(group):
freq='10s')
ts = Series(np.random.randn(len(rng)), index=rng)
- resampled = ts.resample('5min', how='ohlc')
+ resampled = ts.resample('5min', how='ohlc', closed='right',
+ label='right')
self.assert_((resampled.ix['1/1/2000 00:00'] == ts[0]).all())
@@ -394,7 +397,7 @@ def test_resample_base(self):
ts = Series(np.random.randn(len(rng)), index=rng)
resampled = ts.resample('5min', base=2)
- exp_rng = date_range('1/1/2000 00:02:00', '1/1/2000 02:02',
+ exp_rng = date_range('12/31/1999 23:57:00', '1/1/2000 01:57',
freq='5min')
self.assert_(resampled.index.equals(exp_rng))
@@ -785,7 +788,7 @@ def test_resample_irregular_sparse(self):
dr = date_range(start='1/1/2012', freq='5min', periods=1000)
s = Series(np.array(100), index=dr)
# subset the data.
- subset = s[:'2012-01-04 07:00']
+ subset = s[:'2012-01-04 06:55']
result = subset.resample('10min', how=len)
expected = s.resample('10min', how=len).ix[result.index]
@@ -828,7 +831,7 @@ def test_resample_tz_localized(self):
tz='Australia/Sydney')
s = Series([1,2], index=idx)
- result = s.resample('D')
+ result = s.resample('D', closed='right', label='right')
ex_index = date_range('2001-09-21', periods=1, freq='D',
tz='Australia/Sydney')
expected = Series([1.5], index=ex_index)
@@ -892,6 +895,33 @@ def test_resample_weekly_bug_1726(self):
# assert_series_equal(result, expected)
+ def test_default_right_closed_label(self):
+ end_freq = ['D', 'Q', 'M', 'D']
+ end_types = ['M', 'A', 'Q', 'W']
+
+ for from_freq, to_freq in zip(end_freq, end_types):
+ idx = DatetimeIndex(start='8/15/2012', periods=100,
+ freq=from_freq)
+ df = DataFrame(np.random.randn(len(idx), 2), idx)
+
+ resampled = df.resample(to_freq)
+ assert_frame_equal(resampled, df.resample(to_freq, closed='right',
+ label='right'))
+
+ def test_default_left_closed_label(self):
+ others = ['MS', 'AS', 'QS', 'D', 'H']
+ others_freq = ['D', 'Q', 'M', 'H', 'T']
+
+ for from_freq, to_freq in zip(others_freq, others):
+ idx = DatetimeIndex(start='8/15/2012', periods=100,
+ freq=from_freq)
+ df = DataFrame(np.random.randn(len(idx), 2), idx)
+
+ resampled = df.resample(to_freq)
+ assert_frame_equal(resampled, df.resample(to_freq, closed='left',
+ label='left'))
+
+
class TestTimeGrouper(unittest.TestCase):
def setUp(self):
| Inference for `closed` and `label` parameter values
| https://api.github.com/repos/pandas-dev/pandas/pulls/2410 | 2012-12-03T14:54:54Z | 2012-12-07T15:44:46Z | 2012-12-07T15:44:46Z | 2014-06-23T06:30:15Z |
Index format | diff --git a/pandas/core/format.py b/pandas/core/format.py
index f62e68a0f18a1..554ac41d7bf97 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -99,6 +99,7 @@ def _get_footer(self):
def _get_formatted_index(self):
index = self.series.index
is_multi = isinstance(index, MultiIndex)
+
if is_multi:
have_header = any(name for name in index.names)
fmt_index = index.format(names=True)
@@ -391,11 +392,13 @@ def _get_formatted_index(self):
show_index_names = self.show_index_names and self.has_index_names
show_col_names = (self.show_index_names and self.has_column_names)
+ fmt = self.formatters.get('__index__', None)
if isinstance(index, MultiIndex):
fmt_index = index.format(sparsify=self.sparsify, adjoin=False,
- names=show_index_names)
+ names=show_index_names,
+ formatter=fmt)
else:
- fmt_index = [index.format(name=show_index_names)]
+ fmt_index = [index.format(name=show_index_names, formatter=fmt)]
adjoined = adjoin(1, *fmt_index).split('\n')
@@ -622,7 +625,7 @@ def _write_regular_rows(self, fmt_values, indent):
if '__index__' in self.fmt.formatters:
f = self.fmt.formatters['__index__']
- index_values = self.frame.index.values.map(f)
+ index_values = self.frame.index.map(f)
else:
index_values = self.frame.index.format()
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 9f0992908a3a7..bc4f72f1449ff 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -421,7 +421,7 @@ def take(self, indexer, axis=0):
taken = self.view(np.ndarray).take(indexer)
return self._constructor(taken, name=self.name)
- def format(self, name=False):
+ def format(self, name=False, formatter=None):
"""
Render a string representation of the Index
"""
@@ -429,7 +429,11 @@ def format(self, name=False):
header = []
if name:
- header.append(com.pprint_thing(self.name) if self.name is not None else '')
+ header.append(com.pprint_thing(self.name)
+ if self.name is not None else '')
+
+ if formatter is not None:
+ return header + list(self.map(formatter))
if self.is_all_dates:
zero_time = time(0, 0)
@@ -1559,14 +1563,14 @@ def get_level_values(self, level):
return unique_vals.take(labels)
def format(self, space=2, sparsify=None, adjoin=True, names=False,
- na_rep='NaN'):
+ na_rep='NaN', formatter=None):
if len(self) == 0:
return []
stringified_levels = []
for lev, lab in zip(self.levels, self.labels):
if len(lev) > 0:
- formatted = lev.take(lab).format()
+ formatted = lev.take(lab).format(formatter=formatter)
else:
# weird all NA case
formatted = [com.pprint_thing(x) for x in com.take_1d(lev.values, lab)]
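The `'__index__'` formatter key above maps a callable over the index labels before rendering. In case that key is unavailable, an equivalent effect can be sketched by mapping the formatter over the index up front (the `f` and its output here mirror the tests added in this PR):

```python
import pandas as pd

df = pd.DataFrame([[0, 1], [2, 3], [4, 5], [6, 7]], columns=['foo', 'bar'])

# Equivalent of formatters={'__index__': f}: apply f to each index
# label before formatting, so rows render as a, b, c, d
f = lambda x: 'abcd'[x]
out = df.set_axis([f(i) for i in df.index]).to_string()
print(out)
```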
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index c93ecb5914ff8..54ad50573b216 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -328,6 +328,46 @@ def test_to_html_multiindex_sparsify(self):
</table>"""
self.assertEquals(result, expected)
+ def test_to_html_index_formatter(self):
+ df = DataFrame([[0, 1], [2, 3], [4, 5], [6, 7]],
+ columns=['foo', None], index=range(4))
+
+ f = lambda x: 'abcd'[x]
+ result = df.to_html(formatters={'__index__': f})
+ expected = """\
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th></th>
+ <th>foo</th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td><strong>a</strong></td>
+ <td> 0</td>
+ <td> 1</td>
+ </tr>
+ <tr>
+ <td><strong>b</strong></td>
+ <td> 2</td>
+ <td> 3</td>
+ </tr>
+ <tr>
+ <td><strong>c</strong></td>
+ <td> 4</td>
+ <td> 5</td>
+ </tr>
+ <tr>
+ <td><strong>d</strong></td>
+ <td> 6</td>
+ <td> 7</td>
+ </tr>
+ </tbody>
+</table>"""
+ self.assertEquals(result, expected)
+
def test_nonunicode_nonascii_alignment(self):
df = DataFrame([["aa\xc3\xa4\xc3\xa4", 1], ["bbbb", 2]])
@@ -539,6 +579,19 @@ def test_to_string_int_formatting(self):
'3 -35')
self.assertEqual(output, expected)
+ def test_to_string_index_formatter(self):
+ df = DataFrame([range(5), range(5, 10), range(10, 15)])
+
+ rs = df.to_string(formatters={'__index__': lambda x: 'abc'[x]})
+
+ xp = """\
+ 0 1 2 3 4
+a 0 1 2 3 4
+b 5 6 7 8 9
+c 10 11 12 13 14\
+"""
+ self.assertEqual(rs, xp)
+
def test_to_string_left_justify_cols(self):
fmt.reset_printoptions()
df = DataFrame({'x' : [3234, 0.253]})
diff --git a/pandas/tests/test_panel4d.py b/pandas/tests/test_panel4d.py
index 18022cf594e63..8340d55604e24 100644
--- a/pandas/tests/test_panel4d.py
+++ b/pandas/tests/test_panel4d.py
@@ -145,6 +145,8 @@ def wrapper(x):
class SafeForSparse(object):
+ _multiprocess_can_split_ = True
+
@classmethod
def assert_panel_equal(cls, x, y):
assert_panel_equal(x, y)
@@ -193,37 +195,6 @@ def test_get_axis_name(self):
self.assertEqual(self.panel4d._get_axis_name(2), 'major_axis')
self.assertEqual(self.panel4d._get_axis_name(3), 'minor_axis')
- #def test_get_plane_axes(self):
- # # what to do here?
-
- # index, columns = self.panel._get_plane_axes('items')
- # index, columns = self.panel._get_plane_axes('major_axis')
- # index, columns = self.panel._get_plane_axes('minor_axis')
- # index, columns = self.panel._get_plane_axes(0)
-
- def test_truncate(self):
- raise nose.SkipTest
-
- #dates = self.panel.major_axis
- #start, end = dates[1], dates[5]
-
- #trunced = self.panel.truncate(start, end, axis='major')
- #expected = self.panel['ItemA'].truncate(start, end)
-
- #assert_frame_equal(trunced['ItemA'], expected)
-
- #trunced = self.panel.truncate(before=start, axis='major')
- #expected = self.panel['ItemA'].truncate(before=start)
-
- #assert_frame_equal(trunced['ItemA'], expected)
-
- #trunced = self.panel.truncate(after=end, axis='major')
- #expected = self.panel['ItemA'].truncate(after=end)
-
- #assert_frame_equal(trunced['ItemA'], expected)
-
- # XXX test other axes
-
def test_arith(self):
self._test_op(self.panel4d, operator.add)
self._test_op(self.panel4d, operator.sub)
@@ -317,6 +288,7 @@ def test_abs(self):
class CheckIndexing(object):
+ _multiprocess_can_split_ = True
def test_getitem(self):
self.assertRaises(Exception, self.panel4d.__getitem__, 'ItemQ')
@@ -396,22 +368,6 @@ def test_setitem(self):
self.panel4d['lP'] = self.panel4d['l1'] > 0
self.assert_(self.panel4d['lP'].values.dtype == np.bool_)
- def test_setitem_ndarray(self):
- raise nose.SkipTest
- # from pandas import DateRange, datetools
-
- # timeidx = DateRange(start=datetime(2009,1,1),
- # end=datetime(2009,12,31),
- # offset=datetools.MonthEnd())
- # lons_coarse = np.linspace(-177.5, 177.5, 72)
- # lats_coarse = np.linspace(-87.5, 87.5, 36)
- # P = Panel(items=timeidx, major_axis=lons_coarse, minor_axis=lats_coarse)
- # data = np.random.randn(72*36).reshape((72,36))
- # key = datetime(2009,2,28)
- # P[key] = data#
-
- # assert_almost_equal(P[key].values, data)
-
def test_major_xs(self):
ref = self.panel4d['l1']['ItemA']
@@ -514,38 +470,6 @@ def test_getitem_fancy_xs(self):
#self.assertRaises(NotImplementedError, self.panel4d.major_xs)
#self.assertRaises(NotImplementedError, self.panel4d.minor_xs)
- def test_getitem_fancy_xs_check_view(self):
- raise nose.SkipTest
- # item = 'ItemB'
- # date = self.panel.major_axis[5]
- # col = 'C'
-
- # # make sure it's always a view
- # NS = slice(None, None)
-
- # # DataFrames
- # comp = assert_frame_equal
- # self._check_view(item, comp)
- # self._check_view((item, NS), comp)
- # self._check_view((item, NS, NS), comp)
- # self._check_view((NS, date), comp)
- # self._check_view((NS, date, NS), comp)
- # self._check_view((NS, NS, 'C'), comp)
-
- # # Series
- # comp = assert_series_equal
- # self._check_view((item, date), comp)
- # self._check_view((item, date, NS), comp)
- # self._check_view((item, NS, 'C'), comp)
- # self._check_view((NS, date, 'C'), comp)#
-
- #def _check_view(self, indexer, comp):
- # cp = self.panel.copy()
- # obj = cp.ix[indexer]
- # obj.values[:] = 0
- # self.assert_((obj.values == 0).all())
- # comp(cp.ix[indexer].reindex_like(obj), obj)
-
def test_get_value(self):
for label in self.panel4d.labels:
for item in self.panel4d.items:
@@ -574,6 +498,8 @@ def test_set_value(self):
class TestPanel4d(unittest.TestCase, CheckIndexing, SafeForSparse, SafeForLongAndSparse):
+ _multiprocess_can_split_ = True
+
@classmethod
def assert_panel4d_equal(cls,x, y):
assert_panel4d_equal(x, y)
| e.g., formatters={'__index__' : <func>}
| https://api.github.com/repos/pandas-dev/pandas/pulls/2409 | 2012-12-03T13:51:16Z | 2012-12-03T14:28:34Z | 2012-12-03T14:28:34Z | 2012-12-03T18:01:20Z |
BLD: New build matrix for Travis | diff --git a/.travis.yml b/.travis.yml
index ee60f9d7ad713..0d57f69597477 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -3,37 +3,50 @@ language: python
python:
- 2.6
- 2.7
- - 3.1
+ - 3.1 # travis will soon EOL this
- 3.2
- 3.3
+env:
+ global:
+ - # need at least this so travis page will show env column
+
matrix:
include:
- python: 2.7
env: VBENCH=true
+ - python: 2.7
+ env: FULL_DEPS=true
+ - python: 3.2
+ env: FULL_DEPS=true
allow_failures:
- - python: 3.3 # until travis upgrade to 1.8.4
- python: 2.7
env: VBENCH=true
+ - python: 3.2
+ env: FULL_DEPS=true
-install:
- - virtualenv --version
+# allow importing from site-packages,
+# so apt-get python-x works for system pythons
+# That's 2.7/3.2 on Ubuntu 12.04
+virtualenv:
+ system_site_packages: true
+
+before_install:
+ - echo "Waldo1"
+ - echo $VIRTUAL_ENV
- date
- - whoami
- - pwd
- - echo $VBENCH
- # install 1.7.0b2 for 3.3, and pull a version of numpy git master
- # with a alternate fix for detach bug as a temporary workaround
- # for the others.
- - 'if [ $TRAVIS_PYTHON_VERSION == "3.3" ]; then pip uninstall numpy; pip install https://github.com/numpy/numpy/archive/v1.7.0b2.tar.gz; fi'
- - 'if [ $TRAVIS_PYTHON_VERSION == "3.2" ] || [ $TRAVIS_PYTHON_VERSION == "3.1" ]; then pip install https://github.com/y-p/numpy/archive/1.6.2_with_travis_fix.tar.gz; fi'
- - 'if [ ${TRAVIS_PYTHON_VERSION:0:1} == "2" ]; then pip install numpy; fi' # should be nop if pre-installed
- - pip install --use-mirrors cython nose pytz python-dateutil xlrd openpyxl;
+ - export PIP_ARGS=-q # comment this this to debug travis install issues
+ - export APT_ARGS=-qq # comment this to debug travis install issues
+ # - set -x # enable this to see bash commands
+ - source ci/before_install.sh # we need to source this to bring in the env
+ - python -V
+
+install:
+ - echo "Waldo2"
+ - ci/install.sh
+ - ci/print_versions.py
script:
- - python setup.py build_ext install
- - 'if [ x"$VBENCH" != x"true" ]; then nosetests --exe -w /tmp -A "not slow" pandas; fi'
- - pwd
- - 'if [ x"$VBENCH" == x"true" ]; then pip install sqlalchemy git+git://github.com/pydata/vbench.git; python vb_suite/perf_HEAD.py; fi'
- - python -c "import numpy;print(numpy.version.version)"
+ - echo "Waldo3"
+ - ci/script.sh
diff --git a/ci/before_install.sh b/ci/before_install.sh
new file mode 100755
index 0000000000000..7b7919a41ba4e
--- /dev/null
+++ b/ci/before_install.sh
@@ -0,0 +1,28 @@
+#!/bin/bash
+
+echo "inside $0"
+
+# overview
+if [ ${TRAVIS_PYTHON_VERSION} == "3.3" ]; then
+ sudo add-apt-repository -y ppa:doko/ppa # we get the py3.3 debs from here
+fi
+
+sudo apt-get update $APT_ARGS # run apt-get update for all versions
+
+# hack for broken 3.3 env
+if [ x"$VIRTUAL_ENV" == x"" ]; then
+ VIRTUAL_ENV=~/virtualenv/python$TRAVIS_PYTHON_VERSION_with_system_site_packages;
+fi
+
+# we only recreate the virtualenv for 3.x
+# since the "Detach bug" only affects python3
+# and travis has numpy preinstalled on 2.x which is quicker
+_VENV=$VIRTUAL_ENV # save it
+if [ ${TRAVIS_PYTHON_VERSION:0:1} == "3" ] ; then
+ deactivate # pop out of any venv
+ sudo pip install virtualenv==1.8.4 --upgrade
+ sudo apt-get install $APT_ARGS python3.3 python3.3-dev
+ sudo rm -Rf $_VENV
+ virtualenv -p python$TRAVIS_PYTHON_VERSION $_VENV --system-site-packages;
+ source $_VENV/bin/activate
+fi
diff --git a/ci/install.sh b/ci/install.sh
new file mode 100755
index 0000000000000..139382960b81a
--- /dev/null
+++ b/ci/install.sh
@@ -0,0 +1,46 @@
+#!/bin/bash
+
+echo "inside $0"
+# Install Dependencies
+
+# Hard Deps
+pip install $PIP_ARGS --use-mirrors cython nose python-dateutil
+
+# try and get numpy as a binary deb
+
+# numpy is preinstalled on 2.x
+# if [ ${TRAVIS_PYTHON_VERSION} == "2.7" ]; then
+# sudo apt-get $APT_ARGS install python-numpy;
+# fi
+
+if [ ${TRAVIS_PYTHON_VERSION} == "3.2" ]; then
+ sudo apt-get $APT_ARGS install python3-numpy;
+fi
+
+# or else, get it with pip and compile it
+if [ ${TRAVIS_PYTHON_VERSION:0:1} == "2" ] || \
+ [ ${TRAVIS_PYTHON_VERSION} == "3.1" ] || \
+ [ ${TRAVIS_PYTHON_VERSION} == "3.2" ]; then
+ pip $PIP_ARGS install numpy; #https://github.com/y-p/numpy/archive/1.6.2_with_travis_fix.tar.gz;
+else
+ pip $PIP_ARGS install https://github.com/numpy/numpy/archive/v1.7.0b2.tar.gz;
+fi
+
+# Optional Deps
+if [ x"$FULL_DEPS" == x"true" ]; then
+ echo "Installing FULL_DEPS"
+ if [ ${TRAVIS_PYTHON_VERSION} == "2.7" ]; then
+ sudo apt-get $APT_ARGS install python-scipy;
+ fi
+
+ if [ ${TRAVIS_PYTHON_VERSION} == "3.2" ]; then
+ sudo apt-get $APT_ARGS install python3-scipy;
+ fi
+
+ pip install $PIP_ARGS --use-mirrors openpyxl pytz matplotlib;
+ pip install $PIP_ARGS --use-mirrors xlrd xlwt;
+fi
+
+if [ x"$VBENCH" == x"true" ]; then
+ pip $PIP_ARGS install sqlalchemy git+git://github.com/pydata/vbench.git;
+fi
diff --git a/ci/print_versions.py b/ci/print_versions.py
new file mode 100755
index 0000000000000..71539bbccf23b
--- /dev/null
+++ b/ci/print_versions.py
@@ -0,0 +1,75 @@
+#!/usr/bin/env python
+import sys
+
+print("\nINSTALLED VERSIONS")
+print("------------------")
+print("Python: %d.%d.%d.%s.%s" % sys.version_info[:])
+
+try:
+ import Cython
+ print("Cython: %s" % Cython.__version__)
+except:
+ print("Cython: Not installed")
+
+try:
+ import numpy
+ print("Numpy: %s" % numpy.version.version)
+except:
+ print("Numpy: Not installed")
+
+try:
+ import scipy
+ print("Scipy: %s" % scipy.version.version)
+except:
+ print("Scipy: Not installed")
+
+try:
+ import dateutil
+ print("dateutil: %s" % dateutil.__version__)
+except:
+ print("dateutil: Not installed")
+
+try:
+ import pytz
+ print("pytz: %s" % pytz.VERSION)
+except:
+ print("pytz: Not installed")
+
+try:
+ import tables
+ print("PyTables: %s" % tables.__version__)
+except:
+ print("PyTables: Not Installed")
+
+
+try:
+ import matplotlib
+ print("matplotlib: %s" % matplotlib.__version__)
+except:
+ print("matplotlib: Not installed")
+
+try:
+ import openpyxl
+ print("openpyxl: %s" % openpyxl.__version__)
+except:
+ print("openpyxl: Not installed")
+
+try:
+ import xlrd
+ print("xlrd: %s" % xlrd.__VERSION__)
+except:
+ print("xlrd: Not installed")
+
+try:
+ import xlwt
+ print("xlwt: %s" % xlwt.__VERSION__)
+except:
+ print("xlwt: Not installed")
+
+try:
+ import sqlalchemy
+ print("sqlalchemy: %s" % sqlalchemy.__version__)
+except:
+ print("sqlalchemy: Not installed")
+
+print("\n")
diff --git a/ci/script.sh b/ci/script.sh
new file mode 100755
index 0000000000000..cc0a000128ffb
--- /dev/null
+++ b/ci/script.sh
@@ -0,0 +1,13 @@
+#!/bin/bash
+
+echo "inside $0"
+
+python setup.py build_ext install
+if [ x"$VBENCH" != x"true" ]; then
+ nosetests --exe -w /tmp -A "not slow" pandas;
+ exit
+fi
+if [ x"$VBENCH" == x"true" ]; then
+ python vb_suite/perf_HEAD.py;
+ exit
+fi
diff --git a/vb_suite/perf_HEAD.py b/vb_suite/perf_HEAD.py
index a5bfa2a12856b..64406d01978a4 100755
--- a/vb_suite/perf_HEAD.py
+++ b/vb_suite/perf_HEAD.py
@@ -75,11 +75,11 @@ def get_vbench_log(build_url):
return
s=json.loads(r.read())
- s=[x for x in s['matrix'] if x['config'].get('env')]
+ s=[x for x in s['matrix'] if x['config'].get('env',{}).get('VBENCH')]
#s=[x for x in s['matrix']]
if not s:
return
- id=s[0]['id']
+ id=s[0]['id'] # should be just one for now
r2=urllib2.urlopen("https://api.travis-ci.org/jobs/%s" % id)
if (not 200 <= r.getcode() < 300):
return
| - Moved travis build logic to ci directory.
- Added testing for 3.3 (with numpy 1.7b2).
- Prints version strings for all installed components.
- added FULL_DEPS build for 2.7 and 3.2, which includes
scipy and matplotlib tests.
- Made the log less verbose.
- Travis now shows an "env" column.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2408 | 2012-12-03T01:34:37Z | 2012-12-05T19:40:34Z | null | 2012-12-05T19:40:34Z |
Docs for Panel4D/panelnd and Minor Enhancements | diff --git a/RELEASE.rst b/RELEASE.rst
index 2ba2296064612..4f420f45a5c91 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -43,6 +43,11 @@ pandas 0.10.0
`describe_option`, and `reset_option`. Deprecate `set_printoptions` and
`reset_printoptions` (#2393)
+**Experimental Features**
+ - Add support for Panel4D, a named 4 Dimensional stucture
+ - Add support for ndpanel factory functions, to create custom, domain-specific
+ N Dimensional containers
+
**API Changes**
- inf/-inf are no longer considered as NA by isnull/notnull. To be clear, this
@@ -85,6 +90,7 @@ pandas 0.10.0
- Add `line_terminator` option to DataFrame.to_csv (#2383)
- added implementation of str(x)/unicode(x)/bytes(x) to major pandas data
structures, which should do the right thing on both py2.x and py3.x. (#2224)
+ - Added boolean comparison operators to Panel
**Bug fixes**
diff --git a/doc/source/dsintro.rst b/doc/source/dsintro.rst
index 3c3c67092c8f1..da3a7338d4fb2 100644
--- a/doc/source/dsintro.rst
+++ b/doc/source/dsintro.rst
@@ -798,3 +798,122 @@ method:
major_axis=date_range('1/1/2000', periods=5),
minor_axis=['a', 'b', 'c', 'd'])
panel.to_frame()
+
+Panel4D (Experimental)
+----------------------
+
+``Panel4D`` is a named 4-dimensional container, very much like a ``Panel`` but
+with one additional named dimension. It is intended as a test bed for more
+N-dimensional named containers.
+
+ - **labels**: axis 0, each item corresponds to a Panel contained inside
+ - **items**: axis 1, each item corresponds to a DataFrame contained inside
+ - **major_axis**: axis 2, it is the **index** (rows) of each of the
+ DataFrames
+ - **minor_axis**: axis 3, it is the **columns** of each of the DataFrames
+
+
+``Panel4D`` is a sub-class of ``Panel``, so most methods that work on Panels are
+applicable to Panel4D. The following methods are disabled:
+
+ - ``join , to_frame , to_excel , to_sparse , groupby``
+
+Construction of Panel4D works in a very similar manner to a ``Panel``
+
+From 4D ndarray with optional axis labels
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. ipython:: python
+
+ p4d = Panel4D(randn(2, 2, 5, 4),
+ labels=['Label1','Label2'],
+ items=['Item1', 'Item2'],
+ major_axis=date_range('1/1/2000', periods=5),
+ minor_axis=['A', 'B', 'C', 'D'])
+ p4d
+
+
+From dict of Panel objects
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. ipython:: python
+
+ data = { 'Label1' : Panel({ 'Item1' : DataFrame(randn(4, 3)) }),
+ 'Label2' : Panel({ 'Item2' : DataFrame(randn(4, 2)) }) }
+ Panel4D(data)
+
+Note that the values in the dict need only be **convertible to Panels**.
+Thus, they can be any of the other valid inputs to Panel as per above.
+
+Slicing
+~~~~~~~
+
+Slicing works in a similar manner to a Panel. ``[]`` slices the first dimension.
+``.ix`` allows you to slice arbitrarily and get back lower-dimensional objects.
+
+.. ipython:: python
+
+ p4d['Label1']
+
+4D -> Panel
+
+.. ipython:: python
+
+ p4d.ix[:,:,:,'A']
+
+4D -> DataFrame
+
+.. ipython:: python
+
+ p4d.ix[:,:,0,'A']
+
+4D -> Series
+
+.. ipython:: python
+
+ p4d.ix[:,0,0,'A']
+
+Transposing
+~~~~~~~~~~~
+
+A Panel4D can be rearranged using its ``transpose`` method (which does not make a
+copy by default unless the data are heterogeneous):
+
+.. ipython:: python
+
+ p4d.transpose(3, 2, 1, 0)
+
+PanelND (Experimental)
+----------------------
+
+PanelND is a module with a set of factory functions to enable a user to construct N-dimensional named
+containers like Panel4D, with a custom set of axis labels. Thus a domain-specific container can easily be
+created.
+
+The following creates a Panel5D. A new panel type object must be sliceable into a lower dimensional object.
+Here we slice to a Panel4D.
+
+.. ipython:: python
+
+ from pandas.core import panelnd
+ Panel5D = panelnd.create_nd_panel_factory(
+ klass_name = 'Panel5D',
+ axis_orders = [ 'cool', 'labels','items','major_axis','minor_axis'],
+ axis_slices = { 'labels' : 'labels', 'items' : 'items',
+ 'major_axis' : 'major_axis', 'minor_axis' : 'minor_axis' },
+ slicer = Panel4D,
+ axis_aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
+ stat_axis = 2)
+
+ p5d = Panel5D(dict(C1 = p4d))
+ p5d
+
+ # print a slice of our 5D
+ p5d.ix['C1',:,:,0:3,:]
+
+ # transpose it
+ p5d.transpose(1,2,3,4,0)
+
+ # look at the shape & dim
+ p5d.shape
+ p5d.ndim
diff --git a/doc/source/v0.10.0.txt b/doc/source/v0.10.0.txt
index 86741304ef479..b058dd5683db5 100644
--- a/doc/source/v0.10.0.txt
+++ b/doc/source/v0.10.0.txt
@@ -110,6 +110,23 @@ Updated PyTables Support
import os
os.remove('store.h5')
+N Dimensional Panels (Experimental)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Adding experimental support for Panel4D and factory functions to create n-dimensional named panels.
+:ref:`Docs <dsintro-panel4d>` for NDim. Here is a taste of what to expect.
+
+ .. ipython:: python
+
+ p4d = Panel4D(randn(2, 2, 5, 4),
+ labels=['Label1','Label2'],
+ items=['Item1', 'Item2'],
+ major_axis=date_range('1/1/2000', periods=5),
+ minor_axis=['A', 'B', 'C', 'D'])
+ p4d
+
+
+
API changes
~~~~~~~~~~~
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 41aaaa2d15e9b..f3c8289f31dff 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -13,6 +13,7 @@
_get_combined_index)
from pandas.core.indexing import _NDFrameIndexer, _maybe_droplevels
from pandas.core.internals import BlockManager, make_block, form_blocks
+from pandas.core.series import Series
from pandas.core.frame import DataFrame
from pandas.core.generic import NDFrame
from pandas.util import py3compat
@@ -104,7 +105,7 @@ def f(self, other):
def _panel_arith_method(op, name):
@Substitution(op)
- def f(self, other, axis='items'):
+ def f(self, other, axis = 0):
"""
Wrapper method for %s
@@ -123,6 +124,44 @@ def f(self, other, axis='items'):
f.__name__ = name
return f
+def _comp_method(func, name):
+
+ def na_op(x, y):
+ try:
+ result = func(x, y)
+ except TypeError:
+ xrav = x.ravel()
+ result = np.empty(x.size, dtype=x.dtype)
+ if isinstance(y, np.ndarray):
+ yrav = y.ravel()
+ mask = notnull(xrav) & notnull(yrav)
+ result[mask] = func(np.array(list(xrav[mask])),
+ np.array(list(yrav[mask])))
+ else:
+ mask = notnull(xrav)
+ result[mask] = func(np.array(list(xrav[mask])), y)
+
+ if func == operator.ne: # pragma: no cover
+ np.putmask(result, -mask, True)
+ else:
+ np.putmask(result, -mask, False)
+ result = result.reshape(x.shape)
+
+ return result
+
+ @Appender('Wrapper for comparison method %s' % name)
+ def f(self, other):
+ if isinstance(other, self._constructor):
+ return self._compare_constructor(other, func)
+ elif isinstance(other, (self._constructor_sliced, DataFrame, Series)):
+ raise Exception("input needs alignment for this object [%s]" % self._constructor)
+ else:
+ return self._combine_const(other, na_op)
+
+
+ f.__name__ = name
+
+ return f
_agg_doc = """
Return %(desc)s over requested axis
@@ -280,7 +319,6 @@ def _init_dict(self, data, axes, dtype=None):
# shallow copy
arrays = []
- reshaped_data = data.copy()
haxis_shape = [ len(a) for a in raxes ]
for h in haxis:
v = values = data.get(h)
@@ -401,6 +439,51 @@ def __array_wrap__(self, result):
d['copy'] = False
return self._constructor(result, **d)
+ #----------------------------------------------------------------------
+ # Comparison methods
+
+ def _indexed_same(self, other):
+ return all([ getattr(self,a).equals(getattr(other,a)) for a in self._AXIS_ORDERS ])
+
+ def _compare_constructor(self, other, func):
+ if not self._indexed_same(other):
+ raise Exception('Can only compare identically-labeled '
+ 'same type objects')
+
+ new_data = {}
+ for col in getattr(self,self._info_axis):
+ new_data[col] = func(self[col], other[col])
+
+ d = self._construct_axes_dict()
+ d['copy'] = False
+ return self._constructor(data=new_data, **d)
+
+ # boolean operators
+ __and__ = _arith_method(operator.and_, '__and__')
+ __or__ = _arith_method(operator.or_, '__or__')
+ __xor__ = _arith_method(operator.xor, '__xor__')
+
+ def __neg__(self):
+ return -1 * self
+
+ def __invert__(self):
+ return -1 * self
+
+ # Comparison methods
+ __eq__ = _comp_method(operator.eq, '__eq__')
+ __ne__ = _comp_method(operator.ne, '__ne__')
+ __lt__ = _comp_method(operator.lt, '__lt__')
+ __gt__ = _comp_method(operator.gt, '__gt__')
+ __le__ = _comp_method(operator.le, '__le__')
+ __ge__ = _comp_method(operator.ge, '__ge__')
+
+ eq = _comp_method(operator.eq, 'eq')
+ ne = _comp_method(operator.ne, 'ne')
+ gt = _comp_method(operator.gt, 'gt')
+ lt = _comp_method(operator.lt, 'lt')
+ ge = _comp_method(operator.ge, 'ge')
+ le = _comp_method(operator.le, 'le')
+
#----------------------------------------------------------------------
# Magic methods
@@ -435,14 +518,14 @@ def __unicode__(self):
class_name = str(self.__class__)
shape = self.shape
- dims = 'Dimensions: %s' % ' x '.join([ "%d (%s)" % (s, a) for a,s in zip(self._AXIS_ORDERS,shape) ])
+ dims = u'Dimensions: %s' % ' x '.join([ "%d (%s)" % (s, a) for a,s in zip(self._AXIS_ORDERS,shape) ])
def axis_pretty(a):
v = getattr(self,a)
if len(v) > 0:
- return '%s axis: %s to %s' % (a.capitalize(),v[0],v[-1])
+ return u'%s axis: %s to %s' % (a.capitalize(),com.pprint_thing(v[0]),com.pprint_thing(v[-1]))
else:
- return '%s axis: None' % a.capitalize()
+ return u'%s axis: None' % a.capitalize()
output = '\n'.join([class_name, dims] + [axis_pretty(a) for a in self._AXIS_ORDERS])
@@ -496,9 +579,9 @@ def ix(self):
return self._ix
def _wrap_array(self, arr, axes, copy=False):
- items, major, minor = axes
- return self._constructor(arr, items=items, major_axis=major,
- minor_axis=minor, copy=copy)
+ d = dict([ (a,ax) for a,ax in zip(self._AXIS_ORDERS,axes) ])
+ d['copy'] = False
+ return self._constructor(arr, **d)
fromDict = from_dict
@@ -742,7 +825,10 @@ def reindex(self, major=None, minor=None, method=None,
if (method is None and not self._is_mixed_type and al <= 3):
items = kwargs.get('items')
if com._count_not_none(items, major, minor) == 3:
- return self._reindex_multi(items, major, minor)
+ try:
+ return self._reindex_multi(items, major, minor)
+ except:
+ pass
if major is not None:
result = result._reindex_axis(major, method, al-2, copy)
@@ -874,12 +960,12 @@ def _combine(self, other, func, axis=0):
elif isinstance(other, DataFrame):
return self._combine_frame(other, func, axis=axis)
elif np.isscalar(other):
- new_values = func(self.values, other)
- d = self._construct_axes_dict()
- return self._constructor(new_values, **d)
+ return self._combine_const(other, func)
- def __neg__(self):
- return -1 * self
+ def _combine_const(self, other, func):
+ new_values = func(self.values, other)
+ d = self._construct_axes_dict()
+ return self._constructor(new_values, **d)
def _combine_frame(self, other, func, axis=0):
index, columns = self._get_plane_axes(axis)
@@ -1434,8 +1520,8 @@ def update(self, other, join='left', overwrite=True, filter_func=None,
contain data in the same place.
"""
- if not isinstance(other, Panel):
- other = Panel(other)
+ if not isinstance(other, self._constructor):
+ other = self._constructor(other)
other = other.reindex(items=self.items)
diff --git a/pandas/core/panel4d.py b/pandas/core/panel4d.py
index 504111bef5414..fe99d6c0eab78 100644
--- a/pandas/core/panel4d.py
+++ b/pandas/core/panel4d.py
@@ -1,112 +1,39 @@
""" Panel4D: a 4-d dict like collection of panels """
from pandas.core.panel import Panel
+from pandas.core import panelnd
import pandas.lib as lib
+Panel4D = panelnd.create_nd_panel_factory(
+ klass_name = 'Panel4D',
+ axis_orders = [ 'labels','items','major_axis','minor_axis'],
+ axis_slices = { 'labels' : 'labels', 'items' : 'items', 'major_axis' : 'major_axis', 'minor_axis' : 'minor_axis' },
+ slicer = Panel,
+ axis_aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
+ stat_axis = 2)
+
+
+
+def panel4d_init(self, data=None, labels=None, items=None, major_axis=None, minor_axis=None, copy=False, dtype=None):
+ """
+    Represents a 4 dimensional structured
+
+ Parameters
+ ----------
+ data : ndarray (labels x items x major x minor), or dict of Panels
+
+ labels : Index or array-like : axis=0
+ items : Index or array-like : axis=1
+ major_axis : Index or array-like: axis=2
+ minor_axis : Index or array-like: axis=3
+
+ dtype : dtype, default None
+ Data type to force, otherwise infer
+ copy : boolean, default False
+ Copy data from inputs. Only affects DataFrame / 2d ndarray input
+ """
+ self._init_data( data=data, labels=labels, items=items, major_axis=major_axis, minor_axis=minor_axis,
+ copy=copy, dtype=dtype)
+Panel4D.__init__ = panel4d_init
-class Panel4D(Panel):
- _AXIS_ORDERS = ['labels','items','major_axis','minor_axis']
- _AXIS_NUMBERS = dict([ (a,i) for i, a in enumerate(_AXIS_ORDERS) ])
- _AXIS_ALIASES = {
- 'major' : 'major_axis',
- 'minor' : 'minor_axis'
- }
- _AXIS_NAMES = dict([ (i,a) for i, a in enumerate(_AXIS_ORDERS) ])
- _AXIS_SLICEMAP = {
- 'items' : 'items',
- 'major_axis' : 'major_axis',
- 'minor_axis' : 'minor_axis'
- }
- _AXIS_LEN = len(_AXIS_ORDERS)
-
- # major
- _default_stat_axis = 2
-
- # info axis
- _het_axis = 0
- _info_axis = _AXIS_ORDERS[_het_axis]
-
- labels = lib.AxisProperty(0)
- items = lib.AxisProperty(1)
- major_axis = lib.AxisProperty(2)
- minor_axis = lib.AxisProperty(3)
-
- _constructor_sliced = Panel
-
- def __init__(self, data=None, labels=None, items=None, major_axis=None, minor_axis=None, copy=False, dtype=None):
- """
- Represents a 4 dimensonal structured
-
- Parameters
- ----------
- data : ndarray (labels x items x major x minor), or dict of Panels
-
- labels : Index or array-like : axis=0
- items : Index or array-like : axis=1
- major_axis : Index or array-like: axis=2
- minor_axis : Index or array-like: axis=3
-
- dtype : dtype, default None
- Data type to force, otherwise infer
- copy : boolean, default False
- Copy data from inputs. Only affects DataFrame / 2d ndarray input
- """
- self._init_data( data=data, labels=labels, items=items, major_axis=major_axis, minor_axis=minor_axis,
- copy=copy, dtype=dtype)
-
- def _get_plane_axes(self, axis):
- axis = self._get_axis_name(axis)
-
- if axis == 'major_axis':
- items = self.labels
- major = self.items
- minor = self.minor_axis
- elif axis == 'minor_axis':
- items = self.labels
- major = self.items
- minor = self.major_axis
- elif axis == 'items':
- items = self.labels
- major = self.major_axis
- minor = self.minor_axis
- elif axis == 'labels':
- items = self.items
- major = self.major_axis
- minor = self.minor_axis
-
- return items, major, minor
-
- def _combine(self, other, func, axis=0):
- if isinstance(other, Panel4D):
- return self._combine_panel4d(other, func)
- return super(Panel4D, self)._combine(other, func, axis=axis)
-
- def _combine_panel4d(self, other, func):
- labels = self.labels + other.labels
- items = self.items + other.items
- major = self.major_axis + other.major_axis
- minor = self.minor_axis + other.minor_axis
-
- # could check that everything's the same size, but forget it
- this = self.reindex(labels=labels, items=items, major=major, minor=minor)
- other = other.reindex(labels=labels, items=items, major=major, minor=minor)
-
- result_values = func(this.values, other.values)
-
- return self._constructor(result_values, labels, items, major, minor)
-
- def join(self, other, how='left', lsuffix='', rsuffix=''):
- if isinstance(other, Panel4D):
- join_major, join_minor = self._get_join_index(other, how)
- this = self.reindex(major=join_major, minor=join_minor)
- other = other.reindex(major=join_major, minor=join_minor)
- merged_data = this._data.merge(other._data, lsuffix, rsuffix)
- return self._constructor(merged_data)
- return super(Panel4D, self).join(other=other,how=how,lsuffix=lsuffix,rsuffix=rsuffix)
-
- ### remove operations ####
- def to_frame(self, *args, **kwargs):
- raise NotImplementedError
- def to_excel(self, *args, **kwargs):
- raise NotImplementedError
diff --git a/pandas/core/panelnd.py b/pandas/core/panelnd.py
index e4638750aa1b2..22f6dac6b640c 100644
--- a/pandas/core/panelnd.py
+++ b/pandas/core/panelnd.py
@@ -46,7 +46,7 @@ def create_nd_panel_factory(klass_name, axis_orders, axis_slices, slicer, axis_a
for i, a in enumerate(axis_orders):
setattr(klass,a,lib.AxisProperty(i))
- # define the __init__
+ #### define the methods ####
def __init__(self, *args, **kwargs):
if not (kwargs.get('data') or len(args)):
raise Exception("must supply at least a data argument to [%s]" % klass_name)
@@ -57,8 +57,8 @@ def __init__(self, *args, **kwargs):
self._init_data( *args, **kwargs)
klass.__init__ = __init__
- # define _get_place_axes
def _get_plane_axes(self, axis):
+
axis = self._get_axis_name(axis)
index = self._AXIS_ORDERS.index(axis)
@@ -66,18 +66,40 @@ def _get_plane_axes(self, axis):
if index:
planes.extend(self._AXIS_ORDERS[0:index])
if index != self._AXIS_LEN:
- planes.extend(self._AXIS_ORDERS[index:])
-
- return planes
- klass._get_plane_axes
-
- # remove these operations
- def to_frame(self, *args, **kwargs):
- raise NotImplementedError
- klass.to_frame = to_frame
- def to_excel(self, *args, **kwargs):
- raise NotImplementedError
- klass.to_excel = to_excel
+ planes.extend(self._AXIS_ORDERS[index+1:])
+
+ return [ getattr(self,p) for p in planes ]
+ klass._get_plane_axes = _get_plane_axes
+
+ def _combine(self, other, func, axis=0):
+ if isinstance(other, klass):
+ return self._combine_with_constructor(other, func)
+ return super(klass, self)._combine(other, func, axis=axis)
+ klass._combine = _combine
+
+ def _combine_with_constructor(self, other, func):
+
+ # combine labels to form new axes
+ new_axes = []
+ for a in self._AXIS_ORDERS:
+ new_axes.append(getattr(self,a) + getattr(other,a))
+
+ # reindex: could check that everything's the same size, but forget it
+ d = dict([ (a,ax) for a,ax in zip(self._AXIS_ORDERS,new_axes) ])
+ d['copy'] = False
+ this = self.reindex(**d)
+ other = other.reindex(**d)
+
+ result_values = func(this.values, other.values)
+
+ return self._constructor(result_values, **d)
+ klass._combine_with_constructor = _combine_with_constructor
+
+ # set as NonImplemented operations which we don't support
+ for f in ['to_frame','to_excel','to_sparse','groupby','join','_get_join_index']:
+ def func(self, *args, **kwargs):
+ raise NotImplementedError
+ setattr(klass,f,func)
return klass
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index 1d8b40d34dba2..cb4cc9eef5c88 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -642,6 +642,56 @@ def _check_view(self, indexer, comp):
self.assert_((obj.values == 0).all())
comp(cp.ix[indexer].reindex_like(obj), obj)
+ def test_logical_with_nas(self):
+ d = Panel({ 'ItemA' : {'a': [np.nan, False] }, 'ItemB' : { 'a': [True, True] } })
+
+ result = d['ItemA'] | d['ItemB']
+ expected = DataFrame({ 'a' : [np.nan, True] })
+ assert_frame_equal(result, expected)
+
+ result = d['ItemA'].fillna(False) | d['ItemB']
+ expected = DataFrame({ 'a' : [True, True] }, dtype=object)
+ assert_frame_equal(result, expected)
+
+ def test_neg(self):
+ # what to do?
+ assert_panel_equal(-self.panel, -1 * self.panel)
+
+ def test_invert(self):
+ assert_panel_equal(-(self.panel < 0), ~(self.panel <0))
+
+ def test_comparisons(self):
+ p1 = tm.makePanel()
+ p2 = tm.makePanel()
+
+ tp = p1.reindex(items = p1.items + ['foo'])
+ df = p1[p1.items[0]]
+
+ def test_comp(func):
+
+ # versus same index
+ result = func(p1, p2)
+ self.assert_(np.array_equal(result.values,
+ func(p1.values, p2.values)))
+
+ # versus non-indexed same objs
+ self.assertRaises(Exception, func, p1, tp)
+
+ # versus different objs
+ self.assertRaises(Exception, func, p1, df)
+
+ # versus scalar
+ result3 = func(self.panel, 0)
+ self.assert_(np.array_equal(result3.values,
+ func(self.panel.values, 0)))
+
+ test_comp(operator.eq)
+ test_comp(operator.ne)
+ test_comp(operator.lt)
+ test_comp(operator.gt)
+ test_comp(operator.ge)
+ test_comp(operator.le)
+
def test_get_value(self):
for item in self.panel.items:
for mjr in self.panel.major_axis[::2]:
diff --git a/pandas/tests/test_panel4d.py b/pandas/tests/test_panel4d.py
index 8340d55604e24..bdfc4933f31ab 100644
--- a/pandas/tests/test_panel4d.py
+++ b/pandas/tests/test_panel4d.py
@@ -368,6 +368,51 @@ def test_setitem(self):
self.panel4d['lP'] = self.panel4d['l1'] > 0
self.assert_(self.panel4d['lP'].values.dtype == np.bool_)
+ def test_comparisons(self):
+ p1 = tm.makePanel4D()
+ p2 = tm.makePanel4D()
+
+ tp = p1.reindex(labels = p1.labels + ['foo'])
+ p = p1[p1.labels[0]]
+
+ def test_comp(func):
+ result = func(p1, p2)
+ self.assert_(np.array_equal(result.values,
+ func(p1.values, p2.values)))
+
+ # versus non-indexed same objs
+ self.assertRaises(Exception, func, p1, tp)
+
+ # versus different objs
+ self.assertRaises(Exception, func, p1, p)
+
+ result3 = func(self.panel4d, 0)
+ self.assert_(np.array_equal(result3.values,
+ func(self.panel4d.values, 0)))
+
+ test_comp(operator.eq)
+ test_comp(operator.ne)
+ test_comp(operator.lt)
+ test_comp(operator.gt)
+ test_comp(operator.ge)
+ test_comp(operator.le)
+
+ def test_setitem_ndarray(self):
+ raise nose.SkipTest
+ # from pandas import DateRange, datetools
+
+ # timeidx = DateRange(start=datetime(2009,1,1),
+ # end=datetime(2009,12,31),
+ # offset=datetools.MonthEnd())
+ # lons_coarse = np.linspace(-177.5, 177.5, 72)
+ # lats_coarse = np.linspace(-87.5, 87.5, 36)
+ # P = Panel(items=timeidx, major_axis=lons_coarse, minor_axis=lats_coarse)
+ # data = np.random.randn(72*36).reshape((72,36))
+ # key = datetime(2009,2,28)
+ # P[key] = data#
+
+ # assert_almost_equal(P[key].values, data)
+
def test_major_xs(self):
ref = self.panel4d['l1']['ItemA']
| - Updated whatsnew/RELEASE.
- Added boolean comparison methods to panel.py (they didn't exist before).
- All tests pass.
- Still need to update the link to the dsintro-panel4d section in the whatsnew docs.
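The NA-aware comparison wrapper this PR adds (`_comp_method`'s `na_op`) masks missing entries rather than letting them propagate through the comparison. A standalone sketch of the same pattern in plain NumPy — the function names here are illustrative, not the pandas internals:

```python
import operator

import numpy as np


def notnull(arr):
    # NaN is the only value that does not equal itself.
    return ~(arr != arr)


def na_comp(func, x, y):
    """Elementwise comparison that forces missing entries to False
    (or True for ``!=``), mirroring the wrapper added to Panel."""
    xrav = x.ravel()
    result = np.empty(x.size, dtype=bool)
    if isinstance(y, np.ndarray):
        yrav = y.ravel()
        mask = notnull(xrav) & notnull(yrav)
        result[mask] = func(xrav[mask], yrav[mask])
    else:
        mask = notnull(xrav)
        result[mask] = func(xrav[mask], y)
    # a comparison against a missing value is False, except != which is True
    result[~mask] = func is operator.ne
    return result.reshape(x.shape)


x = np.array([1.0, np.nan, 3.0])
print(na_comp(operator.gt, x, 2.0).tolist())  # [False, False, True]
```

The same masking step is why `Panel.__ne__` reports True wherever either operand is missing, keeping it the logical negation of `__eq__`.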
| https://api.github.com/repos/pandas-dev/pandas/pulls/2407 | 2012-12-03T01:00:29Z | 2012-12-07T16:06:56Z | 2012-12-07T16:06:56Z | 2014-06-12T10:20:37Z |
Pytables support for hierarchical keys | diff --git a/.gitignore b/.gitignore
index 320f03a0171a2..eb26b3cedc724 100644
--- a/.gitignore
+++ b/.gitignore
@@ -11,6 +11,7 @@ MANIFEST
*.cpp
*.so
*.pyd
+*.h5
pandas/version.py
doc/source/generated
doc/source/_static
diff --git a/RELEASE.rst b/RELEASE.rst
index 49f45fce13381..7746d8bd587ea 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -57,6 +57,7 @@ pandas 0.10.0
- `obj.fillna()` is no longer valid; make `method='pad'` no longer the
default option, to be more explicit about what kind of filling to
perform. Add `ffill/bfill` convenience functions per above (#2284)
+ - `HDFStore.keys()` now returns an absolute path-name for each key
**Improvements to existing features**
@@ -68,6 +69,7 @@ pandas 0.10.0
- Add ``normalize`` option to Series/DataFrame.asfreq (#2137)
- SparseSeries and SparseDataFrame construction from empty and scalar
values now no longer create dense ndarrays unnecessarily (#2322)
+  - ``HDFStore`` now supports hierarchical keys (#2397)
- Support multiple query selection formats for ``HDFStore tables`` (#1996)
- Support ``del store['df']`` syntax to delete HDFStores
- Add multi-dtype support for ``HDFStore tables``
diff --git a/doc/source/io.rst b/doc/source/io.rst
index 272e35fc7400d..a81899078f3ae 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -865,6 +865,25 @@ after data is already in the table (this may become automatic in the future or a
store.create_table_index('df')
store.handle.root.df.table
+Hierarchical Keys
+~~~~~~~~~~~~~~~~~
+
+Keys to a store can be specified as a string. These can be in a hierarchical path-name-like format (e.g. ``foo/bar/bah``), which will generate a hierarchy of sub-stores (or ``Groups`` in PyTables parlance). Keys can be specified without the leading '/' and are ALWAYS absolute (e.g. 'foo' refers to '/foo'). Removal operations can remove everything in the sub-store and BELOW, so be *careful*.
+
+.. ipython:: python
+
+ store.put('foo/bar/bah', df)
+ store.append('food/orange', df)
+ store.append('food/apple', df)
+ store
+
+ # a list of keys are returned
+ store.keys()
+
+ # remove all nodes under this level
+ store.remove('food')
+ store
+
Storing Mixed Types in a Table
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/v0.10.0.txt b/doc/source/v0.10.0.txt
index 98eb4746c7d79..cb6711f4679a9 100644
--- a/doc/source/v0.10.0.txt
+++ b/doc/source/v0.10.0.txt
@@ -63,6 +63,19 @@ Updated PyTables Support
**Enhancements**
+  - added the ability to use hierarchical keys
+
+ .. ipython:: python
+
+ store.put('foo/bar/bah', df)
+ store.append('food/orange', df)
+ store.append('food/apple', df)
+ store
+
+ # remove all nodes under this level
+ store.remove('food')
+ store
+
- added mixed-dtype support!
.. ipython:: python
@@ -77,7 +90,7 @@ Updated PyTables Support
- performance improvments on table writing
- support for arbitrarily indexed dimensions
- - ``SparseSeries`` now has a ``density`` property (#2384)
+ - ``SparseSeries`` now has a ``density`` property (#2384)
**Bug Fixes**
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 5a5d9d2942ace..8af7151cb898c 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -90,7 +90,6 @@ def _tables():
return _table_mod
-
@contextmanager
def get_store(path, mode='a', complevel=None, complib=None,
fletcher32=False):
@@ -197,6 +196,11 @@ def __init__(self, path, mode='a', complevel=None, complib=None,
self.filters = None
self.open(mode=mode, warn=False)
+ @property
+ def root(self):
+ """ return the root node """
+ return self.handle.root
+
def __getitem__(self, key):
return self.get(key)
@@ -207,26 +211,39 @@ def __delitem__(self, key):
return self.remove(key)
def __contains__(self, key):
- return hasattr(self.handle.root, key)
+        """ check for existence of this key
+        can match the exact pathname or the pathname w/o the leading '/'
+ """
+ node = self.get_node(key)
+ if node is not None:
+ name = node._v_pathname
+ return re.search(key,name) is not None
+ return False
def __len__(self):
- return len(self.handle.root._v_children)
+ return len(self.groups())
def __repr__(self):
output = '%s\nFile path: %s\n' % (type(self), self.path)
- if len(self) > 0:
- keys = []
+ groups = self.groups()
+ if len(groups) > 0:
+ keys = []
values = []
- for k, v in sorted(self.handle.root._v_children.iteritems()):
- kind = getattr(v._v_attrs,'pandas_type',None)
+ for n in sorted(groups, key = lambda x: x._v_name):
+ kind = getattr(n._v_attrs,'pandas_type',None)
- keys.append(str(k))
+ keys.append(str(n._v_pathname))
- if kind is None:
+ # a table
+ if _is_table_type(n):
+ values.append(str(create_table(self, n)))
+
+ # a group
+ elif kind is None:
values.append('unknown type')
- elif _is_table_type(v):
- values.append(str(create_table(self, v)))
+
+ # another type of pandas object
else:
values.append(_NAME_MAP[kind])
@@ -239,9 +256,9 @@ def __repr__(self):
def keys(self):
"""
Return a (potentially unordered) list of the keys corresponding to the
- objects stored in the HDFStore
+        objects stored in the HDFStore. These are ABSOLUTE path-names (e.g. have the leading '/')
"""
- return self.handle.root._v_children.keys()
+ return [ n._v_pathname for n in self.groups() ]
def open(self, mode='a', warn=True):
"""
@@ -304,12 +321,10 @@ def get(self, key):
-------
obj : type of object stored in file
"""
- exc_type = _tables().NoSuchNodeError
- try:
- group = getattr(self.handle.root, key)
- return self._read_group(group)
- except (exc_type, AttributeError):
+ group = self.get_node(key)
+ if group is None:
raise KeyError('No object named %s in the file' % key)
+ return self._read_group(group)
def select(self, key, where=None):
"""
@@ -322,11 +337,12 @@ def select(self, key, where=None):
where : list of Term (or convertable) objects, optional
"""
- group = getattr(self.handle.root, key, None)
+ group = self.get_node(key)
+ if group is None:
+ raise KeyError('No object named %s in the file' % key)
if where is not None and not _is_table_type(group):
raise Exception('can only select with where on objects written as tables')
- if group is not None:
- return self._read_group(group, where)
+ return self._read_group(group, where)
def put(self, key, value, table=False, append=False,
compression=None, **kwargs):
@@ -352,9 +368,6 @@ def put(self, key, value, table=False, append=False,
self._write_to_group(key, value, table=table, append=append,
comp=compression, **kwargs)
- def _get_handler(self, op, kind):
- return getattr(self, '_%s_%s' % (op, kind))
-
def remove(self, key, where=None):
"""
Remove pandas object partially by specifying the where condition
@@ -372,15 +385,21 @@ def remove(self, key, where=None):
number of rows removed (or None if not a Table)
"""
- if where is None:
- self.handle.removeNode(self.handle.root, key, recursive=True)
- else:
- group = getattr(self.handle.root, key, None)
- if group is not None:
+ group = self.get_node(key)
+ if group is not None:
+
+ # remove the node
+ if where is None or not len(where):
+ group = self.get_node(key)
+ group._f_remove(recursive=True)
+
+ # delete from the table
+ else:
if not _is_table_type(group):
raise Exception('can only remove with where on objects written as tables')
t = create_table(self, group)
return t.delete(where)
+
return None
def append(self, key, value, **kwargs):
@@ -416,20 +435,50 @@ def create_table_index(self, key, **kwargs):
if not _table_supports_index:
raise("PyTables >= 2.3 is required for table indexing")
- group = getattr(self.handle.root, key, None)
+ group = self.get_node(key)
if group is None: return
if not _is_table_type(group):
raise Exception("cannot create table index on a non-table")
create_table(self, group).create_index(**kwargs)
+ def groups(self):
+        """ return a list of all the groups that are pandas storage objects """
+ return [ g for g in self.handle.walkGroups() if getattr(g._v_attrs,'pandas_type',None) ]
+
+ def get_node(self, key):
+ """ return the node with the key or None if it does not exist """
+ try:
+ if not key.startswith('/'):
+ key = '/' + key
+ return self.handle.getNode(self.root,key)
+ except:
+ return None
+
+ ###### private methods ######
+
+ def _get_handler(self, op, kind):
+ return getattr(self, '_%s_%s' % (op, kind))
+
def _write_to_group(self, key, value, table=False, append=False,
comp=None, **kwargs):
- root = self.handle.root
- if key not in root._v_children:
- group = self.handle.createGroup(root, key)
- else:
- group = getattr(root, key)
+ group = self.get_node(key)
+ if group is None:
+ paths = key.split('/')
+
+ # recursively create the groups
+ path = '/'
+ for p in paths:
+ if not len(p):
+ continue
+ new_path = path
+ if not path.endswith('/'):
+ new_path += '/'
+ new_path += p
+ group = self.get_node(new_path)
+ if group is None:
+ group = self.handle.createGroup(path, p)
+ path = new_path
kind = _TYPE_MAP[type(value)]
if table or (append and _is_table_type(group)):
@@ -1306,6 +1355,9 @@ class LegacyTable(Table):
_indexables = [Col(name = 'index'),Col(name = 'column', index_kind = 'columns_kind'), DataCol(name = 'fields', cname = 'values', kind_attr = 'fields') ]
table_type = 'legacy'
+ def write(self, **kwargs):
+ raise Exception("write operations are not allowed on legacy tables!")
+
def read(self, where=None):
""" we have 2 indexable columns, with an arbitrary number of data axes """
@@ -1380,6 +1432,21 @@ def read(self, where=None):
return wp
+class LegacyFrameTable(LegacyTable):
+ """ support the legacy frame table """
+ table_type = 'legacy_frame'
+ def read(self, *args, **kwargs):
+ return super(LegacyFrameTable, self).read(*args, **kwargs)['value']
+
+class LegacyPanelTable(LegacyTable):
+ """ support the legacy panel table """
+ table_type = 'legacy_panel'
+
+class AppendableTable(LegacyTable):
+ """ suppor the new appendable table formats """
+ _indexables = None
+ table_type = 'appendable'
+
def write(self, axes_to_index, obj, append=False, compression=None,
complevel=None, min_itemsize = None, **kwargs):
@@ -1488,22 +1555,6 @@ def delete(self, where = None):
# return the number of rows removed
return ln
-
-class LegacyFrameTable(LegacyTable):
- """ support the legacy frame table """
- table_type = 'legacy_frame'
- def read(self, *args, **kwargs):
- return super(LegacyFrameTable, self).read(*args, **kwargs)['value']
-
-class LegacyPanelTable(LegacyTable):
- """ support the legacy panel table """
- table_type = 'legacy_panel'
-
-class AppendableTable(LegacyTable):
- """ suppor the new appendable table formats """
- _indexables = None
- table_type = 'appendable'
-
class AppendableFrameTable(AppendableTable):
""" suppor the new appendable table formats """
table_type = 'appendable_frame'
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index ca2ea2e7089a0..b4ad98b8cb437 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -51,13 +51,14 @@ def test_factory_fun(self):
os.remove(self.scratchpath)
- def test_len_keys(self):
+ def test_keys(self):
self.store['a'] = tm.makeTimeSeries()
self.store['b'] = tm.makeStringSeries()
self.store['c'] = tm.makeDataFrame()
self.store['d'] = tm.makePanel()
- self.assertEquals(len(self.store), 4)
- self.assert_(set(self.store.keys()) == set(['a', 'b', 'c', 'd']))
+ self.store['foo/bar'] = tm.makePanel()
+ self.assertEquals(len(self.store), 5)
+ self.assert_(set(self.store.keys()) == set(['/a', '/b', '/c', '/d', '/foo/bar']))
def test_repr(self):
repr(self.store)
@@ -65,6 +66,7 @@ def test_repr(self):
self.store['b'] = tm.makeStringSeries()
self.store['c'] = tm.makeDataFrame()
self.store['d'] = tm.makePanel()
+ self.store['foo/bar'] = tm.makePanel()
self.store.append('e', tm.makePanel())
repr(self.store)
str(self.store)
@@ -72,9 +74,14 @@ def test_repr(self):
def test_contains(self):
self.store['a'] = tm.makeTimeSeries()
self.store['b'] = tm.makeDataFrame()
+ self.store['foo/bar'] = tm.makeDataFrame()
self.assert_('a' in self.store)
self.assert_('b' in self.store)
self.assert_('c' not in self.store)
+ self.assert_('foo/bar' in self.store)
+ self.assert_('/foo/bar' in self.store)
+ self.assert_('/foo/b' not in self.store)
+ self.assert_('bar' not in self.store)
def test_reopen_handle(self):
self.store['a'] = tm.makeTimeSeries()
@@ -92,6 +99,10 @@ def test_get(self):
right = self.store['a']
tm.assert_series_equal(left, right)
+ left = self.store.get('/a')
+ right = self.store['/a']
+ tm.assert_series_equal(left, right)
+
self.assertRaises(KeyError, self.store.get, 'b')
def test_put(self):
@@ -99,6 +110,9 @@ def test_put(self):
df = tm.makeTimeDataFrame()
self.store['a'] = ts
self.store['b'] = df[:10]
+ self.store['foo/bar/bah'] = df[:10]
+ self.store['foo'] = df[:10]
+ self.store['/foo'] = df[:10]
self.store.put('c', df[:10], table=True)
# not OK, not a table
@@ -156,6 +170,19 @@ def test_append(self):
store.append('df2', df[10:])
tm.assert_frame_equal(store['df2'], df)
+ store.append('/df3', df[:10])
+ store.append('/df3', df[10:])
+ tm.assert_frame_equal(store['df3'], df)
+
+ # this is allowed but you almost always don't want to do it
+ import warnings
+ import tables
+ warnings.filterwarnings('ignore', category=tables.NaturalNameWarning)
+ store.append('/df3 foo', df[:10])
+ store.append('/df3 foo', df[10:])
+ tm.assert_frame_equal(store['df3 foo'], df)
+ warnings.filterwarnings('always', category=tables.NaturalNameWarning)
+
wp = tm.makePanel()
store.append('wp1', wp.ix[:,:10,:])
store.append('wp1', wp.ix[:,10:,:])
@@ -293,6 +320,18 @@ def test_remove(self):
self.store.remove('b')
self.assertEquals(len(self.store), 0)
+ # pathing
+ self.store['a'] = ts
+ self.store['b/foo'] = df
+ self.store.remove('foo')
+ self.store.remove('b/foo')
+ self.assertEquals(len(self.store), 1)
+
+ self.store['a'] = ts
+ self.store['b/foo'] = df
+ self.store.remove('b')
+ self.assertEquals(len(self.store), 1)
+
# __delitem__
self.store['a'] = ts
self.store['b'] = df
@@ -315,6 +354,17 @@ def test_remove_where(self):
expected = wp.reindex(minor_axis = ['B','C'])
tm.assert_panel_equal(rs,expected)
+ # empty where
+ self.store.remove('wp')
+ self.store.put('wp', wp, table=True)
+ self.store.remove('wp', [])
+
+ # non - empty where
+ self.store.remove('wp')
+ self.store.put('wp', wp, table=True)
+ self.assertRaises(Exception, self.store.remove,
+ 'wp', ['foo'])
+
# selecting non-table with a where
self.store.put('wp2', wp, table=False)
self.assertRaises(Exception, self.store.remove,
@@ -846,6 +896,19 @@ def test_legacy_table_read(self):
store.select('wp1')
store.close()
+ def test_legacy_table_write(self):
+ # legacy table types
+ pth = curpath()
+ df = tm.makeDataFrame()
+ wp = tm.makePanel()
+
+ store = HDFStore(os.path.join(pth, 'legacy_table.h5'), 'a')
+
+ self.assertRaises(Exception, store.append, 'df1', df)
+ self.assertRaises(Exception, store.append, 'wp1', wp)
+
+ store.close()
+
def test_store_datetime_fractional_secs(self):
dt = datetime(2012, 1, 2, 3, 4, 5, 123456)
series = Series([0], [dt])
| GH #2391
support for keys of the form: foo/bar/bah
includes docs & tests
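The tests above exercise the key behavior: `'foo/bar'` and `'/foo/bar'` should resolve to the same store node, while a bare suffix like `'bar'` should not. A minimal stand-alone sketch of that normalization (a hypothetical helper, not the actual HDFStore implementation):

```python
def norm_key(key):
    """Hypothetical sketch: normalize a store key so that
    'foo/bar' and '/foo/bar' refer to the same node path."""
    return key if key.startswith('/') else '/' + key

# 'foo/bar' and '/foo/bar' normalize to the same path
print(norm_key('foo/bar'))   # -> /foo/bar
print(norm_key('/foo/bar'))  # -> /foo/bar
# a bare suffix does not match a nested path
print(norm_key('bar'))       # -> /bar
```

This mirrors why `'foo/bar' in store` and `'/foo/bar' in store` both succeed in the tests, while `'bar' in store` fails.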
| https://api.github.com/repos/pandas-dev/pandas/pulls/2401 | 2012-12-01T04:34:23Z | 2012-12-01T20:45:18Z | null | 2014-06-27T02:54:09Z |
Multiprocessing support for tests | diff --git a/.travis.yml b/.travis.yml
index ee60f9d7ad713..4b89726b607c9 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -33,7 +33,7 @@ install:
script:
- python setup.py build_ext install
- - 'if [ x"$VBENCH" != x"true" ]; then nosetests --exe -w /tmp -A "not slow" pandas; fi'
+ - 'if [ x"$VBENCH" != x"true" ]; then nosetests --exe -w /tmp -A "not slow" pandas --processes=4; fi'
- pwd
- 'if [ x"$VBENCH" == x"true" ]; then pip install sqlalchemy git+git://github.com/pydata/vbench.git; python vb_suite/perf_HEAD.py; fi'
- python -c "import numpy;print(numpy.version.version)"
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index ca2ea2e7089a0..600e3b04b9068 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -450,7 +450,7 @@ def test_sparse_frame(self):
def test_sparse_panel(self):
items = ['x', 'y', 'z']
- p = Panel(dict((i, tm.makeDataFrame()) for i in items))
+ p = Panel(dict((i, tm.makeDataFrame().ix[:2, :2]) for i in items))
sp = p.to_sparse()
self._check_double_roundtrip(sp, tm.assert_panel_equal,
diff --git a/pandas/io/tests/test_sql.py b/pandas/io/tests/test_sql.py
index df0188a18d6f3..a828c946686c5 100644
--- a/pandas/io/tests/test_sql.py
+++ b/pandas/io/tests/test_sql.py
@@ -10,14 +10,14 @@
from pandas import Series, Index, DataFrame
class TestSQLite(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def setUp(self):
self.db = sqlite3.connect(':memory:')
def test_basic(self):
frame = tm.makeTimeDataFrame()
self._check_roundtrip(frame)
-
+
def test_write_row_by_row(self):
frame = tm.makeTimeDataFrame()
frame.ix[0, 0] = np.nan
@@ -177,7 +177,7 @@ def test_keyword_as_column_names(self):
df = DataFrame({'From':np.ones(5)})
#print sql.get_sqlite_schema(df, 'testkeywords')
sql.write_frame(df, con = self.db, name = 'testkeywords')
-
+
if __name__ == '__main__':
# unittest.main()
import nose
diff --git a/pandas/io/tests/test_yahoo.py b/pandas/io/tests/test_yahoo.py
index fab15ef6e2c75..9b15e19dd5410 100644
--- a/pandas/io/tests/test_yahoo.py
+++ b/pandas/io/tests/test_yahoo.py
@@ -8,10 +8,12 @@
import pandas.io.data as pd
import nose
from pandas.util.testing import network
+from numpy.testing.decorators import slow
import urllib2
class TestYahoo(unittest.TestCase):
+ @slow
@network
def test_yahoo(self):
# asserts that yahoo is minimally working and that it throws
diff --git a/pandas/sparse/tests/test_array.py b/pandas/sparse/tests/test_array.py
index 459732147f0d8..48bd492574f75 100644
--- a/pandas/sparse/tests/test_array.py
+++ b/pandas/sparse/tests/test_array.py
@@ -20,7 +20,7 @@ def assert_sp_array_equal(left, right):
class TestSparseArray(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def setUp(self):
self.arr_data = np.array([nan, nan, 1, 2, 3, nan, 4, 5, nan, 6])
self.arr = SparseArray(self.arr_data)
diff --git a/pandas/sparse/tests/test_list.py b/pandas/sparse/tests/test_list.py
index 2b061badae94a..18a0a55c2db2a 100644
--- a/pandas/sparse/tests/test_list.py
+++ b/pandas/sparse/tests/test_list.py
@@ -14,6 +14,8 @@ def assert_sp_list_equal(left, right):
class TestSparseList(unittest.TestCase):
+ _multiprocess_can_split_ = True
+
def setUp(self):
self.na_data = np.array([nan, nan, 1, 2, 3, nan, 4, 5, nan, 6])
self.zero_data = np.array([0, 0, 1, 2, 3, 0, 4, 5, 0, 6])
diff --git a/pandas/sparse/tests/test_sparse.py b/pandas/sparse/tests/test_sparse.py
index b96719227d4cc..eeadcbea08466 100644
--- a/pandas/sparse/tests/test_sparse.py
+++ b/pandas/sparse/tests/test_sparse.py
@@ -110,7 +110,7 @@ def assert_sp_panel_equal(left, right, exact_indices=True):
class TestSparseSeries(TestCase,
test_series.CheckNameIntegration):
-
+ _multiprocess_can_split_ = True
def setUp(self):
arr, index = _test_data1()
@@ -686,7 +686,7 @@ class TestSparseTimeSeries(TestCase):
class TestSparseDataFrame(TestCase, test_frame.SafeForSparse):
klass = SparseDataFrame
-
+ _multiprocess_can_split_ = True
def setUp(self):
self.data = {'A' : [nan, nan, nan, 0, 1, 2, 3, 4, 5, 6],
'B' : [0, 1, 2, nan, nan, nan, 3, 4, 5, 6],
@@ -851,7 +851,37 @@ def test_sparse_series_ops(self):
tmp = sys.stderr
sys.stderr = buf
try:
- self._check_all(self._check_frame_ops)
+ self._check_frame_ops(self.frame)
+ finally:
+ sys.stderr = tmp
+
+ def test_sparse_series_ops_i(self):
+ import sys
+ buf = StringIO()
+ tmp = sys.stderr
+ sys.stderr = buf
+ try:
+ self._check_frame_ops(self.iframe)
+ finally:
+ sys.stderr = tmp
+
+ def test_sparse_series_ops_z(self):
+ import sys
+ buf = StringIO()
+ tmp = sys.stderr
+ sys.stderr = buf
+ try:
+ self._check_frame_ops(self.zframe)
+ finally:
+ sys.stderr = tmp
+
+ def test_sparse_series_ops_fill(self):
+ import sys
+ buf = StringIO()
+ tmp = sys.stderr
+ sys.stderr = buf
+ try:
+ self._check_frame_ops(self.fill_frame)
finally:
sys.stderr = tmp
@@ -1406,7 +1436,7 @@ def panel_data3():
class TestSparsePanel(TestCase,
test_panel.SafeForLongAndSparse,
test_panel.SafeForSparse):
-
+ _multiprocess_can_split_ = True
@classmethod
def assert_panel_equal(cls, x, y):
assert_sp_panel_equal(x, y)
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index eb397a069a4e2..3bf64e5d3d8ce 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -10,7 +10,7 @@
class TestMatch(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def test_ints(self):
values = np.array([0, 2, 1])
to_match = np.array([0, 1, 2, 2, 0, 1, 3, 0])
@@ -29,7 +29,7 @@ def test_strings(self):
class TestUnique(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def test_ints(self):
arr = np.random.randint(0, 100, size=50)
@@ -70,4 +70,3 @@ def test_quantile():
import nose
nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
exit=False)
-
diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py
index dd93666cba0af..46fb6950b1fdb 100644
--- a/pandas/tests/test_common.py
+++ b/pandas/tests/test_common.py
@@ -13,6 +13,8 @@
from pandas.util import py3compat
+_multiprocess_can_split_ = True
+
def test_is_sequence():
is_seq=com._is_sequence
assert(is_seq((1,2)))
@@ -246,6 +248,8 @@ def test_pprint_thing():
class TestTake(unittest.TestCase):
+ _multiprocess_can_split_ = True
+
def test_1d_with_out(self):
def _test_dtype(dtype):
out = np.empty(5, dtype=dtype)
diff --git a/pandas/tests/test_config.py b/pandas/tests/test_config.py
index 6b91fa1e78a1c..5c13092181d8a 100644
--- a/pandas/tests/test_config.py
+++ b/pandas/tests/test_config.py
@@ -7,7 +7,7 @@
import nose
class TestConfig(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def __init__(self,*args):
super(TestConfig,self).__init__(*args)
diff --git a/pandas/tests/test_factor.py b/pandas/tests/test_factor.py
index 863eaf19f0f6d..550225318e43f 100644
--- a/pandas/tests/test_factor.py
+++ b/pandas/tests/test_factor.py
@@ -16,7 +16,7 @@
class TestCategorical(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def setUp(self):
self.factor = Categorical.from_array(['a', 'b', 'b', 'a',
'a', 'c', 'c', 'c'])
@@ -115,5 +115,3 @@ def test_na_flags_int_levels(self):
nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
# '--with-coverage', '--cover-package=pandas.core'],
exit=False)
-
-
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index d265074c12f35..9fc8051b2eff5 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -28,7 +28,7 @@ def curpath():
return pth
class TestDataFrameFormatting(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def setUp(self):
self.frame = _frame.copy()
@@ -825,7 +825,7 @@ def test_to_latex(self):
self.frame.to_latex()
class TestSeriesFormatting(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def setUp(self):
self.ts = tm.makeTimeSeries()
@@ -940,7 +940,7 @@ def test_mixed_datetime64(self):
self.assertTrue('2012-01-01' in result)
class TestEngFormatter(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def test_eng_float_formatter(self):
df = DataFrame({'A' : [1.41, 141., 14100, 1410000.]})
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 8518575497174..7c48edba93d5c 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -32,6 +32,8 @@
import pandas.util.testing as tm
import pandas.lib as lib
+from numpy.testing.decorators import slow
+
def _skip_if_no_scipy():
try:
import scipy.stats
@@ -45,6 +47,8 @@ def _skip_if_no_scipy():
class CheckIndexing(object):
+ _multiprocess_can_split_ = True
+
def test_getitem(self):
# slicing
@@ -1370,6 +1374,8 @@ def test_nested_exception(self):
class SafeForSparse(object):
+ _multiprocess_can_split_ = True
+
def test_getitem_pop_assign_name(self):
s = self.frame['A']
self.assertEqual(s.name, 'A')
@@ -1493,6 +1499,8 @@ class TestDataFrame(unittest.TestCase, CheckIndexing,
SafeForSparse):
klass = DataFrame
+ _multiprocess_can_split_ = True
+
def setUp(self):
self.frame = _frame.copy()
self.frame2 = _frame2.copy()
@@ -2868,7 +2876,7 @@ def test_nonzero(self):
del df['A']
self.assertFalse(df.empty)
- def test_repr(self):
+ def test_repr_empty(self):
buf = StringIO()
# empty
@@ -2878,23 +2886,15 @@ def test_repr(self):
frame = DataFrame(index=np.arange(1000))
foo = repr(frame)
- # small one
- foo = repr(self.frame)
- self.frame.info(verbose=False, buf=buf)
-
- # even smaller
- self.frame.reindex(columns=['A']).info(verbose=False, buf=buf)
- self.frame.reindex(columns=['A', 'B']).info(verbose=False, buf=buf)
-
- # big one
- biggie = DataFrame(np.zeros((200, 4)), columns=range(4),
- index=range(200))
- foo = repr(biggie)
+ def test_repr_mixed(self):
+ buf = StringIO()
# mixed
foo = repr(self.mixed_frame)
self.mixed_frame.info(verbose=False, buf=buf)
+ @slow
+ def test_repr_mixed_big(self):
# big mixed
biggie = DataFrame({'A' : randn(200),
'B' : tm.makeStringIndex(200)},
@@ -2904,6 +2904,17 @@ def test_repr(self):
foo = repr(biggie)
+ def test_repr(self):
+ buf = StringIO()
+
+ # small one
+ foo = repr(self.frame)
+ self.frame.info(verbose=False, buf=buf)
+
+ # even smaller
+ self.frame.reindex(columns=['A']).info(verbose=False, buf=buf)
+ self.frame.reindex(columns=['A', 'B']).info(verbose=False, buf=buf)
+
# exhausting cases in DataFrame.info
# columns but no index
@@ -2913,6 +2924,16 @@ def test_repr(self):
# no columns or index
self.empty.info(buf=buf)
+ @slow
+ def test_repr_big(self):
+ buf = StringIO()
+
+ # big one
+ biggie = DataFrame(np.zeros((200, 4)), columns=range(4),
+ index=range(200))
+ foo = repr(biggie)
+
+ def test_repr_unsortable(self):
# columns are not sortable
unsortable = DataFrame({'foo' : [1] * 50,
@@ -3599,7 +3620,7 @@ def test_float_none_comparison(self):
self.assertRaises(TypeError, df.__eq__, None)
def test_to_csv_from_csv(self):
- path = '__tmp__'
+ path = '__tmp_to_csv_from_csv__'
self.frame['A'][:5] = nan
@@ -3663,7 +3684,7 @@ def test_to_csv_from_csv(self):
os.remove(path)
def test_to_csv_multiindex(self):
- path = '__tmp__'
+ path = '__tmp_to_csv_multiindex__'
frame = self.frame
old_index = frame.index
@@ -3713,11 +3734,13 @@ def test_to_csv_multiindex(self):
self.assert_(recons.columns.equals(exp.columns))
self.assert_(len(recons) == 0)
+ os.remove(path)
+
def test_to_csv_float32_nanrep(self):
df = DataFrame(np.random.randn(1, 4).astype(np.float32))
df[1] = np.nan
- pth = '__tmp__.csv'
+ pth = '__tmp_to_csv_float32_nanrep__.csv'
df.to_csv(pth, na_rep=999)
lines = open(pth).readlines()
@@ -3726,7 +3749,7 @@ def test_to_csv_float32_nanrep(self):
def test_to_csv_withcommas(self):
- path = '__tmp__'
+ path = '__tmp_to_csv_withcommas__'
# Commas inside fields should be correctly escaped when saving as CSV.
df = DataFrame({'A':[1,2,3], 'B':['5,6','7,8','9,0']})
@@ -3737,7 +3760,7 @@ def test_to_csv_withcommas(self):
os.remove(path)
def test_to_csv_bug(self):
- path = '__tmp__.csv'
+ path = '__tmp_to_csv_bug__.csv'
f1 = StringIO('a,1.0\nb,2.0')
df = DataFrame.from_csv(f1,header=None)
newdf = DataFrame({'t': df[df.columns[0]]})
@@ -3749,7 +3772,7 @@ def test_to_csv_bug(self):
os.remove(path)
def test_to_csv_unicode(self):
- path = '__tmp__.csv'
+ path = '__tmp_to_csv_unicode__.csv'
df = DataFrame({u'c/\u03c3':[1,2,3]})
df.to_csv(path, encoding='UTF-8')
df2 = pan.read_csv(path, index_col=0, encoding='UTF-8')
@@ -3781,7 +3804,7 @@ def test_to_csv_stringio(self):
assert_frame_equal(recons, self.frame)
def test_to_csv_float_format(self):
- filename = '__tmp__.csv'
+ filename = '__tmp_to_csv_float_format__.csv'
df = DataFrame([[0.123456, 0.234567, 0.567567],
[12.32112, 123123.2, 321321.2]],
index=['A', 'B'], columns=['X', 'Y', 'Z'])
@@ -3860,9 +3883,17 @@ def test_to_csv_line_terminators(self):
self.assertEqual(buf.getvalue(), expected)
+ def test_excel_roundtrip_xls(self):
+ try:
+ import xlwt
+ import xlrd
+ import openpyxl
+ except ImportError:
+ raise nose.SkipTest
+ self._check_extension('xls')
- def test_to_excel_from_excel(self):
+ def test_excel_roundtrip_xlsx(self):
try:
import xlwt
import xlrd
@@ -3870,124 +3901,332 @@ def test_to_excel_from_excel(self):
except ImportError:
raise nose.SkipTest
- for ext in ['xls', 'xlsx']:
- path = '__tmp__.' + ext
+ self._check_extension('xlsx')
- self.frame['A'][:5] = nan
+ def _check_extension(self, ext):
+ path = '__tmp_to_excel_from_excel__.' + ext
- self.frame.to_excel(path,'test1')
- self.frame.to_excel(path,'test1', cols=['A', 'B'])
- self.frame.to_excel(path,'test1', header=False)
- self.frame.to_excel(path,'test1', index=False)
+ self.frame['A'][:5] = nan
- # test roundtrip
- self.frame.to_excel(path,'test1')
- reader = ExcelFile(path)
- recons = reader.parse('test1', index_col=0, has_index_names=True)
- assert_frame_equal(self.frame, recons)
+ self.frame.to_excel(path,'test1')
+ self.frame.to_excel(path,'test1', cols=['A', 'B'])
+ self.frame.to_excel(path,'test1', header=False)
+ self.frame.to_excel(path,'test1', index=False)
- self.frame.to_excel(path,'test1', index=False)
- reader = ExcelFile(path)
- recons = reader.parse('test1', index_col=None)
- recons.index = self.frame.index
- assert_frame_equal(self.frame, recons)
+ # test roundtrip
+ self.frame.to_excel(path,'test1')
+ reader = ExcelFile(path)
+ recons = reader.parse('test1', index_col=0, has_index_names=True)
+ assert_frame_equal(self.frame, recons)
- # self.frame.to_excel(path,'test1')
- # reader = ExcelFile(path)
- # recons = reader.parse('test1', index_col=0, skiprows=[2], has_index_names=True)
- # assert_frame_equal(self.frame.ix[1:], recons)
+ self.frame.to_excel(path,'test1', index=False)
+ reader = ExcelFile(path)
+ recons = reader.parse('test1', index_col=None)
+ recons.index = self.frame.index
+ assert_frame_equal(self.frame, recons)
- self.frame.to_excel(path,'test1',na_rep='NA')
- reader = ExcelFile(path)
- recons = reader.parse('test1', index_col=0, na_values=['NA'], has_index_names=True)
- assert_frame_equal(self.frame, recons)
+ self.frame.to_excel(path,'test1',na_rep='NA')
+ reader = ExcelFile(path)
+ recons = reader.parse('test1', index_col=0, na_values=['NA'],
+ has_index_names=True)
+ assert_frame_equal(self.frame, recons)
- self.mixed_frame.to_excel(path,'test1')
- reader = ExcelFile(path)
- recons = reader.parse('test1', index_col=0, has_index_names=True)
- assert_frame_equal(self.mixed_frame, recons)
+ os.remove(path)
- self.tsframe.to_excel(path, 'test1')
- reader = ExcelFile(path)
- recons = reader.parse('test1')
- assert_frame_equal(self.tsframe, recons)
+ def test_excel_roundtrip_xls_mixed(self):
+ try:
+ import xlwt
+ import xlrd
+ except ImportError:
+ raise nose.SkipTest
- #Test np.int64, values read come back as float
- frame = DataFrame(np.random.randint(-10,10,size=(10,2)))
- frame.to_excel(path,'test1')
- reader = ExcelFile(path)
- recons = reader.parse('test1').astype(np.int64)
- assert_frame_equal(frame, recons)
+ self._check_extension_mixed('xls')
- #Test reading/writing np.bool8, roundtrip only works for xlsx
- frame = (DataFrame(np.random.randn(10,2)) >= 0)
- frame.to_excel(path,'test1')
- reader = ExcelFile(path)
- recons = reader.parse('test1').astype(np.bool8)
- assert_frame_equal(frame, recons)
-
- # Test writing to separate sheets
- writer = ExcelWriter(path)
- self.frame.to_excel(writer,'test1')
- self.tsframe.to_excel(writer,'test2')
- writer.save()
- reader = ExcelFile(path)
- recons = reader.parse('test1',index_col=0, has_index_names=True)
- assert_frame_equal(self.frame, recons)
- recons = reader.parse('test2',index_col=0)
- assert_frame_equal(self.tsframe, recons)
- np.testing.assert_equal(2, len(reader.sheet_names))
- np.testing.assert_equal('test1', reader.sheet_names[0])
- np.testing.assert_equal('test2', reader.sheet_names[1])
-
- # column aliases
- col_aliases = Index(['AA', 'X', 'Y', 'Z'])
- self.frame2.to_excel(path, 'test1', header=col_aliases)
- reader = ExcelFile(path)
- rs = reader.parse('test1', index_col=0, has_index_names=True)
- xp = self.frame2.copy()
- xp.columns = col_aliases
- assert_frame_equal(xp, rs)
-
- # test index_label
- frame = (DataFrame(np.random.randn(10,2)) >= 0)
- frame.to_excel(path, 'test1', index_label=['test'])
- reader = ExcelFile(path)
- recons = reader.parse('test1', index_col=0, has_index_names=True).astype(np.int64)
- frame.index.names = ['test']
- self.assertEqual(frame.index.names, recons.index.names)
+ def test_excel_roundtrip_xlsx_mixed(self):
+ try:
+ import openpyxl
+ except ImportError:
+ raise nose.SkipTest
- frame = (DataFrame(np.random.randn(10,2)) >= 0)
- frame.to_excel(path, 'test1', index_label=['test', 'dummy', 'dummy2'])
- reader = ExcelFile(path)
- recons = reader.parse('test1', index_col=0, has_index_names=True).astype(np.int64)
- frame.index.names = ['test']
- self.assertEqual(frame.index.names, recons.index.names)
+ self._check_extension_mixed('xlsx')
- frame = (DataFrame(np.random.randn(10,2)) >= 0)
- frame.to_excel(path, 'test1', index_label='test')
- reader = ExcelFile(path)
- recons = reader.parse('test1', index_col=0, has_index_names=True).astype(np.int64)
- frame.index.names = ['test']
- self.assertEqual(frame.index.names, recons.index.names)
+ def _check_extension_mixed(self, ext):
+ path = '__tmp_to_excel_from_excel_mixed__.' + ext
- #test index_labels in same row as column names
- self.frame.to_excel('/tmp/tests.xls', 'test1', cols=['A', 'B', 'C', 'D'], index=False)
- #take 'A' and 'B' as indexes (they are in same row as cols 'C', 'D')
- df = self.frame.copy()
- df = df.set_index(['A', 'B'])
+ self.mixed_frame.to_excel(path,'test1')
+ reader = ExcelFile(path)
+ recons = reader.parse('test1', index_col=0, has_index_names=True)
+ assert_frame_equal(self.mixed_frame, recons)
+ os.remove(path)
- reader = ExcelFile('/tmp/tests.xls')
- recons = reader.parse('test1', index_col=[0, 1])
- assert_frame_equal(df, recons)
+ def test_excel_roundtrip_xls_tsframe(self):
+ try:
+ import xlwt
+ import xlrd
+ except ImportError:
+ raise nose.SkipTest
+ self._check_extension_tsframe('xls')
+ def test_excel_roundtrip_xlsx_tsframe(self):
+ try:
+ import openpyxl
+ except ImportError:
+ raise nose.SkipTest
- os.remove(path)
+ self._check_extension_tsframe('xlsx')
+
+ def _check_extension_tsframe(self, ext):
+ path = '__tmp_to_excel_from_excel_tsframe__.' + ext
+
+ self.tsframe.to_excel(path, 'test1')
+ reader = ExcelFile(path)
+ recons = reader.parse('test1')
+ assert_frame_equal(self.tsframe, recons)
+
+ os.remove(path)
+
+ def test_excel_roundtrip_xls_int64(self):
+ try:
+ import xlwt
+ import xlrd
+ import openpyxl
+ except ImportError:
+ raise nose.SkipTest
+
+ self._check_extension_int64('xls')
+
+ def test_excel_roundtrip_xlsx_int64(self):
+ try:
+ import xlwt
+ import xlrd
+ import openpyxl
+ except ImportError:
+ raise nose.SkipTest
+
+ self._check_extension_int64('xlsx')
+
+ def _check_extension_int64(self, ext):
+ path = '__tmp_to_excel_from_excel_int64__.' + ext
+
+ self.frame['A'][:5] = nan
+
+ self.frame.to_excel(path,'test1')
+ self.frame.to_excel(path,'test1', cols=['A', 'B'])
+ self.frame.to_excel(path,'test1', header=False)
+ self.frame.to_excel(path,'test1', index=False)
+
+ #Test np.int64, values read come back as float
+ frame = DataFrame(np.random.randint(-10,10,size=(10,2)))
+ frame.to_excel(path,'test1')
+ reader = ExcelFile(path)
+ recons = reader.parse('test1').astype(np.int64)
+ assert_frame_equal(frame, recons)
+
+ os.remove(path)
+
+ def test_excel_roundtrip_xls_bool(self):
+ try:
+ import xlwt
+ import xlrd
+ import openpyxl
+ except ImportError:
+ raise nose.SkipTest
+
+ self._check_extension_bool('xls')
+
+ def test_excel_roundtrip_xlsx_bool(self):
+ try:
+ import xlwt
+ import xlrd
+ import openpyxl
+ except ImportError:
+ raise nose.SkipTest
+
+ self._check_extension_bool('xlsx')
+
+
+ def _check_extension_bool(self, ext):
+ path = '__tmp_to_excel_from_excel_bool__.' + ext
+
+ self.frame['A'][:5] = nan
+
+ self.frame.to_excel(path,'test1')
+ self.frame.to_excel(path,'test1', cols=['A', 'B'])
+ self.frame.to_excel(path,'test1', header=False)
+ self.frame.to_excel(path,'test1', index=False)
+
+ #Test reading/writing np.bool8, roundtrip only works for xlsx
+ frame = (DataFrame(np.random.randn(10,2)) >= 0)
+ frame.to_excel(path,'test1')
+ reader = ExcelFile(path)
+ recons = reader.parse('test1').astype(np.bool8)
+ assert_frame_equal(frame, recons)
+
+ os.remove(path)
+
+ def test_excel_roundtrip_xls_sheets(self):
+ try:
+ import xlwt
+ import xlrd
+ import openpyxl
+ except ImportError:
+ raise nose.SkipTest
+
+ self._check_extension_sheets('xls')
+
+ def test_excel_roundtrip_xlsx_sheets(self):
+ try:
+ import xlwt
+ import xlrd
+ import openpyxl
+ except ImportError:
+ raise nose.SkipTest
+
+ self._check_extension_sheets('xlsx')
+
+
+ def _check_extension_sheets(self, ext):
+ path = '__tmp_to_excel_from_excel_sheets__.' + ext
+
+ self.frame['A'][:5] = nan
+
+ self.frame.to_excel(path,'test1')
+ self.frame.to_excel(path,'test1', cols=['A', 'B'])
+ self.frame.to_excel(path,'test1', header=False)
+ self.frame.to_excel(path,'test1', index=False)
+
+ # Test writing to separate sheets
+ writer = ExcelWriter(path)
+ self.frame.to_excel(writer,'test1')
+ self.tsframe.to_excel(writer,'test2')
+ writer.save()
+ reader = ExcelFile(path)
+ recons = reader.parse('test1',index_col=0, has_index_names=True)
+ assert_frame_equal(self.frame, recons)
+ recons = reader.parse('test2',index_col=0)
+ assert_frame_equal(self.tsframe, recons)
+ np.testing.assert_equal(2, len(reader.sheet_names))
+ np.testing.assert_equal('test1', reader.sheet_names[0])
+ np.testing.assert_equal('test2', reader.sheet_names[1])
+
+ os.remove(path)
+
+ def test_excel_roundtrip_xls_colaliases(self):
+ try:
+ import xlwt
+ import xlrd
+ import openpyxl
+ except ImportError:
+ raise nose.SkipTest
+
+ self._check_extension_colaliases('xls')
+
+ def test_excel_roundtrip_xlsx_colaliases(self):
+ try:
+ import xlwt
+ import xlrd
+ import openpyxl
+ except ImportError:
+ raise nose.SkipTest
+
+ self._check_extension_colaliases('xlsx')
+
+ def _check_extension_colaliases(self, ext):
+ path = '__tmp_to_excel_from_excel_aliases__.' + ext
+
+ self.frame['A'][:5] = nan
+
+ self.frame.to_excel(path,'test1')
+ self.frame.to_excel(path,'test1', cols=['A', 'B'])
+ self.frame.to_excel(path,'test1', header=False)
+ self.frame.to_excel(path,'test1', index=False)
+
+ # column aliases
+ col_aliases = Index(['AA', 'X', 'Y', 'Z'])
+ self.frame2.to_excel(path, 'test1', header=col_aliases)
+ reader = ExcelFile(path)
+ rs = reader.parse('test1', index_col=0, has_index_names=True)
+ xp = self.frame2.copy()
+ xp.columns = col_aliases
+ assert_frame_equal(xp, rs)
+
+ os.remove(path)
+
+ def test_excel_roundtrip_xls_indexlabels(self):
+ try:
+ import xlwt
+ import xlrd
+ import openpyxl
+ except ImportError:
+ raise nose.SkipTest
+
+ self._check_extension_indexlabels('xls')
+
+ def test_excel_roundtrip_xlsx_indexlabels(self):
+ try:
+ import xlwt
+ import xlrd
+ import openpyxl
+ except ImportError:
+ raise nose.SkipTest
+
+ self._check_extension_indexlabels('xlsx')
+
+ def _check_extension_indexlabels(self, ext):
+ path = '__tmp_to_excel_from_excel_indexlabels__.' + ext
+
+ self.frame['A'][:5] = nan
+
+ self.frame.to_excel(path,'test1')
+ self.frame.to_excel(path,'test1', cols=['A', 'B'])
+ self.frame.to_excel(path,'test1', header=False)
+ self.frame.to_excel(path,'test1', index=False)
+
+ # test index_label
+ frame = (DataFrame(np.random.randn(10,2)) >= 0)
+ frame.to_excel(path, 'test1', index_label=['test'])
+ reader = ExcelFile(path)
+ recons = reader.parse('test1', index_col=0, has_index_names=True).astype(np.int64)
+ frame.index.names = ['test']
+ self.assertEqual(frame.index.names, recons.index.names)
+
+ frame = (DataFrame(np.random.randn(10,2)) >= 0)
+ frame.to_excel(path, 'test1', index_label=['test', 'dummy', 'dummy2'])
+ reader = ExcelFile(path)
+ recons = reader.parse('test1', index_col=0, has_index_names=True).astype(np.int64)
+ frame.index.names = ['test']
+ self.assertEqual(frame.index.names, recons.index.names)
+
+ frame = (DataFrame(np.random.randn(10,2)) >= 0)
+ frame.to_excel(path, 'test1', index_label='test')
+ reader = ExcelFile(path)
+ recons = reader.parse('test1', index_col=0, has_index_names=True).astype(np.int64)
+ frame.index.names = ['test']
+ self.assertEqual(frame.index.names, recons.index.names)
+
+ #test index_labels in same row as column names
+ self.frame.to_excel('/tmp/tests.xls', 'test1', cols=['A', 'B', 'C', 'D'], index=False)
+ #take 'A' and 'B' as indexes (they are in same row as cols 'C', 'D')
+ df = self.frame.copy()
+ df = df.set_index(['A', 'B'])
+
+ reader = ExcelFile('/tmp/tests.xls')
+ recons = reader.parse('test1', index_col=[0, 1])
+ assert_frame_equal(df, recons)
+
+ os.remove(path)
+
+ def test_excel_roundtrip_datetime(self):
+ try:
+ import xlwt
+ import xlrd
+ except ImportError:
+ raise nose.SkipTest
# datetime.date, not sure what to test here exactly
- path = '__tmp__.xls'
+ path = '__tmp_excel_roundtrip_datetime__.xls'
tsf = self.tsframe.copy()
tsf.index = [x.date() for x in self.tsframe.index]
tsf.to_excel(path, 'test1')
@@ -3996,8 +4235,14 @@ def test_to_excel_from_excel(self):
assert_frame_equal(self.tsframe, recons)
os.remove(path)
+ def test_excel_roundtrip_bool(self):
+ try:
+ import openpyxl
+ except ImportError:
+ raise nose.SkipTest
+
#Test roundtrip np.bool8, does not seem to work for xls
- path = '__tmp__.xlsx'
+ path = '__tmp_excel_roundtrip_bool__.xlsx'
frame = (DataFrame(np.random.randn(10,2)) >= 0)
frame.to_excel(path,'test1')
reader = ExcelFile(path)
@@ -4014,7 +4259,7 @@ def test_to_excel_periodindex(self):
raise nose.SkipTest
for ext in ['xls', 'xlsx']:
- path = '__tmp__.' + ext
+ path = '__tmp_to_excel_periodindex__.' + ext
frame = self.tsframe
xp = frame.resample('M', kind='period')
xp.to_excel(path, 'sht1')
@@ -4028,76 +4273,81 @@ def test_to_excel_multiindex(self):
try:
import xlwt
import xlrd
+ except ImportError:
+ raise nose.SkipTest
+
+ self._check_excel_multiindex('xls')
+
+ def test_to_excel_multiindex_xlsx(self):
+ try:
import openpyxl
except ImportError:
raise nose.SkipTest
- for ext in ['xls', 'xlsx']:
- path = '__tmp__.' + ext
+ self._check_excel_multiindex('xlsx')
- frame = self.frame
- old_index = frame.index
- arrays = np.arange(len(old_index)*2).reshape(2,-1)
- new_index = MultiIndex.from_arrays(arrays,
- names=['first', 'second'])
- frame.index = new_index
- frame.to_excel(path, 'test1', header=False)
- frame.to_excel(path, 'test1', cols=['A', 'B'])
-
- # round trip
- frame.to_excel(path, 'test1')
- reader = ExcelFile(path)
- df = reader.parse('test1', index_col=[0,1], parse_dates=False, has_index_names=True)
- assert_frame_equal(frame, df)
- self.assertEqual(frame.index.names, df.index.names)
- self.frame.index = old_index # needed if setUP becomes a classmethod
-
- # try multiindex with dates
- tsframe = self.tsframe
- old_index = tsframe.index
- new_index = [old_index, np.arange(len(old_index))]
- tsframe.index = MultiIndex.from_arrays(new_index)
-
- tsframe.to_excel(path, 'test1', index_label = ['time','foo'])
- reader = ExcelFile(path)
- recons = reader.parse('test1', index_col=[0,1], has_index_names=True)
- assert_frame_equal(tsframe, recons)
+ def _check_excel_multiindex(self, ext):
+ path = '__tmp_to_excel_multiindex__' + ext + '__.'+ext
- # infer index
- tsframe.to_excel(path, 'test1')
- reader = ExcelFile(path)
- recons = reader.parse('test1')
- assert_frame_equal(tsframe, recons)
+ frame = self.frame
+ old_index = frame.index
+ arrays = np.arange(len(old_index)*2).reshape(2,-1)
+ new_index = MultiIndex.from_arrays(arrays,
+ names=['first', 'second'])
+ frame.index = new_index
+ frame.to_excel(path, 'test1', header=False)
+ frame.to_excel(path, 'test1', cols=['A', 'B'])
+ # round trip
+ frame.to_excel(path, 'test1')
+ reader = ExcelFile(path)
+ df = reader.parse('test1', index_col=[0,1], parse_dates=False, has_index_names=True)
+ assert_frame_equal(frame, df)
+ self.assertEqual(frame.index.names, df.index.names)
+ self.frame.index = old_index # needed if setUp becomes a classmethod
- # no index
- #TODO : mention this does not make sence anymore
- #with the new formatting as we are not alligning colnames and indexlabels
- #on the same row
+ os.remove(path)
- # tsframe.index.names = ['first', 'second']
- # tsframe.to_excel(path, 'test1')
- # reader = ExcelFile(path)
- # recons = reader.parse('test1')
- # assert_almost_equal(tsframe.values,
- # recons.ix[:, tsframe.columns].values)
- # self.assertEqual(len(tsframe.columns) + 2, len(recons.columns))
+ def test_to_excel_multiindex_dates(self):
+ try:
+ import xlwt
+ import xlrd
+ except ImportError:
+ raise nose.SkipTest
- # tsframe.index.names = [None, None]
+ self._check_excel_multiindex_dates('xls')
- # # no index
- # tsframe.to_excel(path, 'test1', index=False)
- # reader = ExcelFile(path)
- # recons = reader.parse('test1', index_col=None)
- # assert_almost_equal(recons.values, self.tsframe.values)
+ def test_to_excel_multiindex_xlsx_dates(self):
+ try:
+ import openpyxl
+ except ImportError:
+ raise nose.SkipTest
- self.tsframe.index = old_index # needed if setUP becomes classmethod
+ self._check_excel_multiindex_dates('xlsx')
- # write a big DataFrame
- df = DataFrame(np.random.randn(1005, 1))
- df.to_excel(path, 'test1')
+ def _check_excel_multiindex_dates(self, ext):
+ path = '__tmp_to_excel_multiindex_dates__' + ext + '__.' + ext
- os.remove(path)
+ # try multiindex with dates
+ tsframe = self.tsframe
+ old_index = tsframe.index
+ new_index = [old_index, np.arange(len(old_index))]
+ tsframe.index = MultiIndex.from_arrays(new_index)
+
+ tsframe.to_excel(path, 'test1', index_label = ['time','foo'])
+ reader = ExcelFile(path)
+ recons = reader.parse('test1', index_col=[0,1], has_index_names=True)
+ assert_frame_equal(tsframe, recons)
+
+ # infer index
+ tsframe.to_excel(path, 'test1')
+ reader = ExcelFile(path)
+ recons = reader.parse('test1')
+ assert_frame_equal(tsframe, recons)
+
+ self.tsframe.index = old_index # needed if setUp becomes classmethod
+
+ os.remove(path)
def test_to_excel_float_format(self):
try:
@@ -4108,7 +4358,7 @@ def test_to_excel_float_format(self):
raise nose.SkipTest
for ext in ['xls', 'xlsx']:
- filename = '__tmp__.' + ext
+ filename = '__tmp_to_excel_float_format__.' + ext
df = DataFrame([[0.123456, 0.234567, 0.567567],
[12.32112, 123123.2, 321321.2]],
index=['A', 'B'], columns=['X', 'Y', 'Z'])
@@ -4215,7 +4465,7 @@ def test_to_excel_header_styling_xls(self):
except ImportError:
raise nose.SkipTest
- filename = '__tmp__.xls'
+ filename = '__tmp_to_excel_header_styling_xls__.xls'
pdf.to_excel(filename, 'test1')
@@ -4271,7 +4521,7 @@ def test_to_excel_header_styling_xlsx(self):
raise nose.SkipTest
# test xlsx_styling
- filename = '__tmp__.xlsx'
+ filename = '__tmp_to_excel_header_styling_xlsx__.xlsx'
pdf.to_excel(filename, 'test1')
wbk = openpyxl.load_workbook(filename)
@@ -4485,30 +4735,50 @@ def test_copy(self):
# self.assert_(self.frame.columns.name is None)
- def test_corr(self):
+ def _check_method(self, method='pearson', check_minp=False):
+ if not check_minp:
+ correls = self.frame.corr(method=method)
+ exp = self.frame['A'].corr(self.frame['C'], method=method)
+ assert_almost_equal(correls['A']['C'], exp)
+ else:
+ result = self.frame.corr(min_periods=len(self.frame) - 8)
+ expected = self.frame.corr()
+ expected.ix['A', 'B'] = expected.ix['B', 'A'] = nan
+
+ def test_corr_pearson(self):
_skip_if_no_scipy()
self.frame['A'][:5] = nan
self.frame['B'][5:10] = nan
- def _check_method(method='pearson', check_minp=False):
- if not check_minp:
- correls = self.frame.corr(method=method)
- exp = self.frame['A'].corr(self.frame['C'], method=method)
- assert_almost_equal(correls['A']['C'], exp)
- else:
- result = self.frame.corr(min_periods=len(self.frame) - 8)
- expected = self.frame.corr()
- expected.ix['A', 'B'] = expected.ix['B', 'A'] = nan
+ self._check_method('pearson')
+
+ def test_corr_kendall(self):
+ _skip_if_no_scipy()
+ self.frame['A'][:5] = nan
+ self.frame['B'][5:10] = nan
+
+ self._check_method('kendall')
- _check_method('pearson')
- _check_method('kendall')
- _check_method('spearman')
+ def test_corr_spearman(self):
+ _skip_if_no_scipy()
+ self.frame['A'][:5] = nan
+ self.frame['B'][5:10] = nan
+
+ self._check_method('spearman')
+
+ def test_corr_non_numeric(self):
+ _skip_if_no_scipy()
+ self.frame['A'][:5] = nan
+ self.frame['B'][5:10] = nan
# exclude non-numeric types
result = self.mixed_frame.corr()
expected = self.mixed_frame.ix[:, ['A', 'B', 'C', 'D']].corr()
assert_frame_equal(result, expected)
+ def test_corr_nooverlap(self):
+ _skip_if_no_scipy()
+
# nothing in common
for meth in ['pearson', 'kendall', 'spearman']:
df = DataFrame({'A': [1, 1.5, 1, np.nan, np.nan, np.nan],
@@ -4519,6 +4789,9 @@ def _check_method(method='pearson', check_minp=False):
self.assert_(rs.ix['A', 'A'] == 1)
self.assert_(rs.ix['B', 'B'] == 1)
+ def test_corr_constant(self):
+ _skip_if_no_scipy()
+
# constant --> all NA
for meth in ['pearson', 'spearman']:
@@ -4527,6 +4800,7 @@ def _check_method(method='pearson', check_minp=False):
rs = df.corr(meth)
self.assert_(isnull(rs.values).all())
+ def test_corr_int(self):
# dtypes other than float64 #1761
df3 = DataFrame({"a":[1,2,3,4], "b":[1,2,3,4]})
@@ -5545,61 +5819,80 @@ def test_align(self):
self.assertRaises(ValueError, self.frame.align, af.ix[0,:3],
join='inner', axis=2)
- def test_align_fill_method(self):
- def _check_align(a, b, axis, fill_axis, how, method, limit=None):
- aa, ab = a.align(b, axis=axis, join=how, method=method, limit=limit,
- fill_axis=fill_axis)
-
- join_index, join_columns = None, None
-
- ea, eb = a, b
- if axis is None or axis == 0:
- join_index = a.index.join(b.index, how=how)
- ea = ea.reindex(index=join_index)
- eb = eb.reindex(index=join_index)
-
- if axis is None or axis == 1:
- join_columns = a.columns.join(b.columns, how=how)
- ea = ea.reindex(columns=join_columns)
- eb = eb.reindex(columns=join_columns)
-
- ea = ea.fillna(axis=fill_axis, method=method, limit=limit)
- eb = eb.fillna(axis=fill_axis, method=method, limit=limit)
-
- assert_frame_equal(aa, ea)
- assert_frame_equal(ab, eb)
-
- for kind in JOIN_TYPES:
- for meth in ['pad', 'bfill']:
- for ax in [0, 1, None]:
- for fax in [0, 1]:
- left = self.frame.ix[0:4, :10]
- right = self.frame.ix[2:, 6:]
- empty = self.frame.ix[:0, :0]
-
- _check_align(left, right, axis=ax, fill_axis=fax,
- how=kind, method=meth)
- _check_align(left, right, axis=ax, fill_axis=fax,
- how=kind, method=meth, limit=1)
-
- # empty left
- _check_align(empty, right, axis=ax, fill_axis=fax,
- how=kind, method=meth)
- _check_align(empty, right, axis=ax, fill_axis=fax,
- how=kind, method=meth, limit=1)
-
-
- # empty right
- _check_align(left, empty, axis=ax, fill_axis=fax,
- how=kind, method=meth)
- _check_align(left, empty, axis=ax, fill_axis=fax,
- how=kind, method=meth, limit=1)
-
- # both empty
- _check_align(empty, empty, axis=ax, fill_axis=fax,
- how=kind, method=meth)
- _check_align(empty, empty, axis=ax, fill_axis=fax,
- how=kind, method=meth, limit=1)
+ def _check_align(self, a, b, axis, fill_axis, how, method, limit=None):
+ aa, ab = a.align(b, axis=axis, join=how, method=method, limit=limit,
+ fill_axis=fill_axis)
+
+ join_index, join_columns = None, None
+
+ ea, eb = a, b
+ if axis is None or axis == 0:
+ join_index = a.index.join(b.index, how=how)
+ ea = ea.reindex(index=join_index)
+ eb = eb.reindex(index=join_index)
+
+ if axis is None or axis == 1:
+ join_columns = a.columns.join(b.columns, how=how)
+ ea = ea.reindex(columns=join_columns)
+ eb = eb.reindex(columns=join_columns)
+
+ ea = ea.fillna(axis=fill_axis, method=method, limit=limit)
+ eb = eb.fillna(axis=fill_axis, method=method, limit=limit)
+
+ assert_frame_equal(aa, ea)
+ assert_frame_equal(ab, eb)
+
+ def test_align_fill_method_inner(self):
+ for meth in ['pad', 'bfill']:
+ for ax in [0, 1, None]:
+ for fax in [0, 1]:
+ self._check_align_fill('inner', meth, ax, fax)
+
+ def test_align_fill_method_outer(self):
+ for meth in ['pad', 'bfill']:
+ for ax in [0, 1, None]:
+ for fax in [0, 1]:
+ self._check_align_fill('outer', meth, ax, fax)
+
+ def test_align_fill_method_left(self):
+ for meth in ['pad', 'bfill']:
+ for ax in [0, 1, None]:
+ for fax in [0, 1]:
+ self._check_align_fill('left', meth, ax, fax)
+
+ def test_align_fill_method_right(self):
+ for meth in ['pad', 'bfill']:
+ for ax in [0, 1, None]:
+ for fax in [0, 1]:
+ self._check_align_fill('right', meth, ax, fax)
+
+ def _check_align_fill(self, kind, meth, ax, fax):
+ left = self.frame.ix[0:4, :10]
+ right = self.frame.ix[2:, 6:]
+ empty = self.frame.ix[:0, :0]
+
+ self._check_align(left, right, axis=ax, fill_axis=fax,
+ how=kind, method=meth)
+ self._check_align(left, right, axis=ax, fill_axis=fax,
+ how=kind, method=meth, limit=1)
+
+ # empty left
+ self._check_align(empty, right, axis=ax, fill_axis=fax,
+ how=kind, method=meth)
+ self._check_align(empty, right, axis=ax, fill_axis=fax,
+ how=kind, method=meth, limit=1)
+
+ # empty right
+ self._check_align(left, empty, axis=ax, fill_axis=fax,
+ how=kind, method=meth)
+ self._check_align(left, empty, axis=ax, fill_axis=fax,
+ how=kind, method=meth, limit=1)
+
+ # both empty
+ self._check_align(empty, empty, axis=ax, fill_axis=fax,
+ how=kind, method=meth)
+ self._check_align(empty, empty, axis=ax, fill_axis=fax,
+ how=kind, method=meth, limit=1)
def test_align_int_fill_bug(self):
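The `_check_align` helper refactored above asserts that `align` with a fill method is equivalent to joining the indexes first and then filling. A minimal sketch of that equivalence in plain pandas (hypothetical frames; modern pandas spells the fill step as `ffill()` rather than `fillna(method='pad')`):

```python
import pandas as pd

# Two hypothetical frames with partially overlapping row indexes.
left = pd.DataFrame({'A': [1.0, 2.0]}, index=[0, 1])
right = pd.DataFrame({'A': [10.0, 20.0]}, index=[1, 2])

# Align on the union of the indexes (join='outer')...
la, ra = left.align(right, join='outer', axis=0)

# ...then forward-fill the gaps, mirroring what _check_align
# expects from align(..., method='pad').
la, ra = la.ffill(), ra.ffill()
```

After this, `la` holds `[1.0, 2.0, 2.0]` (the last value padded forward) and `ra` still leads with NaN, since there is nothing before index 1 to pad from.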
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index 1a7e38b4f44db..5015d833a16e5 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -43,6 +43,8 @@ def commonSetUp(self):
class TestGroupBy(unittest.TestCase):
+ _multiprocess_can_split_ = True
+
def setUp(self):
self.ts = tm.makeTimeSeries()
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index 8e65b9362905b..aece984454c13 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -25,7 +25,7 @@
from pandas.lib import Timestamp
class TestIndex(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def setUp(self):
self.unicodeIndex = tm.makeUnicodeIndex(100)
self.strIndex = tm.makeStringIndex(100)
@@ -509,7 +509,7 @@ def test_boolean_cmp(self):
self.assert_(not isinstance(res, Index))
class TestInt64Index(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def setUp(self):
self.index = Int64Index(np.arange(0, 20, 2))
@@ -872,7 +872,7 @@ def test_bytestring_with_unicode(self):
str(idx)
class TestMultiIndex(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def setUp(self):
major_axis = Index(['foo', 'bar', 'baz', 'qux'])
minor_axis = Index(['one', 'two'])
diff --git a/pandas/tests/test_internals.py b/pandas/tests/test_internals.py
index 31ffcc5832758..6078201d692df 100644
--- a/pandas/tests/test_internals.py
+++ b/pandas/tests/test_internals.py
@@ -55,6 +55,8 @@ def get_dt_ex(cols=['h']):
class TestBlock(unittest.TestCase):
+ _multiprocess_can_split_ = True
+
def setUp(self):
self.fblock = get_float_ex()
self.cblock = get_complex_ex()
@@ -194,6 +196,8 @@ def test_repr(self):
class TestBlockManager(unittest.TestCase):
+ _multiprocess_can_split_ = True
+
def setUp(self):
self.blocks = [get_float_ex(),
get_obj_ex(),
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index 9bef76a68156e..37bc893bc71dc 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -19,6 +19,8 @@
class TestMultiLevel(unittest.TestCase):
+ _multiprocess_can_split_ = True
+
def setUp(self):
index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
['one', 'two', 'three']],
diff --git a/pandas/tests/test_ndframe.py b/pandas/tests/test_ndframe.py
index 70a5d79d2c428..c1fd16f5bc0b0 100644
--- a/pandas/tests/test_ndframe.py
+++ b/pandas/tests/test_ndframe.py
@@ -7,6 +7,8 @@
class TestNDFrame(unittest.TestCase):
+ _multiprocess_can_split_ = True
+
def setUp(self):
tdf = t.makeTimeDataFrame()
self.ndf = NDFrame(tdf._data)
@@ -27,4 +29,3 @@ def test_astype(self):
import nose
nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
exit=False)
-
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index 82c6ea65d133a..eb4894ec5173e 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -36,7 +36,7 @@ def test_cumsum(self):
assert_frame_equal(cumsum['ItemA'], self.panel['ItemA'].cumsum())
class SafeForLongAndSparse(object):
-
+ _multiprocess_can_split_ = True
def test_repr(self):
foo = repr(self.panel)
@@ -143,7 +143,7 @@ def wrapper(x):
self.assertRaises(Exception, f, axis=obj.ndim)
class SafeForSparse(object):
-
+ _multiprocess_can_split_ = True
@classmethod
def assert_panel_equal(cls, x, y):
assert_panel_equal(x, y)
@@ -346,7 +346,7 @@ def test_abs(self):
class CheckIndexing(object):
-
+ _multiprocess_can_split_ = True
def test_getitem(self):
self.assertRaises(Exception, self.panel.__getitem__, 'ItemQ')
@@ -662,7 +662,7 @@ def test_set_value(self):
class TestPanel(unittest.TestCase, PanelTests, CheckIndexing,
SafeForLongAndSparse,
SafeForSparse):
-
+ _multiprocess_can_split_ = True
@classmethod
def assert_panel_equal(cls,x, y):
assert_panel_equal(x, y)
@@ -1344,7 +1344,7 @@ class TestLongPanel(unittest.TestCase):
"""
LongPanel no longer exists, but...
"""
-
+ _multiprocess_can_split_ = True
def setUp(self):
panel = tm.makePanel()
tm.add_nans(panel)
diff --git a/pandas/tests/test_reshape.py b/pandas/tests/test_reshape.py
index 3cb600ff69513..ee6bf7bb078fb 100644
--- a/pandas/tests/test_reshape.py
+++ b/pandas/tests/test_reshape.py
@@ -17,6 +17,8 @@
from pandas.core.reshape import melt, convert_dummies, lreshape
import pandas.util.testing as tm
+_multiprocess_can_split_ = True
+
def test_melt():
df = tm.makeTimeDataFrame()[:10]
df['id1'] = (df['A'] > 0).astype(int)
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index ac690c651249b..3494d5b130fab 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -40,6 +40,8 @@ def _skip_if_no_scipy():
class CheckNameIntegration(object):
+ _multiprocess_can_split_ = True
+
def test_scalarop_preserve_name(self):
result = self.ts * 2
self.assertEquals(result.name, self.ts.name)
@@ -150,13 +152,13 @@ def test_name_printing(self):
self.assert_(not "Name:" in repr(s))
def test_pickle_preserve_name(self):
- unpickled = self._pickle_roundtrip(self.ts)
+ unpickled = self._pickle_roundtrip_name(self.ts)
self.assertEquals(unpickled.name, self.ts.name)
- def _pickle_roundtrip(self, obj):
- obj.save('__tmp__')
- unpickled = Series.load('__tmp__')
- os.remove('__tmp__')
+ def _pickle_roundtrip_name(self, obj):
+ obj.save('__tmp_name__')
+ unpickled = Series.load('__tmp_name__')
+ os.remove('__tmp_name__')
return unpickled
def test_argsort_preserve_name(self):
@@ -173,6 +175,8 @@ def test_to_sparse_pass_name(self):
class TestNanops(unittest.TestCase):
+ _multiprocess_can_split_ = True
+
def test_comparisons(self):
left = np.random.randn(10)
right = np.random.randn(10)
@@ -220,6 +224,8 @@ class SafeForSparse(object):
class TestSeries(unittest.TestCase, CheckNameIntegration):
+ _multiprocess_can_split_ = True
+
def setUp(self):
self.ts = tm.makeTimeSeries()
self.ts.name = 'ts'
@@ -492,9 +498,9 @@ def test_pickle(self):
assert_series_equal(unp_ts, self.ts)
def _pickle_roundtrip(self, obj):
- obj.save('__tmp__')
- unpickled = Series.load('__tmp__')
- os.remove('__tmp__')
+ obj.save('__tmp_pickle_roundtrip__')
+ unpickled = Series.load('__tmp_pickle_roundtrip__')
+ os.remove('__tmp_pickle_roundtrip__')
return unpickled
def test_getitem_get(self):
@@ -2128,28 +2134,29 @@ def test_rank(self):
assert_series_equal(iranks, exp)
def test_from_csv(self):
- self.ts.to_csv('_foo')
- ts = Series.from_csv('_foo')
+ path = '_foo_from_csv'
+ self.ts.to_csv(path)
+ ts = Series.from_csv(path)
assert_series_equal(self.ts, ts)
- self.series.to_csv('_foo')
- series = Series.from_csv('_foo')
+ self.series.to_csv(path)
+ series = Series.from_csv(path)
self.assert_(series.name is None)
self.assert_(series.index.name is None)
assert_series_equal(self.series, series)
- outfile = open('_foo', 'w')
+ outfile = open(path, 'w')
outfile.write('1998-01-01|1.0\n1999-01-01|2.0')
outfile.close()
- series = Series.from_csv('_foo',sep='|')
+ series = Series.from_csv(path, sep='|')
checkseries = Series({datetime(1998,1,1): 1.0, datetime(1999,1,1): 2.0})
assert_series_equal(checkseries, series)
- series = Series.from_csv('_foo',sep='|',parse_dates=False)
+ series = Series.from_csv(path, sep='|',parse_dates=False)
checkseries = Series({'1998-01-01': 1.0, '1999-01-01': 2.0})
assert_series_equal(checkseries, series)
- os.remove('_foo')
+ os.remove(path)
def test_to_csv(self):
self.ts.to_csv('_foo')
@@ -3207,6 +3214,8 @@ def test_numpy_unique(self):
class TestSeriesNonUnique(unittest.TestCase):
+ _multiprocess_can_split_ = True
+
def setUp(self):
pass
diff --git a/pandas/tests/test_stats.py b/pandas/tests/test_stats.py
index 9d6f12288f1dd..ba37355afec7a 100644
--- a/pandas/tests/test_stats.py
+++ b/pandas/tests/test_stats.py
@@ -12,7 +12,7 @@
assert_almost_equal)
class TestRank(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
s = Series([1, 3, 4, 2, nan, 2, 1, 5, nan, 3])
df = DataFrame({'A': s, 'B': s})
@@ -115,4 +115,3 @@ def test_rank_int(self):
if __name__ == '__main__':
nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
exit=False)
-
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index 9972d0b3b96b1..beebb22c18b86 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -22,6 +22,8 @@
class TestStringMethods(unittest.TestCase):
+ _multiprocess_can_split_ = True
+
def test_cat(self):
one = ['a', 'a', 'b', 'b', 'c', NA]
two = ['a', NA, 'b', 'd', 'foo', NA]
diff --git a/pandas/tests/test_tseries.py b/pandas/tests/test_tseries.py
index 9061402bb6050..057cd4306dd62 100644
--- a/pandas/tests/test_tseries.py
+++ b/pandas/tests/test_tseries.py
@@ -10,7 +10,7 @@
from datetime import datetime
class TestTseriesUtil(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def test_combineFunc(self):
pass
@@ -387,7 +387,7 @@ def test_series_bin_grouper():
assert_almost_equal(counts, exp_counts)
class TestBinGroupers(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def setUp(self):
self.obj = np.random.randn(10, 1)
self.labels = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2], dtype=np.int64)
@@ -511,7 +511,7 @@ def test_try_parse_dates():
class TestTypeInference(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def test_length_zero(self):
result = lib.infer_dtype(np.array([], dtype='i4'))
self.assertEqual(result, 'empty')
@@ -636,4 +636,3 @@ def test_int_index(self):
import nose
nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
exit=False)
-
diff --git a/pandas/tseries/tests/test_offsets.py b/pandas/tseries/tests/test_offsets.py
index 45de93f00df15..86f1fe9a7a7f4 100644
--- a/pandas/tseries/tests/test_offsets.py
+++ b/pandas/tseries/tests/test_offsets.py
@@ -21,6 +21,8 @@
import pandas.lib as lib
from pandas.lib import Timestamp
+_multiprocess_can_split_ = True
+
def test_monthrange():
import calendar
for y in range(2000,2013):
@@ -72,7 +74,7 @@ def test_to_m8():
#####
class TestDateOffset(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def setUp(self):
self.d = Timestamp(datetime(2008, 1, 2))
@@ -110,7 +112,7 @@ def test_eq(self):
self.assert_(not (offset1 == offset2))
class TestBusinessDay(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def setUp(self):
self.d = datetime(2008, 1, 1)
@@ -1474,4 +1476,3 @@ def test_freq_offsets():
import nose
nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
exit=False)
-
diff --git a/pandas/tseries/tests/test_resample.py b/pandas/tseries/tests/test_resample.py
index 1512bfc91549f..5e954179127bf 100644
--- a/pandas/tseries/tests/test_resample.py
+++ b/pandas/tseries/tests/test_resample.py
@@ -30,7 +30,7 @@ def _skip_if_no_pytz():
class TestResample(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def setUp(self):
dti = DatetimeIndex(start=datetime(2005,1,1),
end=datetime(2005,1,10), freq='Min')
@@ -587,7 +587,7 @@ def _simple_pts(start, end, freq='D'):
class TestResamplePeriodIndex(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def test_basic_downsample(self):
ts = _simple_pts('1/1/1990', '6/30/1995', freq='M')
result = ts.resample('a-dec')
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 84ff68fa20294..010b4ea5e5cfe 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -38,7 +38,7 @@
class TestTimeSeriesDuplicates(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def setUp(self):
dates = [datetime(2000, 1, 2), datetime(2000, 1, 2),
datetime(2000, 1, 2), datetime(2000, 1, 3),
@@ -160,7 +160,7 @@ def _skip_if_no_pytz():
class TestTimeSeries(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def test_dti_slicing(self):
dti = DatetimeIndex(start='1/1/2005', end='12/1/2005', freq='M')
dti2 = dti[[1,3,5]]
@@ -1368,7 +1368,7 @@ def _simple_ts(start, end, freq='D'):
class TestDatetimeIndex(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def test_append_join_nondatetimeindex(self):
rng = date_range('1/1/2000', periods=10)
idx = Index(['a', 'b', 'c', 'd'])
@@ -1614,7 +1614,7 @@ def test_union_with_DatetimeIndex(self):
i2.union(i1) # Fails with "AttributeError: can't set attribute"
class TestLegacySupport(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
@classmethod
def setUpClass(cls):
if py3compat.PY3:
diff --git a/pandas/tseries/tests/test_timezones.py b/pandas/tseries/tests/test_timezones.py
index 29fd76d399254..85f9d2b48f50e 100644
--- a/pandas/tseries/tests/test_timezones.py
+++ b/pandas/tseries/tests/test_timezones.py
@@ -62,7 +62,7 @@ def dst(self, dt):
fixed_off = FixedOffset(-420, '-07:00')
class TestTimeZoneSupport(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def setUp(self):
_skip_if_no_pytz()
@@ -540,7 +540,7 @@ def test_dateutil_tzoffset_support(self):
self.assertEquals(series.index.tz, tzinfo)
class TestTimeZones(unittest.TestCase):
-
+ _multiprocess_can_split_ = True
def setUp(self):
_skip_if_no_pytz()
diff --git a/test_fast.sh b/test_fast.sh
index 830443dcd7fbf..1dc164429f395 100755
--- a/test_fast.sh
+++ b/test_fast.sh
@@ -1 +1 @@
-nosetests -A "not slow" pandas $*
\ No newline at end of file
+nosetests -A "not slow" pandas $*
| run test_multi.sh
| https://api.github.com/repos/pandas-dev/pandas/pulls/2400 | 2012-11-30T23:17:19Z | 2012-12-01T18:00:48Z | null | 2012-12-01T18:00:48Z |
Cmov | diff --git a/RELEASE.rst b/RELEASE.rst
index 7c546c9fc6485..d31a10a8fb102 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -36,7 +36,8 @@ pandas 0.10.0
for both Series and DataFrame (#2002)
- Add ``duplicated`` and ``drop_duplicates`` functions to Series (#1923)
- Add docs for ``HDFStore table`` format
- - 'density' property in `SparseSeries` (#2384)
+ - 'density' property in ``SparseSeries`` (#2384)
+ - Centered moving window functions via ``center`` keyword (#1270)
**API Changes**
diff --git a/pandas/src/moments.pyx b/pandas/src/moments.pyx
index efe42108dc7f6..136b63baf8b7b 100644
--- a/pandas/src/moments.pyx
+++ b/pandas/src/moments.pyx
@@ -952,3 +952,70 @@ def roll_generic(ndarray[float64_t, cast=True] input, int win,
bufarr.data = <char*> oldbuf
return output
+
+def roll_window(ndarray[float64_t, ndim=1, cast=True] input,
+ ndarray[float64_t, ndim=1, cast=True] weights,
+ int minp, bint avg=True, bint avg_wgt=False):
+ """
+ Assume len(weights) << len(input)
+ """
+ cdef:
+ ndarray[double_t] output, tot_wgt
+ ndarray[int64_t] counts
+ Py_ssize_t in_i, win_i, win_n, win_k, in_n, in_k
+ float64_t val_in, val_win, c, w
+
+ in_n = len(input)
+ win_n = len(weights)
+ output = np.zeros(in_n, dtype=float)
+ counts = np.zeros(in_n, dtype=int)
+ if avg:
+ tot_wgt = np.zeros(in_n, dtype=float)
+
+ minp = _check_minp(len(weights), minp, in_n)
+
+ if avg_wgt:
+ for win_i from 0 <= win_i < win_n:
+ val_win = weights[win_i]
+ if val_win != val_win:
+ continue
+
+ for in_i from 0 <= in_i < in_n - (win_n - win_i) + 1:
+ val_in = input[in_i]
+ if val_in == val_in:
+ output[in_i + (win_n - win_i) - 1] += val_in * val_win
+ counts[in_i + (win_n - win_i) - 1] += 1
+ tot_wgt[in_i + (win_n - win_i) - 1] += val_win
+
+ for in_i from 0 <= in_i < in_n:
+ c = counts[in_i]
+ if c < minp:
+ output[in_i] = NaN
+ else:
+ w = tot_wgt[in_i]
+ if w == 0:
+ output[in_i] = NaN
+ else:
+ output[in_i] /= tot_wgt[in_i]
+
+ else:
+ for win_i from 0 <= win_i < win_n:
+ val_win = weights[win_i]
+ if val_win != val_win:
+ continue
+
+ for in_i from 0 <= in_i < in_n - (win_n - win_i) + 1:
+ val_in = input[in_i]
+
+ if val_in == val_in:
+ output[in_i + (win_n - win_i) - 1] += val_in * val_win
+ counts[in_i + (win_n - win_i) - 1] += 1
+
+ for in_i from 0 <= in_i < in_n:
+ c = counts[in_i]
+ if c < minp:
+ output[in_i] = NaN
+ elif avg:
+ output[in_i] /= c
+
+ return output
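The Cython `roll_window` added above computes a weighted trailing-window aggregate that skips NaN observations and NaN weights. A rough NumPy sketch of its `avg=True, avg_wgt=True` path (an illustration only, not the actual implementation — for simplicity it leaves the leading partial windows as NaN, whereas the Cython code fills them subject to `min_periods`):

```python
import numpy as np

def weighted_roll_mean(values, weights, minp=1):
    """Weighted moving average over a trailing window: at each
    position, dot the non-NaN window values with their weights and
    normalize by the weight actually applied."""
    n, k = len(values), len(weights)
    out = np.full(n, np.nan)
    for i in range(k - 1, n):
        win = values[i - k + 1:i + 1]
        mask = ~np.isnan(win)
        if mask.sum() < minp:
            continue  # not enough observations in this window
        w = weights[mask]
        if w.sum() == 0:
            continue  # avoid dividing by a zero total weight
        out[i] = np.dot(win[mask], w) / w.sum()
    return out
```

With uniform weights this reduces to an ordinary NaN-skipping rolling mean.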
diff --git a/pandas/stats/moments.py b/pandas/stats/moments.py
index b805a9dca128c..91ca3be79f8da 100644
--- a/pandas/stats/moments.py
+++ b/pandas/stats/moments.py
@@ -9,8 +9,9 @@
from numpy import NaN
import numpy as np
-from pandas.core.api import DataFrame, Series, notnull
+from pandas.core.api import DataFrame, Series, notnull, Panel
import pandas.lib as lib
+import pandas.core.common as com
from pandas.util.decorators import Substitution, Appender
@@ -18,7 +19,7 @@
'rolling_sum', 'rolling_mean', 'rolling_std', 'rolling_cov',
'rolling_corr', 'rolling_var', 'rolling_skew', 'rolling_kurt',
'rolling_quantile', 'rolling_median', 'rolling_apply',
- 'rolling_corr_pairwise',
+ 'rolling_corr_pairwise', 'rolling_window',
'ewma', 'ewmvar', 'ewmstd', 'ewmvol', 'ewmcorr', 'ewmcov',
'expanding_count', 'expanding_max', 'expanding_min',
'expanding_sum', 'expanding_mean', 'expanding_std',
@@ -124,7 +125,7 @@
"""
-def rolling_count(arg, window, freq=None, time_rule=None):
+def rolling_count(arg, window, freq=None, center=False, time_rule=None):
"""
Rolling count of number of non-NaN observations inside provided window.
@@ -134,6 +135,8 @@ def rolling_count(arg, window, freq=None, time_rule=None):
window : Number of observations used for calculating statistic
freq : None or string alias / date offset object, default=None
Frequency to conform to before computing statistic
+ center : boolean, default False
+ Whether the label should correspond with center of window
Returns
-------
@@ -146,7 +149,7 @@ def rolling_count(arg, window, freq=None, time_rule=None):
converted = np.isfinite(values).astype(float)
result = rolling_sum(converted, window, min_periods=1,
- time_rule=time_rule)
+ center=center) # already converted
# putmask here?
result[np.isnan(result)] = 0
@@ -156,22 +159,37 @@ def rolling_count(arg, window, freq=None, time_rule=None):
@Substitution("Unbiased moving covariance", _binary_arg_flex, _flex_retval)
@Appender(_doc_template)
-def rolling_cov(arg1, arg2, window, min_periods=None, time_rule=None):
+def rolling_cov(arg1, arg2, window, min_periods=None, freq=None,
+ center=False, time_rule=None):
+ arg1 = _conv_timerule(arg1, freq, time_rule)
+ arg2 = _conv_timerule(arg2, freq, time_rule)
+ window = min(window, len(arg1), len(arg2))
def _get_cov(X, Y):
- mean = lambda x: rolling_mean(x, window, min_periods, time_rule)
- count = rolling_count(X + Y, window, time_rule)
+ mean = lambda x: rolling_mean(x, window, min_periods)
+ count = rolling_count(X + Y, window)
bias_adj = count / (count - 1)
return (mean(X * Y) - mean(X) * mean(Y)) * bias_adj
- return _flex_binary_moment(arg1, arg2, _get_cov)
-
+ rs = _flex_binary_moment(arg1, arg2, _get_cov)
+ if center:
+ if isinstance(rs, (Series, DataFrame, Panel)):
+ rs = rs.shift(-int((window + 1) / 2.))
+ else:
+ offset = int((window + 1) / 2.)
+ rs[:-offset] = rs[offset:]
+ rs[-offset:] = np.nan
+ return rs
@Substitution("Moving sample correlation", _binary_arg_flex, _flex_retval)
@Appender(_doc_template)
-def rolling_corr(arg1, arg2, window, min_periods=None, time_rule=None):
+def rolling_corr(arg1, arg2, window, min_periods=None, freq=None,
+ center=False, time_rule=None):
def _get_corr(a, b):
- num = rolling_cov(a, b, window, min_periods, time_rule)
- den = (rolling_std(a, window, min_periods, time_rule) *
- rolling_std(b, window, min_periods, time_rule))
+ num = rolling_cov(a, b, window, min_periods, freq=freq,
+ center=center, time_rule=time_rule)
+ den = (rolling_std(a, window, min_periods, freq=freq,
+ center=center, time_rule=time_rule) *
+ rolling_std(b, window, min_periods, freq=freq,
+ center=center, time_rule=time_rule))
return num / den
return _flex_binary_moment(arg1, arg2, _get_corr)
@@ -234,7 +252,7 @@ def rolling_corr_pairwise(df, window, min_periods=None):
def _rolling_moment(arg, window, func, minp, axis=0, freq=None,
- time_rule=None, **kwargs):
+ center=False, time_rule=None, **kwargs):
"""
Rolling statistical measure using supplied function. Designed to be
used with passed-in Cython array-based functions.
@@ -249,6 +267,8 @@ def _rolling_moment(arg, window, func, minp, axis=0, freq=None,
axis : int, default 0
freq : None or string alias / date offset object, default=None
Frequency to conform to before computing statistic
+ center : boolean, default False
+ Whether the label should correspond with center of window
Returns
-------
@@ -260,8 +280,28 @@ def _rolling_moment(arg, window, func, minp, axis=0, freq=None,
# actually calculate the moment. Faster way to do this?
result = np.apply_along_axis(calc, axis, values)
- return return_hook(result)
+ rs = return_hook(result)
+ if center:
+ rs = _center_window(rs, window, axis)
+ return rs
+def _center_window(rs, window, axis):
+ offset = int((window - 1) / 2.)
+ if isinstance(rs, (Series, DataFrame, Panel)):
+ rs = rs.shift(-offset, axis=axis)
+ else:
+ rs_indexer = [slice(None)] * rs.ndim
+ rs_indexer[axis] = slice(None, -offset)
+
+ lead_indexer = [slice(None)] * rs.ndim
+ lead_indexer[axis] = slice(offset, None)
+
+ na_indexer = [slice(None)] * rs.ndim
+ na_indexer[axis] = slice(-offset, None)
+
+ rs[rs_indexer] = rs[lead_indexer]
+ rs[na_indexer] = np.nan
+ return rs
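The ndarray branch of `_center_window` added above can be sketched in plain NumPy (hypothetical helper name; the pandas-object branch simply calls `shift`). The result of a trailing window is moved left by `(window - 1) // 2` positions along `axis` so each label lines up with the center of its window, and the vacated tail is NaN-filled. A guard for `offset == 0` is added here, since `slice(None, -0)` would otherwise select nothing:

```python
import numpy as np

def center_window(rs, window, axis=0):
    """Shift trailing-window results so labels sit at window centers."""
    offset = int((window - 1) / 2.0)
    if offset == 0:
        return rs  # window of 1 or 2: nothing to recenter
    rs = rs.astype(float).copy()
    lead = [slice(None)] * rs.ndim
    lead[axis] = slice(offset, None)   # values to pull forward
    keep = [slice(None)] * rs.ndim
    keep[axis] = slice(None, -offset)  # destination positions
    na = [slice(None)] * rs.ndim
    na[axis] = slice(-offset, None)    # vacated tail -> NaN
    rs[tuple(keep)] = rs[tuple(lead)]
    rs[tuple(na)] = np.nan
    return rs
```

For a length-5 result and `window=3`, each value moves one slot earlier and the final slot becomes NaN.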
def _process_data_structure(arg, kill_inf=True):
if isinstance(arg, DataFrame):
@@ -450,12 +490,14 @@ def _rolling_func(func, desc, check_minp=_use_window):
@Substitution(desc, _unary_arg, _type_of_input)
@Appender(_doc_template)
@wraps(func)
- def f(arg, window, min_periods=None, freq=None, time_rule=None, **kwargs):
+ def f(arg, window, min_periods=None, freq=None, center=False,
+ time_rule=None, **kwargs):
def call_cython(arg, window, minp, **kwds):
minp = check_minp(minp, window)
return func(arg, window, minp, **kwds)
return _rolling_moment(arg, window, call_cython, min_periods,
- freq=freq, time_rule=time_rule, **kwargs)
+ freq=freq, center=center,
+ time_rule=time_rule, **kwargs)
return f
@@ -477,7 +519,7 @@ def call_cython(arg, window, minp, **kwds):
def rolling_quantile(arg, window, quantile, min_periods=None, freq=None,
- time_rule=None):
+ center=False, time_rule=None):
"""Moving quantile
Parameters
@@ -489,6 +531,8 @@ def rolling_quantile(arg, window, quantile, min_periods=None, freq=None,
Minimum number of observations in window required to have a value
freq : None or string alias / date offset object, default=None
Frequency to conform to before computing statistic
+ center : boolean, default False
+ Whether the label should correspond with center of window
Returns
-------
@@ -499,11 +543,11 @@ def call_cython(arg, window, minp):
minp = _use_window(minp, window)
return lib.roll_quantile(arg, window, minp, quantile)
return _rolling_moment(arg, window, call_cython, min_periods,
- freq=freq, time_rule=time_rule)
+ freq=freq, center=center, time_rule=time_rule)
def rolling_apply(arg, window, func, min_periods=None, freq=None,
- time_rule=None):
+ center=False, time_rule=None):
"""Generic moving function application
Parameters
@@ -516,6 +560,8 @@ def rolling_apply(arg, window, func, min_periods=None, freq=None,
Minimum number of observations in window required to have a value
freq : None or string alias / date offset object, default=None
Frequency to conform to before computing statistic
+ center : boolean, default False
+ Whether the label should correspond with center of window
Returns
-------
@@ -525,21 +571,120 @@ def call_cython(arg, window, minp):
minp = _use_window(minp, window)
return lib.roll_generic(arg, window, minp, func)
return _rolling_moment(arg, window, call_cython, min_periods,
- freq=freq, time_rule=time_rule)
+ freq=freq, center=center, time_rule=time_rule)
+
+def rolling_window(arg, window=None, win_type=None, min_periods=None,
+ freq=None, center=False, mean=True, time_rule=None,
+ axis=0, **kwargs):
+ """
+    Applies a moving window of type ``win_type`` and size ``window``
+    on the data.
+
+ Parameters
+ ----------
+ arg : Series, DataFrame
+ window : int or ndarray
+ Filtering window specification. If the window is an integer, then it is
+ treated as the window length and win_type is required
+ win_type : str, default None
+ Window type (see Notes)
+ min_periods : int
+ Minimum number of observations in window required to have a value.
+ freq : None or string alias / date offset object, default=None
+ Frequency to conform to before computing statistic
+ center : boolean, default False
+ Whether the label should correspond with center of window
+ mean : boolean, default True
+ If True computes weighted mean, else weighted sum
+
+ Returns
+ -------
+ y : type of input argument
+
+ Notes
+ -----
+ The recognized window types are:
+
+ * ``boxcar``
+ * ``triang``
+ * ``blackman``
+ * ``hamming``
+ * ``bartlett``
+ * ``parzen``
+ * ``bohman``
+ * ``blackmanharris``
+ * ``nuttall``
+ * ``barthann``
+ * ``kaiser`` (needs beta)
+ * ``gaussian`` (needs std)
+ * ``general_gaussian`` (needs power, width)
+ * ``slepian`` (needs width).
+ """
+ if isinstance(window, (list, tuple, np.ndarray)):
+ if win_type is not None:
+ raise ValueError(('Do not specify window type if using custom '
+ 'weights'))
+ window = com._asarray_tuplesafe(window).astype(float)
+ elif com.is_integer(window): #window size
+ if win_type is None:
+ raise ValueError('Must specify window type')
+ try:
+ import scipy.signal as sig
+ except ImportError:
+ raise ImportError('Please install scipy to generate window weight')
+ win_type = _validate_win_type(win_type, kwargs) # may pop from kwargs
+ window = sig.get_window(win_type, window).astype(float)
+ else:
+ raise ValueError('Invalid window %s' % str(window))
+
+ minp = _use_window(min_periods, len(window))
+
+ arg = _conv_timerule(arg, freq, time_rule)
+ return_hook, values = _process_data_structure(arg)
+
+ f = lambda x: lib.roll_window(x, window, minp, avg=mean)
+ result = np.apply_along_axis(f, axis, values)
+
+ rs = return_hook(result)
+ if center:
+ rs = _center_window(rs, len(window), axis)
+ return rs
+
+def _validate_win_type(win_type, kwargs):
+ # may pop from kwargs
+ arg_map = {'kaiser' : ['beta'],
+ 'gaussian' : ['std'],
+ 'general_gaussian' : ['power', 'width'],
+ 'slepian' : ['width']}
+ if win_type in arg_map:
+ return tuple([win_type] +
+ _pop_args(win_type, arg_map[win_type], kwargs))
+ return win_type
+
+def _pop_args(win_type, arg_names, kwargs):
+ msg = '%s window requires %%s' % win_type
+ all_args = []
+ for n in arg_names:
+ if n not in kwargs:
+ raise ValueError(msg % n)
+ all_args.append(kwargs.pop(n))
+ return all_args
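A usage sketch for the two helpers above (a self-contained copy of the same logic, since the diff's functions are module-private): special window types pop their extra parameters out of `kwargs` and come back as a tuple that scipy's `get_window` understands, while plain window names pass through untouched.

```python
def validate_win_type(win_type, kwargs):
    # mirrors _validate_win_type/_pop_args: pop required args from kwargs
    arg_map = {'kaiser': ['beta'], 'gaussian': ['std'],
               'general_gaussian': ['power', 'width'], 'slepian': ['width']}
    if win_type not in arg_map:
        return win_type
    args = []
    for name in arg_map[win_type]:
        if name not in kwargs:
            raise ValueError('%s window requires %s' % (win_type, name))
        args.append(kwargs.pop(name))
    return tuple([win_type] + args)

kwds = {'std': 2.0, 'other': 1}
print(validate_win_type('gaussian', kwds))  # -> ('gaussian', 2.0)
print(kwds)                                 # -> {'other': 1}, 'std' was popped
```

Popping (rather than reading) the arguments matters: whatever is left in `kwargs` flows on to `_rolling_moment`, so a consumed window parameter must not be forwarded twice.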
def _expanding_func(func, desc, check_minp=_use_window):
@Substitution(desc, _unary_arg, _type_of_input)
@Appender(_expanding_doc)
@wraps(func)
- def f(arg, min_periods=1, freq=None, time_rule=None, **kwargs):
+ def f(arg, min_periods=1, freq=None, center=False, time_rule=None,
+ **kwargs):
window = len(arg)
def call_cython(arg, window, minp, **kwds):
minp = check_minp(minp, window)
return func(arg, window, minp, **kwds)
return _rolling_moment(arg, window, call_cython, min_periods,
- freq=freq, time_rule=time_rule, **kwargs)
+ freq=freq, center=center,
+ time_rule=time_rule, **kwargs)
return f
@@ -560,7 +705,7 @@ def call_cython(arg, window, minp, **kwds):
check_minp=_require_min_periods(4))
-def expanding_count(arg, freq=None, time_rule=None):
+def expanding_count(arg, freq=None, center=False, time_rule=None):
"""
Expanding count of number of non-NaN observations.
@@ -569,16 +714,19 @@ def expanding_count(arg, freq=None, time_rule=None):
arg : DataFrame or numpy ndarray-like
freq : None or string alias / date offset object, default=None
Frequency to conform to before computing statistic
+ center : boolean, default False
+ Whether the label should correspond with center of window
Returns
-------
expanding_count : type of caller
"""
- return rolling_count(arg, len(arg), freq=freq, time_rule=time_rule)
+ return rolling_count(arg, len(arg), freq=freq, center=center,
+ time_rule=time_rule)
def expanding_quantile(arg, quantile, min_periods=1, freq=None,
- time_rule=None):
+ center=False, time_rule=None):
"""Expanding quantile
Parameters
@@ -589,29 +737,35 @@ def expanding_quantile(arg, quantile, min_periods=1, freq=None,
Minimum number of observations in window required to have a value
freq : None or string alias / date offset object, default=None
Frequency to conform to before computing statistic
+ center : boolean, default False
+ Whether the label should correspond with center of window
Returns
-------
y : type of input argument
"""
return rolling_quantile(arg, len(arg), quantile, min_periods=min_periods,
- freq=freq, time_rule=time_rule)
+ freq=freq, center=center, time_rule=time_rule)
@Substitution("Unbiased expanding covariance", _binary_arg_flex, _flex_retval)
@Appender(_expanding_doc)
-def expanding_cov(arg1, arg2, min_periods=1, time_rule=None):
+def expanding_cov(arg1, arg2, min_periods=1, freq=None, center=False,
+ time_rule=None):
window = max(len(arg1), len(arg2))
return rolling_cov(arg1, arg2, window,
- min_periods=min_periods, time_rule=time_rule)
+ min_periods=min_periods, freq=freq,
+ center=center, time_rule=time_rule)
@Substitution("Expanding sample correlation", _binary_arg_flex, _flex_retval)
@Appender(_expanding_doc)
-def expanding_corr(arg1, arg2, min_periods=1, time_rule=None):
+def expanding_corr(arg1, arg2, min_periods=1, freq=None, center=False,
+ time_rule=None):
window = max(len(arg1), len(arg2))
return rolling_corr(arg1, arg2, window,
- min_periods=min_periods, time_rule=time_rule)
+ min_periods=min_periods,
+ freq=freq, center=center, time_rule=time_rule)
def expanding_corr_pairwise(df, min_periods=1):
@@ -634,7 +788,8 @@ def expanding_corr_pairwise(df, min_periods=1):
return rolling_corr_pairwise(df, window, min_periods=min_periods)
-def expanding_apply(arg, func, min_periods=1, freq=None, time_rule=None):
+def expanding_apply(arg, func, min_periods=1, freq=None, center=False,
+ time_rule=None):
"""Generic expanding function application
Parameters
@@ -646,6 +801,8 @@ def expanding_apply(arg, func, min_periods=1, freq=None, time_rule=None):
Minimum number of observations in window required to have a value
freq : None or string alias / date offset object, default=None
Frequency to conform to before computing statistic
+ center : boolean, default False
+ Whether the label should correspond with center of window
Returns
-------
@@ -653,4 +810,4 @@ def expanding_apply(arg, func, min_periods=1, freq=None, time_rule=None):
"""
window = len(arg)
return rolling_apply(arg, window, func, min_periods=min_periods, freq=freq,
- time_rule=time_rule)
+ center=center, time_rule=time_rule)
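The `expanding_*` wrappers in this diff all reduce to the matching `rolling_*` call with `window == len(arg)`. A minimal pure-Python sketch of that relationship (the real functions defer to cython rollers and handle NaN/`min_periods`; this toy version effectively has `min_periods=1`):

```python
def rolling_mean(vals, window):
    # trailing-window mean over a plain list; early windows are simply
    # shorter, so every position still produces a value
    out = []
    for i in range(len(vals)):
        chunk = vals[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def expanding_mean(vals):
    # an "expanding" window is just a rolling window as wide as the data
    return rolling_mean(vals, window=len(vals))

print(expanding_mean([1.0, 2.0, 3.0, 4.0]))  # -> [1.0, 1.5, 2.0, 2.5]
```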
diff --git a/pandas/stats/tests/test_moments.py b/pandas/stats/tests/test_moments.py
index b421d1083d6cc..788219be8bab6 100644
--- a/pandas/stats/tests/test_moments.py
+++ b/pandas/stats/tests/test_moments.py
@@ -8,7 +8,9 @@
import numpy as np
from pandas import Series, DataFrame, bdate_range, isnull, notnull
-from pandas.util.testing import assert_almost_equal, assert_series_equal
+from pandas.util.testing import (
+ assert_almost_equal, assert_series_equal, assert_frame_equal
+ )
from pandas.util.py3compat import PY3
import pandas.core.datetools as datetools
import pandas.stats.moments as mom
@@ -40,11 +42,130 @@ def test_rolling_count(self):
counter = lambda x: np.isfinite(x).astype(float).sum()
self._check_moment_func(mom.rolling_count, counter,
has_min_periods=False,
- preserve_nan=False)
+ preserve_nan=False,
+ fill_value=0)
def test_rolling_mean(self):
self._check_moment_func(mom.rolling_mean, np.mean)
+ def test_cmov_mean(self):
+ try:
+ from scikits.timeseries.lib import cmov_mean
+ except ImportError:
+ raise nose.SkipTest
+
+ vals = np.random.randn(10)
+ xp = cmov_mean(vals, 5)
+
+ rs = mom.rolling_mean(vals, 5, center=True)
+ assert_almost_equal(xp.compressed(), rs[2:-2])
+ assert_almost_equal(xp.mask, np.isnan(rs))
+
+ xp = Series(rs)
+ rs = mom.rolling_mean(Series(vals), 5, center=True)
+ assert_series_equal(xp, rs)
+
+ def test_cmov_window(self):
+ try:
+ from scikits.timeseries.lib import cmov_window
+ except ImportError:
+ raise nose.SkipTest
+
+ vals = np.random.randn(10)
+ xp = cmov_window(vals, 5, 'boxcar')
+
+ rs = mom.rolling_window(vals, 5, 'boxcar', center=True)
+ assert_almost_equal(xp.compressed(), rs[2:-2])
+ assert_almost_equal(xp.mask, np.isnan(rs))
+
+ xp = Series(rs)
+ rs = mom.rolling_window(Series(vals), 5, 'boxcar', center=True)
+ assert_series_equal(xp, rs)
+
+ def test_cmov_window_corner(self):
+ try:
+ from scikits.timeseries.lib import cmov_window
+ except ImportError:
+ raise nose.SkipTest
+
+ # all nan
+ vals = np.empty(10, dtype=float)
+ vals.fill(np.nan)
+ rs = mom.rolling_window(vals, 5, 'boxcar', center=True)
+ self.assert_(np.isnan(rs).all())
+
+ # empty
+ vals = np.array([])
+ rs = mom.rolling_window(vals, 5, 'boxcar', center=True)
+ self.assert_(len(rs) == 0)
+
+ # shorter than window
+ vals = np.random.randn(5)
+ rs = mom.rolling_window(vals, 10, 'boxcar')
+ self.assert_(np.isnan(rs).all())
+ self.assert_(len(rs) == 5)
+
+ def test_cmov_window_frame(self):
+ try:
+ from scikits.timeseries.lib import cmov_window
+ except ImportError:
+ raise nose.SkipTest
+
+ # DataFrame
+ vals = np.random.randn(10, 2)
+ xp = cmov_window(vals, 5, 'boxcar')
+ rs = mom.rolling_window(DataFrame(vals), 5, 'boxcar', center=True)
+ assert_frame_equal(DataFrame(xp), rs)
+
+ def test_cmov_window_na_min_periods(self):
+ try:
+ from scikits.timeseries.lib import cmov_window
+ except ImportError:
+ raise nose.SkipTest
+
+ # min_periods
+ vals = Series(np.random.randn(10))
+ vals[4] = np.nan
+ vals[8] = np.nan
+
+ xp = mom.rolling_mean(vals, 5, min_periods=4, center=True)
+ rs = mom.rolling_window(vals, 5, 'boxcar', min_periods=4, center=True)
+
+ assert_series_equal(xp, rs)
+
+ def test_cmov_window_regular(self):
+ try:
+ from scikits.timeseries.lib import cmov_window
+ except ImportError:
+ raise nose.SkipTest
+
+ win_types = ['triang', 'blackman', 'hamming', 'bartlett', 'bohman',
+ 'blackmanharris', 'nuttall', 'barthann']
+ for wt in win_types:
+ vals = np.random.randn(10)
+ xp = cmov_window(vals, 5, wt)
+
+ rs = mom.rolling_window(Series(vals), 5, wt, center=True)
+ assert_series_equal(Series(xp), rs)
+
+ def test_cmov_window_special(self):
+ try:
+ from scikits.timeseries.lib import cmov_window
+ except ImportError:
+ raise nose.SkipTest
+
+ win_types = ['kaiser', 'gaussian', 'general_gaussian', 'slepian']
+ kwds = [{'beta' : 1.}, {'std' : 1.}, {'power' : 2., 'width' : 2.},
+ {'width' : 0.5}]
+
+ for wt, k in zip(win_types, kwds):
+ vals = np.random.randn(10)
+ xp = cmov_window(vals, 5, (wt,) + tuple(k.values()))
+
+ rs = mom.rolling_window(Series(vals), 5, wt, center=True,
+ **k)
+ assert_series_equal(Series(xp), rs)
+
def test_rolling_median(self):
self._check_moment_func(mom.rolling_median, np.median)
@@ -76,10 +197,11 @@ def scoreatpercentile(a, per):
return values[int(idx)]
for q in qs:
- def f(x, window, min_periods=None, freq=None):
+ def f(x, window, min_periods=None, freq=None, center=False):
return mom.rolling_quantile(x, window, q,
- min_periods=min_periods,
- freq=freq)
+ min_periods=min_periods,
+ freq=freq,
+ center=center)
def alt(x):
return scoreatpercentile(x, q)
@@ -89,11 +211,12 @@ def test_rolling_apply(self):
ser = Series([])
assert_series_equal(ser, mom.rolling_apply(ser, 10, lambda x:x.mean()))
- def roll_mean(x, window, min_periods=None, freq=None):
+ def roll_mean(x, window, min_periods=None, freq=None, center=False):
return mom.rolling_apply(x, window,
- lambda x: x[np.isfinite(x)].mean(),
- min_periods=min_periods,
- freq=freq)
+ lambda x: x[np.isfinite(x)].mean(),
+ min_periods=min_periods,
+ freq=freq,
+ center=center)
self._check_moment_func(roll_mean, np.mean)
def test_rolling_apply_out_of_bounds(self):
@@ -185,20 +308,28 @@ def test_fperr_robustness(self):
def _check_moment_func(self, func, static_comp, window=50,
has_min_periods=True,
+ has_center=True,
has_time_rule=True,
- preserve_nan=True):
+ preserve_nan=True,
+ fill_value=None):
self._check_ndarray(func, static_comp, window=window,
has_min_periods=has_min_periods,
- preserve_nan=preserve_nan)
+ preserve_nan=preserve_nan,
+ has_center=has_center,
+ fill_value=fill_value)
self._check_structures(func, static_comp,
has_min_periods=has_min_periods,
- has_time_rule=has_time_rule)
+ has_time_rule=has_time_rule,
+ fill_value=fill_value,
+ has_center=has_center)
def _check_ndarray(self, func, static_comp, window=50,
has_min_periods=True,
- preserve_nan=True):
+ preserve_nan=True,
+ has_center=True,
+ fill_value=None):
result = func(self.arr, window)
assert_almost_equal(result[-1],
@@ -237,8 +368,30 @@ def _check_ndarray(self, func, static_comp, window=50,
result = func(arr, 50)
assert_almost_equal(result[-1], static_comp(arr[10:-10]))
+
+ if has_center:
+ if has_min_periods:
+ result = func(arr, 20, min_periods=15, center=True)
+ expected = func(arr, 20, min_periods=15)
+ else:
+ result = func(arr, 20, center=True)
+ expected = func(arr, 20)
+
+ assert_almost_equal(result[1], expected[10])
+ if fill_value is None:
+ self.assert_(np.isnan(result[-9:]).all())
+ else:
+ self.assert_((result[-9:] == 0).all())
+ if has_min_periods:
+ self.assert_(np.isnan(expected[23]))
+ self.assert_(np.isnan(result[14]))
+ self.assert_(np.isnan(expected[-5]))
+ self.assert_(np.isnan(result[-14]))
+
def _check_structures(self, func, static_comp,
- has_min_periods=True, has_time_rule=True):
+ has_min_periods=True, has_time_rule=True,
+ has_center=True,
+ fill_value=None):
series_result = func(self.series, 50)
self.assert_(isinstance(series_result, Series))
@@ -271,6 +424,31 @@ def _check_structures(self, func, static_comp,
assert_almost_equal(frame_result.xs(last_date),
trunc_frame.apply(static_comp))
+ if has_center:
+ if has_min_periods:
+ minp = 10
+ series_xp = func(self.series, 25, min_periods=minp).shift(-12)
+ frame_xp = func(self.frame, 25, min_periods=minp).shift(-12)
+
+ series_rs = func(self.series, 25, min_periods=minp,
+ center=True)
+ frame_rs = func(self.frame, 25, min_periods=minp,
+ center=True)
+
+ else:
+ series_xp = func(self.series, 25).shift(-12)
+ frame_xp = func(self.frame, 25).shift(-12)
+
+ series_rs = func(self.series, 25, center=True)
+ frame_rs = func(self.frame, 25, center=True)
+
+ if fill_value is not None:
+ series_xp = series_xp.fillna(fill_value)
+ frame_xp = frame_xp.fillna(fill_value)
+ assert_series_equal(series_xp, series_rs)
+ assert_frame_equal(frame_xp, frame_rs)
+
+
def test_legacy_time_rule_arg(self):
from StringIO import StringIO
# suppress deprecation warnings
| Use the `center` keyword and then `shift` if True.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2398 | 2012-11-30T20:13:05Z | 2012-12-10T15:24:12Z | 2012-12-10T15:24:12Z | 2014-07-02T10:35:21Z |
ENH: line_terminator parameter for DataFrame.to_csv | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 3f555c800d1eb..9d7c93eb53ef9 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1305,7 +1305,8 @@ def _helper_csvexcel(self, writer, na_rep=None, cols=None,
def to_csv(self, path_or_buf, sep=",", na_rep='', float_format=None,
cols=None, header=True, index=True, index_label=None,
- mode='w', nanRep=None, encoding=None, quoting=None):
+ mode='w', nanRep=None, encoding=None, quoting=None,
+ line_terminator='\n'):
"""
Write DataFrame to a comma-separated values (csv) file
@@ -1336,6 +1337,9 @@ def to_csv(self, path_or_buf, sep=",", na_rep='', float_format=None,
encoding : string, optional
a string representing the encoding to use if the contents are
non-ascii, for python versions prior to 3
+ line_terminator: string, default '\n'
+ The newline character or character sequence to use in the
+ output file
"""
if nanRep is not None: # pragma: no cover
import warnings
@@ -1355,12 +1359,12 @@ def to_csv(self, path_or_buf, sep=",", na_rep='', float_format=None,
try:
if encoding is not None:
- csvout = com.UnicodeWriter(f, lineterminator='\n',
+ csvout = com.UnicodeWriter(f, lineterminator=line_terminator,
delimiter=sep, encoding=encoding,
quoting=quoting)
else:
- csvout = csv.writer(f, lineterminator='\n', delimiter=sep,
- quoting=quoting)
+ csvout = csv.writer(f, lineterminator=line_terminator,
+ delimiter=sep, quoting=quoting)
self._helper_csvexcel(csvout, na_rep=na_rep,
float_format=float_format, cols=cols,
header=header, index=index,
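The `line_terminator` plumbing above simply forwards to `csv.writer`'s `lineterminator` argument; a stdlib-only sketch of the resulting behaviour (written for Python 3 with `io.StringIO`, whereas the PR-era code targets Python 2):

```python
import csv
import io

buf = io.StringIO()
# same knob to_csv now exposes: every row ends with the chosen terminator
writer = csv.writer(buf, lineterminator='\r\n', delimiter=',')
writer.writerow(['', 'A', 'B'])
writer.writerow(['one', 1, 4])
print(repr(buf.getvalue()))  # -> "',A,B\r\none,1,4\r\n'"
```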
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 9ac9a5b670b2e..4cc60d36deb7f 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -3839,6 +3839,29 @@ def test_to_csv_index_no_leading_comma(self):
'three,3,6\n')
self.assertEqual(buf.getvalue(), expected)
+ def test_to_csv_line_terminators(self):
+ df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]},
+ index=['one', 'two', 'three'])
+
+ buf = StringIO()
+ df.to_csv(buf, line_terminator='\r\n')
+ expected = (',A,B\r\n'
+ 'one,1,4\r\n'
+ 'two,2,5\r\n'
+ 'three,3,6\r\n')
+ self.assertEqual(buf.getvalue(), expected)
+
+ buf = StringIO()
+ df.to_csv(buf) # The default line terminator remains \n
+ expected = (',A,B\n'
+ 'one,1,4\n'
+ 'two,2,5\n'
+ 'three,3,6\n')
+ self.assertEqual(buf.getvalue(), expected)
+
+
+
+
def test_to_excel_from_excel(self):
try:
import xlwt
| In response to #2326
(Changes are now PEP8-compliant!)
| https://api.github.com/repos/pandas-dev/pandas/pulls/2394 | 2012-11-29T20:46:43Z | 2012-11-29T21:41:21Z | null | 2014-08-08T22:55:30Z |
CLN: add deprecation warning to set_printoptions, reset_printoptions | diff --git a/README.rst b/README.rst
index 78e849275aaf2..8d000a9410e22 100644
--- a/README.rst
+++ b/README.rst
@@ -77,6 +77,8 @@ Optional dependencies
* Needed for parts of :mod:`pandas.stats`
* `pytz <http://pytz.sourceforge.net/>`__
* Needed for time zone support with ``DateRange``
+ * `openpyxl <http://packages.python.org/openpyxl/>`__, `xlrd/xlwt <http://www.python-excel.org/>`__
+ * Needed for Excel I/O
Installation from sources
=========================
diff --git a/doc/source/v0.10.0.txt b/doc/source/v0.10.0.txt
index 98eb4746c7d79..a93ff92321a9e 100644
--- a/doc/source/v0.10.0.txt
+++ b/doc/source/v0.10.0.txt
@@ -122,13 +122,17 @@ API changes
- ``reset_option`` - reset one or more options to their default value. Partial names are accepted.
- ``describe_option`` - print a description of one or more options. When called with no arguments. print all registered options.
- Note: ``set_printoptions`` is now deprecated (but functioning), the print options now live under "print_config.XYZ". For example:
+ Note: ``set_printoptions``/ ``reset_printoptions`` are now deprecated (but functioning), the print options now live under "print.XYZ". For example:
.. ipython:: python
import pandas as pd
- pd.get_option("print_config.max_rows")
+ pd.get_option("print.max_rows")
+
+
+ - to_string() methods now always return unicode strings (GH2224_).
+
See the `full release notes
<https://github.com/pydata/pandas/blob/master/RELEASE.rst>`__ or issue tracker
@@ -138,3 +142,4 @@ on GitHub for a complete list.
.. _GH1996: https://github.com/pydata/pandas/issues/1996
.. _GH2316: https://github.com/pydata/pandas/issues/2316
.. _GH2097: https://github.com/pydata/pandas/issues/2097
+.. _GH2224: https://github.com/pydata/pandas/issues/2224
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 2654f74888a21..b93cf05c7e768 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -1128,7 +1128,7 @@ def in_qtconsole():
# 2) If you need to send something to the console, use console_encode().
#
# console_encode() should (hopefully) choose the right encoding for you
-# based on the encoding set in option "print_config.encoding"
+# based on the encoding set in option "print.encoding"
#
# 3) if you need to write something out to file, use
# pprint_thing_encoded(encoding).
@@ -1187,10 +1187,10 @@ def pprint_thing(thing, _nest_lvl=0):
hasattr(thing,'next'):
return unicode(thing)
elif (isinstance(thing, dict) and
- _nest_lvl < get_option("print_config.pprint_nest_depth")):
+ _nest_lvl < get_option("print.pprint_nest_depth")):
result = _pprint_dict(thing, _nest_lvl)
elif _is_sequence(thing) and _nest_lvl < \
- get_option("print_config.pprint_nest_depth"):
+ get_option("print.pprint_nest_depth"):
result = _pprint_seq(thing, _nest_lvl)
else:
# when used internally in the package, everything
@@ -1222,8 +1222,8 @@ def console_encode(object):
this is the sanctioned way to prepare something for
sending *to the console*, it delegates to pprint_thing() to get
a unicode representation of the object relies on the global encoding
- set in print_config.encoding. Use this everywhere
+ set in print.encoding. Use this everywhere
where you output to the console.
"""
return pprint_thing_encoded(object,
- get_option("print_config.encoding"))
+ get_option("print.encoding"))
diff --git a/pandas/core/config.py b/pandas/core/config.py
index dff60162c42ce..ef0c540f99235 100644
--- a/pandas/core/config.py
+++ b/pandas/core/config.py
@@ -53,40 +53,25 @@
from collections import namedtuple
import warnings
-DeprecatedOption = namedtuple("DeprecatedOption", "key msg rkey removal_ver")
-RegisteredOption = namedtuple("RegisteredOption", "key defval doc validator")
+DeprecatedOption = namedtuple('DeprecatedOption', 'key msg rkey removal_ver')
+RegisteredOption = namedtuple('RegisteredOption', 'key defval doc validator')
+
+_deprecated_options = {} # holds deprecated option metadata
+_registered_options = {} # holds registered option metadata
+_global_config = {} # holds the current values for registered options
+_reserved_keys = ['all'] # keys which have a special meaning
-__deprecated_options = {} # holds deprecated option metdata
-__registered_options = {} # holds registered option metdata
-__global_config = {} # holds the current values for registered options
-__reserved_keys = ["all"] # keys which have a special meaning
##########################################
# User API
-
-def get_option(pat):
- """Retrieves the value of the specified option
-
- Parameters
- ----------
- pat - str/regexp which should match a single option.
-
- Returns
- -------
- result - the value of the option
-
- Raises
- ------
- KeyError if no such option exists
- """
-
+def _get_option(pat):
keys = _select_options(pat)
if len(keys) == 0:
_warn_if_deprecated(pat)
- raise KeyError("No such keys(s)")
+ raise KeyError('No such keys(s)')
if len(keys) > 1:
- raise KeyError("Pattern matched multiple keys")
+ raise KeyError('Pattern matched multiple keys')
key = keys[0]
_warn_if_deprecated(key)
@@ -99,27 +84,14 @@ def get_option(pat):
return root[k]
-def set_option(pat, value):
- """Sets the value of the specified option
+def _set_option(pat, value):
- Parameters
- ----------
- pat - str/regexp which should match a single option.
-
- Returns
- -------
- None
-
- Raises
- ------
- KeyError if no such option exists
- """
keys = _select_options(pat)
if len(keys) == 0:
_warn_if_deprecated(pat)
- raise KeyError("No such keys(s)")
+ raise KeyError('No such keys(s)')
if len(keys) > 1:
- raise KeyError("Pattern matched multiple keys")
+ raise KeyError('Pattern matched multiple keys')
key = keys[0]
_warn_if_deprecated(key)
@@ -134,73 +106,173 @@ def set_option(pat, value):
root[k] = value
-def describe_option(pat="",_print_desc=True):
- """ Prints the description for one or more registered options
+def _describe_option(pat='', _print_desc=True):
- Call with not arguments to get a listing for all registered options.
-
- Parameters
- ----------
- pat - str, a regexp pattern. All matching keys will have their
- description displayed.
-
- _print_desc - if True (default) the description(s) will be printed
- to stdout otherwise, the description(s) will be returned
- as a unicode string (for testing).
-
- Returns
- -------
- None by default, the description(s) as a unicode string if _print_desc
- is False
-
- """
keys = _select_options(pat)
if len(keys) == 0:
- raise KeyError("No such keys(s)")
+ raise KeyError('No such keys(s)')
- s=u""
- for k in keys: # filter by pat
+ s = u''
+ for k in keys: # filter by pat
s += _build_option_description(k)
if _print_desc:
- print(s)
+ print s
else:
- return(s)
+ return s
-def reset_option(pat):
- """Reset one or more options to their default value.
- pass "all" as argument to reset all options.
-
- Parameters
- ----------
- pat - str/regex if specified only options matching `prefix`* will be reset
-
- Returns
- -------
- None
+def _reset_option(pat):
- """
keys = _select_options(pat)
- if pat == u"":
- raise ValueError("You must provide a non-empty pattern")
+ if pat == u'':
+ raise ValueError('You must provide a non-empty pattern')
if len(keys) == 0:
- raise KeyError("No such keys(s)")
+ raise KeyError('No such keys(s)')
- if len(keys) > 1 and len(pat)<4 and pat != "all":
- raise ValueError("You must specify at least 4 characters "
- "when resetting multiple keys")
+ if len(keys) > 1 and len(pat) < 4 and pat != 'all':
+ raise ValueError('You must specify at least 4 characters when '
+ 'resetting multiple keys')
for k in keys:
- set_option(k, __registered_options[k].defval)
+ _set_option(k, _registered_options[k].defval)
+
+
+# For user convenience, we'd like to have the available options described
+# in the docstring. For dev convenience we'd like to generate the docstrings
+# dynamically instead of maintaining them by hand. To do this, we use the
+# class below, which wraps functions inside a callable and converts
+# __doc__ into a property function. The docstrings below are templates
+# using the py2.6+ advanced formatting syntax to plug in a concise list
+# of options, and option descriptions.
+
+class CallableDyanmicDoc(object):
+
+ def __init__(self, func, doc_tmpl):
+ self.__doc_tmpl__ = doc_tmpl
+ self.__func__ = func
+
+ def __call__(self, *args, **kwds):
+ return self.__func__(*args, **kwds)
+
+ @property
+ def __doc__(self):
+ opts_desc = _describe_option('all', _print_desc=False)
+ opts_list = pp_options_list(_registered_options.keys())
+ return self.__doc_tmpl__.format(opts_desc=opts_desc,
+ opts_list=opts_list)
+
+_get_option_tmpl = """"get_option(pat) - Retrieves the value of the specified option
+
+Available options:
+{opts_list}
+
+Parameters
+----------
+pat - str/regexp which should match a single option.
+
+Note: partial matches are supported for convenience, but unless you use the
+full option name (e.g. x.y.z.option_name), your code may break in future
+versions if new options with similar names are introduced.
+
+Returns
+-------
+result - the value of the option
+
+Raises
+------
+KeyError if no such option exists
+
+{opts_desc}
+"""
+
+_set_option_tmpl = """set_option(pat, value) - Sets the value of the specified option
+
+Available options:
+{opts_list}
+
+Parameters
+----------
+pat - str/regexp which should match a single option.
+
+Note: partial matches are supported for convenience, but unless you use the
+full option name (e.g. x.y.z.option_name), your code may break in future
+versions if new options with similar names are introduced.
+
+value - new value of option.
+
+Returns
+-------
+None
+
+Raises
+------
+KeyError if no such option exists
+
+{opts_desc}
+"""
+
+_describe_option_tmpl = """describe_option(pat, _print_desc=True) - Prints the description
+for one or more registered options.
+
+Call with no arguments to get a listing for all registered options.
+
+Available options:
+{opts_list}
+
+Parameters
+----------
+pat - str, a regexp pattern. All matching keys will have their
+ description displayed.
+
+_print_desc - if True (default) the description(s) will be printed
+ to stdout otherwise, the description(s) will be returned
+ as a unicode string (for testing).
+
+Returns
+-------
+None by default, the description(s) as a unicode string if _print_desc
+is False
+
+{opts_desc}
+"""
+
+_reset_option_tmpl = """reset_option(pat) - Reset one or more options to their default value.
+
+Pass "all" as argument to reset all options.
+
+Available options:
+{opts_list}
+
+Parameters
+----------
+pat - str/regex if specified only options matching `prefix`* will be reset
+
+Note: partial matches are supported for convenience, but unless you use the
+full option name (e.g. x.y.z.option_name), your code may break in future
+versions if new options with similar names are introduced.
+
+Returns
+-------
+None
+
+{opts_desc}
+"""
+
+# bind the functions with their docstrings into a Callable
+# and use that as the functions exposed in pd.api
+get_option = CallableDyanmicDoc(_get_option, _get_option_tmpl)
+set_option = CallableDyanmicDoc(_set_option, _set_option_tmpl)
+reset_option = CallableDyanmicDoc(_reset_option, _reset_option_tmpl)
+describe_option = CallableDyanmicDoc(_describe_option, _describe_option_tmpl)
+
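The wrapper pattern bound above, in miniature (spelled `CallableDynamicDoc` here, and with a placeholder option list instead of the live registry): making `__doc__` a property means the rendered docstring is recomputed on every access, so it always reflects whatever options are registered at that moment.

```python
class CallableDynamicDoc(object):
    # wraps a function; __doc__ is computed on access instead of stored
    def __init__(self, func, doc_tmpl):
        self.__func__ = func
        self.__doc_tmpl__ = doc_tmpl

    def __call__(self, *args, **kwds):
        return self.__func__(*args, **kwds)

    @property
    def __doc__(self):
        # the real class formats in the current option list/descriptions;
        # 'x.y, x.z' below is a stand-in for that registry lookup
        return self.__doc_tmpl__.format(opts_list='x.y, x.z')

get_opt = CallableDynamicDoc(lambda key: key.upper(), 'get_option over: {opts_list}')
print(get_opt('abc'))     # -> 'ABC'
print(get_opt.__doc__)    # -> 'get_option over: x.y, x.z'
```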
######################################################
# Functions for use by pandas developers, in addition to User - api
-
-def register_option(key, defval, doc="", validator=None):
+def register_option(key, defval, doc='', validator=None):
"""Register an option in the package-wide pandas config object
Parameters
@@ -221,11 +293,11 @@ def register_option(key, defval, doc="", validator=None):
"""
- key=key.lower()
+ key = key.lower()
- if key in __registered_options:
+ if key in _registered_options:
raise KeyError("Option '%s' has already been registered" % key)
- if key in __reserved_keys:
+ if key in _reserved_keys:
raise KeyError("Option '%s' is a reserved key" % key)
# the default value should be legal
@@ -233,25 +305,25 @@ def register_option(key, defval, doc="", validator=None):
validator(defval)
# walk the nested dict, creating dicts as needed along the path
- path = key.split(".")
- cursor = __global_config
- for i,p in enumerate(path[:-1]):
- if not isinstance(cursor,dict):
- raise KeyError("Path prefix to option '%s' is already an option" %\
- ".".join(path[:i]))
+ path = key.split('.')
+ cursor = _global_config
+ for i, p in enumerate(path[:-1]):
+ if not isinstance(cursor, dict):
+ raise KeyError("Path prefix to option '%s' is already an option"
+ % '.'.join(path[:i]))
if not cursor.has_key(p):
cursor[p] = {}
cursor = cursor[p]
- if not isinstance(cursor,dict):
- raise KeyError("Path prefix to option '%s' is already an option" %\
- ".".join(path[:-1]))
+ if not isinstance(cursor, dict):
+ raise KeyError("Path prefix to option '%s' is already an option"
+ % '.'.join(path[:-1]))
- cursor[path[-1]] = defval # initialize
+ cursor[path[-1]] = defval # initialize
# save the option metadata
- __registered_options[key] = RegisteredOption(key=key, defval=defval,
- doc=doc, validator=validator)
+ _registered_options[key] = RegisteredOption(key=key, defval=defval,
+ doc=doc, validator=validator)
def deprecate_option(key, msg=None, rkey=None, removal_ver=None):
@@ -292,12 +364,15 @@ def deprecate_option(key, msg=None, rkey=None, removal_ver=None):
KeyError - if key has already been deprecated.
"""
- key=key.lower()
- if key in __deprecated_options:
- raise KeyError("Option '%s' has already been defined as deprecated." % key)
+ key = key.lower()
+
+ if key in _deprecated_options:
+ raise KeyError("Option '%s' has already been defined as deprecated."
+ % key)
+
+ _deprecated_options[key] = DeprecatedOption(key, msg, rkey, removal_ver)
- __deprecated_options[key] = DeprecatedOption(key, msg, rkey,removal_ver)
################################
# functions internal to the module
@@ -307,16 +382,17 @@ def _select_options(pat):
if pat=="all", returns all registered options
"""
- keys = sorted(__registered_options.keys())
- if pat == "all": # reserved key
+
+ keys = sorted(_registered_options.keys())
+ if pat == 'all': # reserved key
return keys
- return [k for k in keys if re.search(pat,k,re.I)]
+ return [k for k in keys if re.search(pat, k, re.I)]
def _get_root(key):
- path = key.split(".")
- cursor = __global_config
+ path = key.split('.')
+ cursor = _global_config
for p in path[:-1]:
cursor = cursor[p]
return cursor, path[-1]
@@ -326,7 +402,7 @@ def _is_deprecated(key):
""" Returns True if the given option has been deprecated """
key = key.lower()
- return __deprecated_options.has_key(key)
+ return _deprecated_options.has_key(key)
def _get_deprecated_option(key):
@@ -337,8 +413,9 @@ def _get_deprecated_option(key):
-------
DeprecatedOption (namedtuple) if key is deprecated, None otherwise
"""
+
try:
- d = __deprecated_options[key]
+ d = _deprecated_options[key]
except KeyError:
return None
else:
@@ -353,8 +430,9 @@ def _get_registered_option(key):
-------
RegisteredOption (namedtuple) if key is deprecated, None otherwise
"""
+
try:
- d = __registered_options[key]
+ d = _registered_options[key]
except KeyError:
return None
else:
@@ -366,6 +444,7 @@ def _translate_key(key):
    if key is deprecated and a replacement key defined, will return the
    replacement key, otherwise returns `key` as-is
"""
+
d = _get_deprecated_option(key)
if d:
return d.rkey or key
@@ -389,34 +468,65 @@ def _warn_if_deprecated(key):
else:
msg = "'%s' is deprecated" % key
if d.removal_ver:
- msg += " and will be removed in %s" % d.removal_ver
+ msg += ' and will be removed in %s' % d.removal_ver
if d.rkey:
- msg += (", please use '%s' instead." % (d.rkey))
+ msg += ", please use '%s' instead." % d.rkey
else:
- msg += (", please refrain from using it.")
+ msg += ', please refrain from using it.'
warnings.warn(msg, DeprecationWarning)
return True
return False
+
def _build_option_description(k):
""" Builds a formatted description of a registered option and prints it """
o = _get_registered_option(k)
d = _get_deprecated_option(k)
- s = u'%s: ' %k
+ s = u'%s: ' % k
if o.doc:
- s += "\n" +"\n ".join(o.doc.split("\n"))
+ s += '\n' + '\n '.join(o.doc.strip().split('\n'))
else:
- s += "No description available.\n"
+ s += 'No description available.\n'
if d:
- s += u"\n\t(Deprecated"
- s += u", use `%s` instead." % d.rkey if d.rkey else ""
- s += u")\n"
-
- s += "\n"
- return(s)
+ s += u'\n\t(Deprecated'
+ s += (u', use `%s` instead.' % d.rkey if d.rkey else '')
+ s += u')\n'
+
+ s += '\n'
+ return s
+
+
+def pp_options_list(keys, width=80, _print=False):
+ """ Builds a concise listing of available options, grouped by prefix """
+
+ from textwrap import wrap
+ from itertools import groupby
+
+ def pp(name, ks):
+ pfx = (name + '.[' if name else '')
+ ls = wrap(', '.join(ks), width, initial_indent=pfx,
+ subsequent_indent=' ' * len(pfx), break_long_words=False)
+ if ls and ls[-1] and name:
+ ls[-1] = ls[-1] + ']'
+ return ls
+
+ ls = []
+ singles = [x for x in sorted(keys) if x.find('.') < 0]
+ if singles:
+ ls += pp('', singles)
+ keys = [x for x in keys if x.find('.') >= 0]
+
+ for k, g in groupby(sorted(keys), lambda x: x[:x.rfind('.')]):
+ ks = [x[len(k) + 1:] for x in list(g)]
+ ls += pp(k, ks)
+ s = '\n'.join(ls)
+ if _print:
+ print s
+ else:
+ return s
##############
@@ -449,15 +559,18 @@ def config_prefix(prefix):
will register options "display.font.color", "display.font.size", set the
value of "display.font.size"... and so on.
"""
+
# Note: reset_option relies on set_option, and on key directly
# it does not fit in to this monkey-patching scheme
global register_option, get_option, set_option, reset_option
def wrap(func):
+
def inner(key, *args, **kwds):
- pkey="%s.%s" % (prefix, key)
+ pkey = '%s.%s' % (prefix, key)
return func(pkey, *args, **kwds)
+
return inner
_register_option = register_option
@@ -466,7 +579,7 @@ def inner(key, *args, **kwds):
set_option = wrap(set_option)
get_option = wrap(get_option)
register_option = wrap(register_option)
- yield
+ yield None
set_option = _set_option
get_option = _get_option
register_option = _register_option
@@ -474,6 +587,7 @@ def inner(key, *args, **kwds):
# These factories and methods are handy for use as the validator
# arg in register_option
+
def is_type_factory(_type):
"""
@@ -487,6 +601,7 @@ def is_type_factory(_type):
True if type(x) is equal to `_type`
"""
+
def inner(x):
if type(x) != _type:
raise ValueError("Value must have type '%s'" % str(_type))
@@ -507,12 +622,14 @@ def is_instance_factory(_type):
True if x is an instance of `_type`
"""
+
def inner(x):
if not isinstance(x, _type):
raise ValueError("Value must be an instance of '%s'" % str(_type))
return inner
+
# common type validators, for convenience
# usage: register_option(... , validator = is_int)
is_int = is_type_factory(int)
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index b0279a94983d9..a6739a1c450e9 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -1,5 +1,3 @@
-from __future__ import with_statement # support python 2.5
-
import pandas.core.config as cf
from pandas.core.config import is_int,is_bool,is_text,is_float
from pandas.core.format import detect_console_encoding
@@ -18,7 +16,7 @@
###########################################
-# options from the "print_config" namespace
+# options from the "print" namespace
pc_precision_doc="""
: int
@@ -79,7 +77,7 @@
these are generally strings meant to be displayed on the console.
"""
-with cf.config_prefix('print_config'):
+with cf.config_prefix('print'):
cf.register_option('precision', 7, pc_precision_doc, validator=is_int)
cf.register_option('digits', 7, validator=is_int)
cf.register_option('float_format', None)
diff --git a/pandas/core/format.py b/pandas/core/format.py
index 47aa072ffd644..f62e68a0f18a1 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -73,7 +73,7 @@ def __init__(self, series, buf=None, header=True, length=True,
self.header = header
if float_format is None:
- float_format = get_option("print_config.float_format")
+ float_format = get_option("print.float_format")
self.float_format = float_format
def _get_footer(self):
@@ -148,7 +148,7 @@ def _encode_diff_func():
if py3compat.PY3: # pragma: no cover
_encode_diff = lambda x: 0
else:
- encoding = get_option("print_config.encoding")
+ encoding = get_option("print.encoding")
def _encode_diff(x):
return len(x) - len(x.decode(encoding))
@@ -159,7 +159,7 @@ def _strlen_func():
if py3compat.PY3: # pragma: no cover
_strlen = len
else:
- encoding = get_option("print_config.encoding")
+ encoding = get_option("print.encoding")
def _strlen(x):
try:
return len(x.decode(encoding))
@@ -191,7 +191,7 @@ def __init__(self, frame, buf=None, columns=None, col_space=None,
self.show_index_names = index_names
if sparsify is None:
- sparsify = get_option("print_config.multi_sparse")
+ sparsify = get_option("print.multi_sparse")
self.sparsify = sparsify
@@ -203,7 +203,7 @@ def __init__(self, frame, buf=None, columns=None, col_space=None,
self.index = index
if justify is None:
- self.justify = get_option("print_config.colheader_justify")
+ self.justify = get_option("print.colheader_justify")
else:
self.justify = justify
@@ -930,13 +930,13 @@ def format_array(values, formatter, float_format=None, na_rep='NaN',
fmt_klass = GenericArrayFormatter
if space is None:
- space = get_option("print_config.column_space")
+ space = get_option("print.column_space")
if float_format is None:
- float_format = get_option("print_config.float_format")
+ float_format = get_option("print.float_format")
if digits is None:
- digits = get_option("print_config.precision")
+ digits = get_option("print.precision")
fmt_obj = fmt_klass(values, digits, na_rep=na_rep,
float_format=float_format,
@@ -964,9 +964,9 @@ def get_result(self):
def _format_strings(self):
if self.float_format is None:
- float_format = get_option("print_config.float_format")
+ float_format = get_option("print.float_format")
if float_format is None:
- fmt_str = '%% .%dg' % get_option("print_config.precision")
+ fmt_str = '%% .%dg' % get_option("print.precision")
float_format = lambda x: fmt_str % x
else:
float_format = self.float_format
@@ -1091,7 +1091,7 @@ def _make_fixed_width(strings, justify='right', minimum=None):
if minimum is not None:
max_len = max(minimum, max_len)
- conf_max = get_option("print_config.max_colwidth")
+ conf_max = get_option("print.max_colwidth")
if conf_max is not None and max_len > conf_max:
max_len = conf_max
@@ -1201,33 +1201,39 @@ def set_printoptions(precision=None, column_space=None, max_rows=None,
Default True, "sparsify" MultiIndex display (don't display repeated
elements in outer levels within groups)
"""
+ import warnings
+ warnings.warn("set_printoptions is deprecated, use set_option instead",
+ FutureWarning)
if precision is not None:
- set_option("print_config.precision", precision)
+ set_option("print.precision", precision)
if column_space is not None:
- set_option("print_config.column_space", column_space)
+ set_option("print.column_space", column_space)
if max_rows is not None:
- set_option("print_config.max_rows", max_rows)
+ set_option("print.max_rows", max_rows)
if max_colwidth is not None:
- set_option("print_config.max_colwidth", max_colwidth)
+ set_option("print.max_colwidth", max_colwidth)
if max_columns is not None:
- set_option("print_config.max_columns", max_columns)
+ set_option("print.max_columns", max_columns)
if colheader_justify is not None:
- set_option("print_config.colheader_justify", colheader_justify)
+ set_option("print.colheader_justify", colheader_justify)
if notebook_repr_html is not None:
- set_option("print_config.notebook_repr_html", notebook_repr_html)
+ set_option("print.notebook_repr_html", notebook_repr_html)
if date_dayfirst is not None:
- set_option("print_config.date_dayfirst", date_dayfirst)
+ set_option("print.date_dayfirst", date_dayfirst)
if date_yearfirst is not None:
- set_option("print_config.date_yearfirst", date_yearfirst)
+ set_option("print.date_yearfirst", date_yearfirst)
if pprint_nest_depth is not None:
- set_option("print_config.pprint_nest_depth", pprint_nest_depth)
+ set_option("print.pprint_nest_depth", pprint_nest_depth)
if multi_sparse is not None:
- set_option("print_config.multi_sparse", multi_sparse)
+ set_option("print.multi_sparse", multi_sparse)
if encoding is not None:
- set_option("print_config.encoding", encoding)
+ set_option("print.encoding", encoding)
def reset_printoptions():
- reset_option("^print_config\.")
+ import warnings
+ warnings.warn("reset_printoptions is deprecated, use reset_option instead",
+ FutureWarning)
+ reset_option("^print\.")
def detect_console_encoding():
"""
@@ -1359,8 +1365,8 @@ def set_eng_float_format(precision=None, accuracy=3, use_eng_prefix=False):
"being renamed to 'accuracy'", FutureWarning)
accuracy = precision
- set_option("print_config.float_format", EngFormatter(accuracy, use_eng_prefix))
- set_option("print_config.column_space", max(12, accuracy + 9))
+ set_option("print.float_format", EngFormatter(accuracy, use_eng_prefix))
+ set_option("print.column_space", max(12, accuracy + 9))
def _put_lines(buf, lines):
if any(isinstance(x, unicode) for x in lines):
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index d9f2807937056..ba9431cdbafde 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -589,9 +589,9 @@ def _need_info_repr_(self):
terminal_width, terminal_height = 100, 100
else:
terminal_width, terminal_height = get_terminal_size()
- max_rows = (terminal_height if get_option("print_config.max_rows") == 0
- else get_option("print_config.max_rows"))
- max_columns = get_option("print_config.max_columns")
+ max_rows = (terminal_height if get_option("print.max_rows") == 0
+ else get_option("print.max_rows"))
+ max_columns = get_option("print.max_columns")
if max_columns > 0:
if len(self.index) <= max_rows and \
@@ -669,7 +669,7 @@ def _repr_html_(self):
if com.in_qtconsole():
raise ValueError('Disable HTML output in QtConsole')
- if get_option("print_config.notebook_repr_html"):
+ if get_option("print.notebook_repr_html"):
if self._need_info_repr_():
return None
else:
diff --git a/pandas/core/index.py b/pandas/core/index.py
index ba7817046fb3a..9f0992908a3a7 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -1583,7 +1583,7 @@ def format(self, space=2, sparsify=None, adjoin=True, names=False,
result_levels.append(level)
if sparsify is None:
- sparsify = get_option("print_config.multi_sparse")
+ sparsify = get_option("print.multi_sparse")
if sparsify:
# little bit of a kludge job for #1217
diff --git a/pandas/core/series.py b/pandas/core/series.py
index ae21014c2d9a7..cd257c849aad2 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -948,8 +948,8 @@ def __unicode__(self):
Invoked by unicode(df) in py2 only. Yields a Unicode String in both py2/py3.
"""
width, height = get_terminal_size()
- max_rows = (height if get_option("print_config.max_rows") == 0
- else get_option("print_config.max_rows"))
+ max_rows = (height if get_option("print.max_rows") == 0
+ else get_option("print.max_rows"))
if len(self.index) > (max_rows or 1000):
result = self._tidy_repr(min(30, max_rows - 4))
elif len(self.index) > 0:
diff --git a/pandas/tests/test_config.py b/pandas/tests/test_config.py
index 6b91fa1e78a1c..84459e1e9811a 100644
--- a/pandas/tests/test_config.py
+++ b/pandas/tests/test_config.py
@@ -13,19 +13,19 @@ def __init__(self,*args):
from copy import deepcopy
self.cf = pd.core.config
- self.gc=deepcopy(getattr(self.cf, '__global_config'))
- self.do=deepcopy(getattr(self.cf, '__deprecated_options'))
- self.ro=deepcopy(getattr(self.cf, '__registered_options'))
+ self.gc=deepcopy(getattr(self.cf, '_global_config'))
+ self.do=deepcopy(getattr(self.cf, '_deprecated_options'))
+ self.ro=deepcopy(getattr(self.cf, '_registered_options'))
def setUp(self):
- setattr(self.cf, '__global_config', {})
- setattr(self.cf, '__deprecated_options', {})
- setattr(self.cf, '__registered_options', {})
+ setattr(self.cf, '_global_config', {})
+ setattr(self.cf, '_deprecated_options', {})
+ setattr(self.cf, '_registered_options', {})
def tearDown(self):
- setattr(self.cf, '__global_config',self.gc)
- setattr(self.cf, '__deprecated_options', self.do)
- setattr(self.cf, '__registered_options', self.ro)
+ setattr(self.cf, '_global_config',self.gc)
+ setattr(self.cf, '_deprecated_options', self.do)
+ setattr(self.cf, '_registered_options', self.ro)
def test_api(self):
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 1379615e9869b..c4e483fd57715 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -65,7 +65,7 @@ def test_repr_tuples(self):
def test_repr_truncation(self):
max_len = 20
- set_option("print_config.max_colwidth", max_len)
+ set_option("print.max_colwidth", max_len)
df = DataFrame({'A': np.random.randn(10),
'B': [tm.rands(np.random.randint(max_len - 1,
max_len + 1)) for i in range(10)]})
@@ -80,10 +80,10 @@ def test_repr_truncation(self):
else:
self.assert_('...' not in line)
- set_option("print_config.max_colwidth", 999999)
+ set_option("print.max_colwidth", 999999)
self.assert_('...' not in repr(df))
- set_option("print_config.max_colwidth", max_len + 2)
+ set_option("print.max_colwidth", max_len + 2)
self.assert_('...' not in repr(df))
def test_repr_should_return_str (self):
@@ -453,7 +453,7 @@ def test_to_string_float_formatting(self):
assert(df_s == expected)
fmt.reset_printoptions()
- self.assertEqual(get_option("print_config.precision"), 7)
+ self.assertEqual(get_option("print.precision"), 7)
df = DataFrame({'x': [1e9, 0.2512]})
df_s = df.to_string()
diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py
index f6ce7229286e8..7010c3c6bbebf 100644
--- a/pandas/tseries/tools.py
+++ b/pandas/tseries/tools.py
@@ -222,9 +222,9 @@ def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None):
return mresult
if dayfirst is None:
- dayfirst = get_option("print_config.date_dayfirst")
+ dayfirst = get_option("print.date_dayfirst")
if yearfirst is None:
- yearfirst = get_option("print_config.date_yearfirst")
+ yearfirst = get_option("print.date_yearfirst")
try:
parsed = parse(arg, dayfirst=dayfirst, yearfirst=yearfirst)
| https://api.github.com/repos/pandas-dev/pandas/pulls/2393 | 2012-11-29T18:11:43Z | 2012-12-01T20:54:27Z | 2012-12-01T20:54:27Z | 2014-06-25T07:52:10Z | |
BUG: problems with qtconsole and _repr_html | diff --git a/pandas/core/common.py b/pandas/core/common.py
index c86ee34f26d47..2607c9ad62456 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -1096,6 +1096,17 @@ def in_interactive_session():
import __main__ as main
return not hasattr(main, '__file__')
+def in_qtconsole():
+ """
+ check if we're inside an IPython qtconsole
+ """
+ try:
+ ip = get_ipython()
+ if ip.config['KernelApp']['parent_appname'] == 'ipython-qtconsole':
+ return True
+ except:
+ return False
+
# Unicode consolidation
# ---------------------
#
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 3f555c800d1eb..dca864cfe3515 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -582,7 +582,10 @@ def _need_info_repr_(self):
particular DataFrame.
"""
- terminal_width, terminal_height = get_terminal_size()
+ if com.in_qtconsole():
+ terminal_width, terminal_height = 10, 100
+ else:
+ terminal_width, terminal_height = get_terminal_size()
max_rows = (terminal_height if get_option("print_config.max_rows") == 0
else get_option("print_config.max_rows"))
max_columns = get_option("print_config.max_columns")
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 4b1978a9b8449..cb583f202fcf8 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -758,6 +758,21 @@ def test_repr_html(self):
fmt.reset_printoptions()
+ def test_fake_qtconsole_repr_html(self):
+ def get_ipython():
+ return {'config' :
+ {'KernelApp' :
+ {'parent_appname' : 'ipython-qtconsole'}}}
+
+ repstr = self.frame._repr_html_()
+ self.assert_(repstr is not None)
+
+ fmt.set_printoptions(max_rows=5, max_columns=2)
+
+ self.assert_(self.frame._repr_html_() is None)
+
+ fmt.reset_printoptions()
+
def test_to_html_with_classes(self):
df = pandas.DataFrame()
result = df.to_html(classes="sortable draggable")
| works locally but how to put the qtconsole check in a test suite? #2275
| https://api.github.com/repos/pandas-dev/pandas/pulls/2392 | 2012-11-29T17:30:38Z | 2012-11-30T23:30:50Z | null | 2014-06-29T08:04:31Z |
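The PR body above asks how to put the qtconsole check in a test suite, and the `test_fake_qtconsole_repr_html` hunk hints at the answer: supply a fake `get_ipython()` that mimics the config layout the qtconsole kernel exposes. A hedged standalone sketch (the `builtins` patching and `FakeIPython` names are illustrative, not the pandas test suite's actual approach):

```python
# Hedged sketch: exercise an in_qtconsole()-style check by planting a fake
# get_ipython() in builtins that mimics the qtconsole kernel's config.
import builtins

def in_qtconsole():
    """check if we're inside an IPython qtconsole"""
    try:
        ip = get_ipython()  # noqa: F821 -- injected by IPython at runtime
        if ip.config['KernelApp']['parent_appname'] == 'ipython-qtconsole':
            return True
    except Exception:
        return False
    return False

class FakeIPython(object):
    # config layout a qtconsole kernel would expose
    config = {'KernelApp': {'parent_appname': 'ipython-qtconsole'}}

builtins.get_ipython = lambda: FakeIPython()
assert in_qtconsole() is True
del builtins.get_ipython          # outside IPython, the name is undefined
assert in_qtconsole() is False    # NameError is swallowed by the except
```

The bare `except` in the real diff plays the same role as `except Exception` here: outside IPython, `get_ipython` simply does not exist, so the lookup fails and the check degrades to `False`.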
Dti toperiod tz | diff --git a/pandas/src/datetime.pxd b/pandas/src/datetime.pxd
index 13fc7768c4b7f..09073dcd4d3b3 100644
--- a/pandas/src/datetime.pxd
+++ b/pandas/src/datetime.pxd
@@ -1,4 +1,4 @@
-from numpy cimport int64_t, int32_t, npy_int64, npy_int32
+from numpy cimport int64_t, int32_t, npy_int64, npy_int32, ndarray
from cpython cimport PyObject
from cpython cimport PyUnicode_Check, PyUnicode_AsASCIIString
diff --git a/pandas/src/datetime.pyx b/pandas/src/datetime.pyx
index 44660cd3bb682..ed956149280d7 100644
--- a/pandas/src/datetime.pyx
+++ b/pandas/src/datetime.pyx
@@ -5,6 +5,7 @@ import numpy as np
from numpy cimport int32_t, int64_t, import_array, ndarray
from cpython cimport *
+from libc.stdlib cimport free
# this is our datetime.pxd
from datetime cimport *
from util cimport is_integer_object, is_datetime64_object
@@ -1603,3 +1604,401 @@ cpdef normalize_date(object dt):
return datetime(dt.year, dt.month, dt.day)
else:
raise TypeError('Unrecognized type: %s' % type(dt))
+
+cdef ndarray[int64_t] localize_dt64arr_to_period(ndarray[int64_t] stamps,
+ int freq, object tz):
+ cdef:
+ Py_ssize_t n = len(stamps)
+ ndarray[int64_t] result = np.empty(n, dtype=np.int64)
+ ndarray[int64_t] trans, deltas, pos
+ pandas_datetimestruct dts
+
+ if not have_pytz:
+ raise Exception('Could not find pytz module')
+
+ if _is_utc(tz):
+ for i in range(n):
+ if stamps[i] == NPY_NAT:
+ result[i] = NPY_NAT
+ continue
+ pandas_datetime_to_datetimestruct(stamps[i], PANDAS_FR_ns, &dts)
+ result[i] = get_period_ordinal(dts.year, dts.month, dts.day,
+ dts.hour, dts.min, dts.sec, freq)
+
+ elif _is_tzlocal(tz):
+ for i in range(n):
+ if stamps[i] == NPY_NAT:
+ result[i] = NPY_NAT
+ continue
+ pandas_datetime_to_datetimestruct(stamps[i], PANDAS_FR_ns,
+ &dts)
+ dt = datetime(dts.year, dts.month, dts.day, dts.hour,
+ dts.min, dts.sec, dts.us, tz)
+ delta = int(total_seconds(_get_utcoffset(tz, dt))) * 1000000000
+ pandas_datetime_to_datetimestruct(stamps[i] + delta,
+ PANDAS_FR_ns, &dts)
+ result[i] = get_period_ordinal(dts.year, dts.month, dts.day,
+ dts.hour, dts.min, dts.sec, freq)
+ else:
+ # Adjust datetime64 timestamp, recompute datetimestruct
+ trans = _get_transitions(tz)
+ deltas = _get_deltas(tz)
+ _pos = trans.searchsorted(stamps, side='right') - 1
+ if _pos.dtype != np.int64:
+ _pos = _pos.astype(np.int64)
+ pos = _pos
+
+ # statictzinfo
+ if not hasattr(tz, '_transition_info'):
+ for i in range(n):
+ if stamps[i] == NPY_NAT:
+ result[i] = NPY_NAT
+ continue
+ pandas_datetime_to_datetimestruct(stamps[i] + deltas[0],
+ PANDAS_FR_ns, &dts)
+ result[i] = get_period_ordinal(dts.year, dts.month, dts.day,
+ dts.hour, dts.min, dts.sec, freq)
+ else:
+ for i in range(n):
+ if stamps[i] == NPY_NAT:
+ result[i] = NPY_NAT
+ continue
+ pandas_datetime_to_datetimestruct(stamps[i] + deltas[pos[i]],
+ PANDAS_FR_ns, &dts)
+ result[i] = get_period_ordinal(dts.year, dts.month, dts.day,
+ dts.hour, dts.min, dts.sec, freq)
+
+ return result
+
+
+cdef extern from "period.h":
+ ctypedef struct date_info:
+ int64_t absdate
+ double abstime
+ double second
+ int minute
+ int hour
+ int day
+ int month
+ int quarter
+ int year
+ int day_of_week
+ int day_of_year
+ int calendar
+
+ ctypedef struct asfreq_info:
+ int from_week_end
+ int to_week_end
+
+ int from_a_year_end
+ int to_a_year_end
+
+ int from_q_year_end
+ int to_q_year_end
+
+ ctypedef int64_t (*freq_conv_func)(int64_t, char, asfreq_info*)
+
+ int64_t asfreq(int64_t dtordinal, int freq1, int freq2, char relation) except INT32_MIN
+ freq_conv_func get_asfreq_func(int fromFreq, int toFreq)
+ void get_asfreq_info(int fromFreq, int toFreq, asfreq_info *af_info)
+
+ int64_t get_period_ordinal(int year, int month, int day,
+ int hour, int minute, int second,
+ int freq) except INT32_MIN
+
+ int64_t get_python_ordinal(int64_t period_ordinal, int freq) except INT32_MIN
+
+ int get_date_info(int64_t ordinal, int freq, date_info *dinfo) except INT32_MIN
+ double getAbsTime(int, int64_t, int64_t)
+
+ int pyear(int64_t ordinal, int freq) except INT32_MIN
+ int pqyear(int64_t ordinal, int freq) except INT32_MIN
+ int pquarter(int64_t ordinal, int freq) except INT32_MIN
+ int pmonth(int64_t ordinal, int freq) except INT32_MIN
+ int pday(int64_t ordinal, int freq) except INT32_MIN
+ int pweekday(int64_t ordinal, int freq) except INT32_MIN
+ int pday_of_week(int64_t ordinal, int freq) except INT32_MIN
+ int pday_of_year(int64_t ordinal, int freq) except INT32_MIN
+ int pweek(int64_t ordinal, int freq) except INT32_MIN
+ int phour(int64_t ordinal, int freq) except INT32_MIN
+ int pminute(int64_t ordinal, int freq) except INT32_MIN
+ int psecond(int64_t ordinal, int freq) except INT32_MIN
+ char *c_strftime(date_info *dinfo, char *fmt)
+ int get_yq(int64_t ordinal, int freq, int *quarter, int *year)
+
+# Period logic
+#----------------------------------------------------------------------
+
+cdef inline int64_t apply_mult(int64_t period_ord, int64_t mult):
+ """
+ Get freq+multiple ordinal value from corresponding freq-only ordinal value.
+ For example, 5min ordinal will be 1/5th the 1min ordinal (rounding down to
+ integer).
+ """
+ if mult == 1:
+ return period_ord
+
+ return (period_ord - 1) // mult
+
+cdef inline int64_t remove_mult(int64_t period_ord_w_mult, int64_t mult):
+ """
+ Get freq-only ordinal value from corresponding freq+multiple ordinal.
+ """
+ if mult == 1:
+ return period_ord_w_mult
+
+ return period_ord_w_mult * mult + 1;
+
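+# The apply_mult/remove_mult docstrings above describe bucketing ordinals by
+# a frequency multiple. A plain-Python rendering of the same arithmetic, for
+# illustration only (not part of the patch):
+#
+#     def apply_mult(period_ord, mult):
+#         if mult == 1:
+#             return period_ord
+#         return (period_ord - 1) // mult
+#
+#     def remove_mult(period_ord_w_mult, mult):
+#         if mult == 1:
+#             return period_ord_w_mult
+#         return period_ord_w_mult * mult + 1
+#
+#     # ordinals 1..5 collapse into multiple-bucket 0; bucket 0 maps back
+#     # to its first member, ordinal 1
+#     assert [apply_mult(i, 5) for i in range(1, 7)] == [0, 0, 0, 0, 0, 1]
+#     assert remove_mult(0, 5) == 1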
+def dt64arr_to_periodarr(ndarray[int64_t] dtarr, int freq, tz=None):
+ """
+ Convert array of datetime64 values (passed in as 'i8' dtype) to a set of
+ periods corresponding to desired frequency, per period convention.
+ """
+ cdef:
+ ndarray[int64_t] out
+ Py_ssize_t i, l
+ pandas_datetimestruct dts
+
+ l = len(dtarr)
+
+ out = np.empty(l, dtype='i8')
+
+ if tz is None:
+ for i in range(l):
+ pandas_datetime_to_datetimestruct(dtarr[i], PANDAS_FR_ns, &dts)
+ out[i] = get_period_ordinal(dts.year, dts.month, dts.day,
+ dts.hour, dts.min, dts.sec, freq)
+ else:
+ out = localize_dt64arr_to_period(dtarr, freq, tz)
+ return out
+
+def periodarr_to_dt64arr(ndarray[int64_t] periodarr, int freq):
+ """
+ Convert array to datetime64 values from a set of ordinals corresponding to
+ periods per period convention.
+ """
+ cdef:
+ ndarray[int64_t] out
+ Py_ssize_t i, l
+
+ l = len(periodarr)
+
+ out = np.empty(l, dtype='i8')
+
+ for i in range(l):
+ out[i] = period_ordinal_to_dt64(periodarr[i], freq)
+
+ return out
+
+cdef char START = 'S'
+cdef char END = 'E'
+
+cpdef int64_t period_asfreq(int64_t period_ordinal, int freq1, int freq2,
+ bint end):
+ """
+ Convert period ordinal from one frequency to another, and if upsampling,
+ choose to use start ('S') or end ('E') of period.
+ """
+ cdef:
+ int64_t retval
+
+ if end:
+ retval = asfreq(period_ordinal, freq1, freq2, END)
+ else:
+ retval = asfreq(period_ordinal, freq1, freq2, START)
+
+ if retval == INT32_MIN:
+ raise ValueError('Frequency conversion failed')
+
+ return retval
+
+def period_asfreq_arr(ndarray[int64_t] arr, int freq1, int freq2, bint end):
+ """
+ Convert int64-array of period ordinals from one frequency to another, and
+ if upsampling, choose to use start ('S') or end ('E') of period.
+ """
+ cdef:
+ ndarray[int64_t] result
+ Py_ssize_t i, n
+ freq_conv_func func
+ asfreq_info finfo
+ int64_t val, ordinal
+ char relation
+
+ n = len(arr)
+ result = np.empty(n, dtype=np.int64)
+
+ func = get_asfreq_func(freq1, freq2)
+ get_asfreq_info(freq1, freq2, &finfo)
+
+ if end:
+ relation = END
+ else:
+ relation = START
+
+ for i in range(n):
+ val = func(arr[i], relation, &finfo)
+ if val == INT32_MIN:
+ raise ValueError("Unable to convert to desired frequency.")
+ result[i] = val
+
+ return result
+
+def period_ordinal(int y, int m, int d, int h, int min, int s, int freq):
+ cdef:
+ int64_t ordinal
+
+ return get_period_ordinal(y, m, d, h, min, s, freq)
+
+
+cpdef int64_t period_ordinal_to_dt64(int64_t ordinal, int freq):
+ cdef:
+ pandas_datetimestruct dts
+ date_info dinfo
+
+ get_date_info(ordinal, freq, &dinfo)
+
+ dts.year = dinfo.year
+ dts.month = dinfo.month
+ dts.day = dinfo.day
+ dts.hour = dinfo.hour
+ dts.min = dinfo.minute
+ dts.sec = int(dinfo.second)
+ dts.us = dts.ps = 0
+
+ return pandas_datetimestruct_to_datetime(PANDAS_FR_ns, &dts)
+
+def period_format(int64_t value, int freq, object fmt=None):
+ cdef:
+ int freq_group
+
+ if fmt is None:
+ freq_group = (freq // 1000) * 1000
+ if freq_group == 1000: # FR_ANN
+ fmt = b'%Y'
+ elif freq_group == 2000: # FR_QTR
+ fmt = b'%FQ%q'
+ elif freq_group == 3000: # FR_MTH
+ fmt = b'%Y-%m'
+ elif freq_group == 4000: # WK
+ left = period_asfreq(value, freq, 6000, 0)
+ right = period_asfreq(value, freq, 6000, 1)
+ return '%s/%s' % (period_format(left, 6000),
+ period_format(right, 6000))
+ elif (freq_group == 5000 # BUS
+ or freq_group == 6000): # DAY
+ fmt = b'%Y-%m-%d'
+ elif freq_group == 7000: # HR
+ fmt = b'%Y-%m-%d %H:00'
+ elif freq_group == 8000: # MIN
+ fmt = b'%Y-%m-%d %H:%M'
+ elif freq_group == 9000: # SEC
+ fmt = b'%Y-%m-%d %H:%M:%S'
+ else:
+ raise ValueError('Unknown freq: %d' % freq)
+
+ return _period_strftime(value, freq, fmt)
+
+
+cdef list extra_fmts = [(b"%q", b"^`AB`^"),
+ (b"%f", b"^`CD`^"),
+ (b"%F", b"^`EF`^")]
+
+cdef list str_extra_fmts = ["^`AB`^", "^`CD`^", "^`EF`^"]
+
+cdef _period_strftime(int64_t value, int freq, object fmt):
+ cdef:
+ Py_ssize_t i
+ date_info dinfo
+ char *formatted
+ object pat, repl, result
+ list found_pat = [False] * len(extra_fmts)
+ int year, quarter
+
+ if PyUnicode_Check(fmt):
+ fmt = fmt.encode('utf-8')
+
+ get_date_info(value, freq, &dinfo)
+ for i in range(len(extra_fmts)):
+ pat = extra_fmts[i][0]
+ repl = extra_fmts[i][1]
+ if pat in fmt:
+ fmt = fmt.replace(pat, repl)
+ found_pat[i] = True
+
+ formatted = c_strftime(&dinfo, <char*> fmt)
+
+ result = util.char_to_string(formatted)
+ free(formatted)
+
+ for i in range(len(extra_fmts)):
+ if found_pat[i]:
+ if get_yq(value, freq, &quarter, &year) < 0:
+ raise ValueError('Unable to get quarter and year')
+
+ if i == 0:
+ repl = '%d' % quarter
+ elif i == 1: # %f, 2-digit year
+ repl = '%.2d' % (year % 100)
+ elif i == 2:
+ repl = '%d' % year
+
+ result = result.replace(str_extra_fmts[i], repl)
+
+ # Py3?
+ if not PyString_Check(result):
+ result = str(result)
+
+ return result
+
+# period accessors
+
+ctypedef int (*accessor)(int64_t ordinal, int freq) except INT32_MIN
+
+def get_period_field(int code, int64_t value, int freq):
+ cdef accessor f = _get_accessor_func(code)
+ return f(value, freq)
+
+def get_period_field_arr(int code, ndarray[int64_t] arr, int freq):
+ cdef:
+ Py_ssize_t i, sz
+ ndarray[int64_t] out
+ accessor f
+
+ f = _get_accessor_func(code)
+
+ sz = len(arr)
+ out = np.empty(sz, dtype=np.int64)
+
+ for i in range(sz):
+ out[i] = f(arr[i], freq)
+
+ return out
+
+
+
+cdef accessor _get_accessor_func(int code):
+ if code == 0:
+ return &pyear
+ elif code == 1:
+ return &pqyear
+ elif code == 2:
+ return &pquarter
+ elif code == 3:
+ return &pmonth
+ elif code == 4:
+ return &pday
+ elif code == 5:
+ return &phour
+ elif code == 6:
+ return &pminute
+ elif code == 7:
+ return &psecond
+ elif code == 8:
+ return &pweek
+ elif code == 9:
+ return &pday_of_year
+ elif code == 10:
+ return &pweekday
+ else:
+ raise ValueError('Unrecognized code: %s' % code)
diff --git a/pandas/src/plib.pyx b/pandas/src/plib.pyx
deleted file mode 100644
index a6fe1b034bb5f..0000000000000
--- a/pandas/src/plib.pyx
+++ /dev/null
@@ -1,356 +0,0 @@
-# cython: profile=False
-
-cimport numpy as np
-import numpy as np
-
-from numpy cimport int32_t, int64_t, import_array, ndarray
-from cpython cimport *
-
-from libc.stdlib cimport free
-
-# this is our datetime.pxd
-from datetime cimport *
-from util cimport is_integer_object, is_datetime64_object
-
-from datetime import timedelta
-from dateutil.parser import parse as parse_date
-cimport util
-
-import cython
-
-# initialize numpy
-import_array()
-
-# import datetime C API
-PyDateTime_IMPORT
-
-
-cdef extern from "period.h":
- ctypedef struct date_info:
- int64_t absdate
- double abstime
- double second
- int minute
- int hour
- int day
- int month
- int quarter
- int year
- int day_of_week
- int day_of_year
- int calendar
-
- ctypedef struct asfreq_info:
- int from_week_end
- int to_week_end
-
- int from_a_year_end
- int to_a_year_end
-
- int from_q_year_end
- int to_q_year_end
-
- ctypedef int64_t (*freq_conv_func)(int64_t, char, asfreq_info*)
-
- int64_t asfreq(int64_t dtordinal, int freq1, int freq2, char relation) except INT32_MIN
- freq_conv_func get_asfreq_func(int fromFreq, int toFreq)
- void get_asfreq_info(int fromFreq, int toFreq, asfreq_info *af_info)
-
- int64_t get_period_ordinal(int year, int month, int day,
- int hour, int minute, int second,
- int freq) except INT32_MIN
-
- int64_t get_python_ordinal(int64_t period_ordinal, int freq) except INT32_MIN
-
- int get_date_info(int64_t ordinal, int freq, date_info *dinfo) except INT32_MIN
- double getAbsTime(int, int64_t, int64_t)
-
- int pyear(int64_t ordinal, int freq) except INT32_MIN
- int pqyear(int64_t ordinal, int freq) except INT32_MIN
- int pquarter(int64_t ordinal, int freq) except INT32_MIN
- int pmonth(int64_t ordinal, int freq) except INT32_MIN
- int pday(int64_t ordinal, int freq) except INT32_MIN
- int pweekday(int64_t ordinal, int freq) except INT32_MIN
- int pday_of_week(int64_t ordinal, int freq) except INT32_MIN
- int pday_of_year(int64_t ordinal, int freq) except INT32_MIN
- int pweek(int64_t ordinal, int freq) except INT32_MIN
- int phour(int64_t ordinal, int freq) except INT32_MIN
- int pminute(int64_t ordinal, int freq) except INT32_MIN
- int psecond(int64_t ordinal, int freq) except INT32_MIN
- char *c_strftime(date_info *dinfo, char *fmt)
- int get_yq(int64_t ordinal, int freq, int *quarter, int *year)
-
-# Period logic
-#----------------------------------------------------------------------
-
-cdef inline int64_t apply_mult(int64_t period_ord, int64_t mult):
- """
- Get freq+multiple ordinal value from corresponding freq-only ordinal value.
- For example, 5min ordinal will be 1/5th the 1min ordinal (rounding down to
- integer).
- """
- if mult == 1:
- return period_ord
-
- return (period_ord - 1) // mult
-
-cdef inline int64_t remove_mult(int64_t period_ord_w_mult, int64_t mult):
- """
- Get freq-only ordinal value from corresponding freq+multiple ordinal.
- """
- if mult == 1:
- return period_ord_w_mult
-
- return period_ord_w_mult * mult + 1;
-
-def dt64arr_to_periodarr(ndarray[int64_t] dtarr, int freq):
- """
- Convert array of datetime64 values (passed in as 'i8' dtype) to a set of
- periods corresponding to desired frequency, per period convention.
- """
- cdef:
- ndarray[int64_t] out
- Py_ssize_t i, l
- pandas_datetimestruct dts
-
- l = len(dtarr)
-
- out = np.empty(l, dtype='i8')
-
- for i in range(l):
- pandas_datetime_to_datetimestruct(dtarr[i], PANDAS_FR_ns, &dts)
- out[i] = get_period_ordinal(dts.year, dts.month, dts.day,
- dts.hour, dts.min, dts.sec, freq)
- return out
-
-def periodarr_to_dt64arr(ndarray[int64_t] periodarr, int freq):
- """
- Convert array to datetime64 values from a set of ordinals corresponding to
- periods per period convention.
- """
- cdef:
- ndarray[int64_t] out
- Py_ssize_t i, l
-
- l = len(periodarr)
-
- out = np.empty(l, dtype='i8')
-
- for i in range(l):
- out[i] = period_ordinal_to_dt64(periodarr[i], freq)
-
- return out
-
-cdef char START = 'S'
-cdef char END = 'E'
-
-cpdef int64_t period_asfreq(int64_t period_ordinal, int freq1, int freq2,
- bint end):
- """
- Convert period ordinal from one frequency to another, and if upsampling,
- choose to use start ('S') or end ('E') of period.
- """
- cdef:
- int64_t retval
-
- if end:
- retval = asfreq(period_ordinal, freq1, freq2, END)
- else:
- retval = asfreq(period_ordinal, freq1, freq2, START)
-
- if retval == INT32_MIN:
- raise ValueError('Frequency conversion failed')
-
- return retval
-
-def period_asfreq_arr(ndarray[int64_t] arr, int freq1, int freq2, bint end):
- """
- Convert int64-array of period ordinals from one frequency to another, and
- if upsampling, choose to use start ('S') or end ('E') of period.
- """
- cdef:
- ndarray[int64_t] result
- Py_ssize_t i, n
- freq_conv_func func
- asfreq_info finfo
- int64_t val, ordinal
- char relation
-
- n = len(arr)
- result = np.empty(n, dtype=np.int64)
-
- func = get_asfreq_func(freq1, freq2)
- get_asfreq_info(freq1, freq2, &finfo)
-
- if end:
- relation = END
- else:
- relation = START
-
- for i in range(n):
- val = func(arr[i], relation, &finfo)
- if val == INT32_MIN:
- raise ValueError("Unable to convert to desired frequency.")
- result[i] = val
-
- return result
-
-def period_ordinal(int y, int m, int d, int h, int min, int s, int freq):
- cdef:
- int64_t ordinal
-
- return get_period_ordinal(y, m, d, h, min, s, freq)
-
-
-cpdef int64_t period_ordinal_to_dt64(int64_t ordinal, int freq):
- cdef:
- pandas_datetimestruct dts
- date_info dinfo
-
- get_date_info(ordinal, freq, &dinfo)
-
- dts.year = dinfo.year
- dts.month = dinfo.month
- dts.day = dinfo.day
- dts.hour = dinfo.hour
- dts.min = dinfo.minute
- dts.sec = int(dinfo.second)
- dts.us = dts.ps = 0
-
- return pandas_datetimestruct_to_datetime(PANDAS_FR_ns, &dts)
-
-def period_format(int64_t value, int freq, object fmt=None):
- cdef:
- int freq_group
-
- if fmt is None:
- freq_group = (freq // 1000) * 1000
- if freq_group == 1000: # FR_ANN
- fmt = b'%Y'
- elif freq_group == 2000: # FR_QTR
- fmt = b'%FQ%q'
- elif freq_group == 3000: # FR_MTH
- fmt = b'%Y-%m'
- elif freq_group == 4000: # WK
- left = period_asfreq(value, freq, 6000, 0)
- right = period_asfreq(value, freq, 6000, 1)
- return '%s/%s' % (period_format(left, 6000),
- period_format(right, 6000))
- elif (freq_group == 5000 # BUS
- or freq_group == 6000): # DAY
- fmt = b'%Y-%m-%d'
- elif freq_group == 7000: # HR
- fmt = b'%Y-%m-%d %H:00'
- elif freq_group == 8000: # MIN
- fmt = b'%Y-%m-%d %H:%M'
- elif freq_group == 9000: # SEC
- fmt = b'%Y-%m-%d %H:%M:%S'
- else:
- raise ValueError('Unknown freq: %d' % freq)
-
- return _period_strftime(value, freq, fmt)
-
-
-cdef list extra_fmts = [(b"%q", b"^`AB`^"),
- (b"%f", b"^`CD`^"),
- (b"%F", b"^`EF`^")]
-
-cdef list str_extra_fmts = ["^`AB`^", "^`CD`^", "^`EF`^"]
-
-cdef _period_strftime(int64_t value, int freq, object fmt):
- cdef:
- Py_ssize_t i
- date_info dinfo
- char *formatted
- object pat, repl, result
- list found_pat = [False] * len(extra_fmts)
- int year, quarter
-
- if PyUnicode_Check(fmt):
- fmt = fmt.encode('utf-8')
-
- get_date_info(value, freq, &dinfo)
- for i in range(len(extra_fmts)):
- pat = extra_fmts[i][0]
- repl = extra_fmts[i][1]
- if pat in fmt:
- fmt = fmt.replace(pat, repl)
- found_pat[i] = True
-
- formatted = c_strftime(&dinfo, <char*> fmt)
-
- result = util.char_to_string(formatted)
- free(formatted)
-
- for i in range(len(extra_fmts)):
- if found_pat[i]:
- if get_yq(value, freq, &quarter, &year) < 0:
- raise ValueError('Unable to get quarter and year')
-
- if i == 0:
- repl = '%d' % quarter
- elif i == 1: # %f, 2-digit year
- repl = '%.2d' % (year % 100)
- elif i == 2:
- repl = '%d' % year
-
- result = result.replace(str_extra_fmts[i], repl)
-
- # Py3?
- if not PyString_Check(result):
- result = str(result)
-
- return result
-
-# period accessors
-
-ctypedef int (*accessor)(int64_t ordinal, int freq) except INT32_MIN
-
-def get_period_field(int code, int64_t value, int freq):
- cdef accessor f = _get_accessor_func(code)
- return f(value, freq)
-
-def get_period_field_arr(int code, ndarray[int64_t] arr, int freq):
- cdef:
- Py_ssize_t i, sz
- ndarray[int64_t] out
- accessor f
-
- f = _get_accessor_func(code)
-
- sz = len(arr)
- out = np.empty(sz, dtype=np.int64)
-
- for i in range(sz):
- out[i] = f(arr[i], freq)
-
- return out
-
-
-
-cdef accessor _get_accessor_func(int code):
- if code == 0:
- return &pyear
- elif code == 1:
- return &pqyear
- elif code == 2:
- return &pquarter
- elif code == 3:
- return &pmonth
- elif code == 4:
- return &pday
- elif code == 5:
- return &phour
- elif code == 6:
- return &pminute
- elif code == 7:
- return &psecond
- elif code == 8:
- return &pweek
- elif code == 9:
- return &pday_of_year
- elif code == 10:
- return &pweekday
- else:
- raise ValueError('Unrecognized code: %s' % code)
-
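The deleted `plib.pyx` above carried small helpers for mapping between freq-only and freq+multiple ordinals (`apply_mult`/`remove_mult`). A minimal pure-Python sketch of that logic, transcribed from the Cython bodies shown in the diff (function names as in the source; behavior for `mult > 1` is a bucketing round-trip, not an exact inverse):

```python
def apply_mult(period_ord, mult):
    # Map a freq-only ordinal to a freq+multiple ordinal; e.g. a 5min
    # ordinal is the 1min ordinal divided by 5, rounding down.
    if mult == 1:
        return period_ord
    return (period_ord - 1) // mult

def remove_mult(period_ord_w_mult, mult):
    # Recover a freq-only ordinal from a freq+multiple ordinal.
    # Lands on the first freq-only ordinal in the same bucket.
    if mult == 1:
        return period_ord_w_mult
    return period_ord_w_mult * mult + 1
```

For `mult == 5`, ordinals 1 through 5 collapse to bucket 0, and `remove_mult` maps bucket 0 back to ordinal 1, the start of that bucket.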
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index 92aeb1faf0ef5..789232359749b 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -702,7 +702,7 @@ def to_period(self, freq=None):
if freq is None:
freq = get_period_alias(self.freqstr)
- return PeriodIndex(self.values, freq=freq)
+ return PeriodIndex(self.values, freq=freq, tz=self.tz)
def order(self, return_indexer=False, ascending=True):
"""
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index 85b3654bac70a..02748b3c68cf6 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -15,7 +15,6 @@
from pandas.lib import Timestamp
import pandas.lib as lib
-import pandas._period as plib
import pandas._algos as _algos
@@ -25,7 +24,7 @@
def _period_field_accessor(name, alias):
def f(self):
base, mult = _gfc(self.freq)
- return plib.get_period_field(alias, self.ordinal, base)
+ return lib.get_period_field(alias, self.ordinal, base)
f.__name__ = name
return property(f)
@@ -33,7 +32,7 @@ def f(self):
def _field_accessor(name, alias):
def f(self):
base, mult = _gfc(self.freq)
- return plib.get_period_field_arr(alias, self.values, base)
+ return lib.get_period_field_arr(alias, self.values, base)
f.__name__ = name
return property(f)
@@ -120,7 +119,7 @@ def __init__(self, value=None, freq=None, ordinal=None,
raise ValueError('Only mult == 1 supported')
if self.ordinal is None:
- self.ordinal = plib.period_ordinal(dt.year, dt.month, dt.day,
+ self.ordinal = lib.period_ordinal(dt.year, dt.month, dt.day,
dt.hour, dt.minute, dt.second,
base)
@@ -175,7 +174,7 @@ def asfreq(self, freq, how='E'):
raise ValueError('Only mult == 1 supported')
end = how == 'E'
- new_ordinal = plib.period_asfreq(self.ordinal, base1, base2, end)
+ new_ordinal = lib.period_asfreq(self.ordinal, base1, base2, end)
return Period(ordinal=new_ordinal, freq=base2)
@@ -215,7 +214,7 @@ def to_timestamp(self, freq=None, how='start'):
base, mult = _gfc(freq)
val = self.asfreq(freq, how)
- dt64 = plib.period_ordinal_to_dt64(val.ordinal, base)
+ dt64 = lib.period_ordinal_to_dt64(val.ordinal, base)
return Timestamp(dt64)
year = _period_field_accessor('year', 0)
@@ -238,13 +237,13 @@ def now(cls, freq=None):
def __repr__(self):
base, mult = _gfc(self.freq)
- formatted = plib.period_format(self.ordinal, base)
+ formatted = lib.period_format(self.ordinal, base)
freqstr = _freq_mod._reverse_period_code_map[base]
return "Period('%s', '%s')" % (formatted, freqstr)
def __str__(self):
base, mult = _gfc(self.freq)
- formatted = plib.period_format(self.ordinal, base)
+ formatted = lib.period_format(self.ordinal, base)
return ("%s" % formatted)
def strftime(self, fmt):
@@ -385,7 +384,7 @@ def strftime(self, fmt):
'Jan. 01, 2001 was a Monday'
"""
base, mult = _gfc(self.freq)
- return plib.period_format(self.ordinal, base, fmt)
+ return lib.period_format(self.ordinal, base, fmt)
def _get_date_and_freq(value, freq):
@@ -421,12 +420,12 @@ def _get_ordinals(data, freq):
return lib.map_infer(data, f)
-def dt64arr_to_periodarr(data, freq):
+def dt64arr_to_periodarr(data, freq, tz):
if data.dtype != np.dtype('M8[ns]'):
raise ValueError('Wrong dtype: %s' % data.dtype)
base, mult = _gfc(freq)
- return plib.dt64arr_to_periodarr(data.view('i8'), base)
+ return lib.dt64arr_to_periodarr(data.view('i8'), base, tz)
# --- Period index sketch
def _period_index_cmp(opname):
@@ -494,6 +493,8 @@ class PeriodIndex(Int64Index):
hour : int or array, default None
minute : int or array, default None
second : int or array, default None
+ tz : object, default None
+ Timezone for converting datetime64 data to Periods
Examples
--------
@@ -514,7 +515,8 @@ def __new__(cls, data=None, ordinal=None,
freq=None, start=None, end=None, periods=None,
copy=False, name=None,
year=None, month=None, quarter=None, day=None,
- hour=None, minute=None, second=None):
+ hour=None, minute=None, second=None,
+ tz=None):
freq = _freq_mod.get_standard_freq(freq)
@@ -531,9 +533,9 @@ def __new__(cls, data=None, ordinal=None,
else:
fields = [year, month, quarter, day, hour, minute, second]
data, freq = cls._generate_range(start, end, periods,
- freq, fields)
+ freq, fields)
else:
- ordinal, freq = cls._from_arraylike(data, freq)
+ ordinal, freq = cls._from_arraylike(data, freq, tz)
data = np.array(ordinal, dtype=np.int64, copy=False)
subarr = data.view(cls)
@@ -562,7 +564,7 @@ def _generate_range(cls, start, end, periods, freq, fields):
return subarr, freq
@classmethod
- def _from_arraylike(cls, data, freq):
+ def _from_arraylike(cls, data, freq, tz):
if not isinstance(data, np.ndarray):
if np.isscalar(data) or isinstance(data, Period):
raise ValueError('PeriodIndex() must be called with a '
@@ -598,7 +600,7 @@ def _from_arraylike(cls, data, freq):
else:
base1, _ = _gfc(data.freq)
base2, _ = _gfc(freq)
- data = plib.period_asfreq_arr(data.values, base1, base2, 1)
+ data = lib.period_asfreq_arr(data.values, base1, base2, 1)
else:
if freq is None and len(data) > 0:
freq = getattr(data[0], 'freq', None)
@@ -608,7 +610,7 @@ def _from_arraylike(cls, data, freq):
'inferred from first element'))
if np.issubdtype(data.dtype, np.datetime64):
- data = dt64arr_to_periodarr(data, freq)
+ data = dt64arr_to_periodarr(data, freq, tz)
elif data.dtype == np.int64:
pass
else:
@@ -719,7 +721,7 @@ def asfreq(self, freq=None, how='E'):
raise ValueError('Only mult == 1 supported')
end = how == 'E'
- new_data = plib.period_asfreq_arr(self.values, base1, base2, end)
+ new_data = lib.period_asfreq_arr(self.values, base1, base2, end)
result = new_data.view(PeriodIndex)
result.name = self.name
@@ -786,7 +788,7 @@ def to_timestamp(self, freq=None, how='start'):
base, mult = _gfc(freq)
new_data = self.asfreq(freq, how)
- new_data = plib.periodarr_to_dt64arr(new_data.values, base)
+ new_data = lib.periodarr_to_dt64arr(new_data.values, base)
return DatetimeIndex(new_data, freq='infer', name=self.name)
def shift(self, n):
@@ -1128,7 +1130,7 @@ def _range_from_fields(year=None, month=None, quarter=None, day=None,
year, quarter = _make_field_arrays(year, quarter)
for y, q in zip(year, quarter):
y, m = _quarter_to_myear(y, q, freq)
- val = plib.period_ordinal(y, m, 1, 1, 1, 1, base)
+ val = lib.period_ordinal(y, m, 1, 1, 1, 1, base)
ordinals.append(val)
else:
base, mult = _gfc(freq)
@@ -1137,7 +1139,7 @@ def _range_from_fields(year=None, month=None, quarter=None, day=None,
arrays = _make_field_arrays(year, month, day, hour, minute, second)
for y, mth, d, h, mn, s in zip(*arrays):
- ordinals.append(plib.period_ordinal(y, mth, d, h, mn, s, base))
+ ordinals.append(lib.period_ordinal(y, mth, d, h, mn, s, base))
return np.array(ordinals, dtype=np.int64), freq
@@ -1166,7 +1168,7 @@ def _ordinal_from_fields(year, month, quarter, day, hour, minute,
if quarter is not None:
year, month = _quarter_to_myear(year, quarter, freq)
- return plib.period_ordinal(year, month, day, hour, minute, second, base)
+ return lib.period_ordinal(year, month, day, hour, minute, second, base)
def _quarter_to_myear(year, quarter, freq):
@@ -1219,4 +1221,3 @@ def period_range(start=None, end=None, periods=None, freq='D', name=None):
"""
return PeriodIndex(start=start, end=end, periods=periods,
freq=freq, name=name)
-
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index ef35c44b53772..003e6f301935d 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -1067,6 +1067,38 @@ def test_to_period(self):
pts = ts.to_period('M')
self.assert_(pts.index.equals(exp.index.asfreq('M')))
+ def test_to_period_tz(self):
+ _skip_if_no_pytz()
+ from dateutil.tz import tzlocal
+ from pandas.tseries.period import period_range
+ from pytz import utc as UTC
+
+ xp = date_range('1/1/2000', '4/1/2000').to_period()
+
+ ts = date_range('1/1/2000', '4/1/2000', tz='US/Eastern')
+
+ result = ts.to_period()[0]
+ expected = ts[0].to_period()
+
+ self.assert_(result == expected)
+ self.assert_(ts.to_period().equals(xp))
+
+ ts = date_range('1/1/2000', '4/1/2000', tz=UTC)
+
+ result = ts.to_period()[0]
+ expected = ts[0].to_period()
+
+ self.assert_(result == expected)
+ self.assert_(ts.to_period().equals(xp))
+
+ ts = date_range('1/1/2000', '4/1/2000', tz=tzlocal())
+
+ result = ts.to_period()[0]
+ expected = ts[0].to_period()
+
+ self.assert_(result == expected)
+ self.assert_(ts.to_period().equals(xp))
+
def test_frame_to_period(self):
K = 5
from pandas.tseries.period import period_range
@@ -2205,7 +2237,8 @@ def test_series_set_value(self):
def test_slice_locs_indexerror(self):
times = [datetime(2000, 1, 1) + timedelta(minutes=i) for i in range(1000000)]
s = Series(range(1000000), times)
- s.ix[datetime(1900,1,1):datetime(2100,1,1)]
+ s.ix[datetime(1900,1,1)
+:datetime(2100,1,1)]
class TestSeriesDatetime64(unittest.TestCase):
diff --git a/setup.py b/setup.py
index ca152588b9554..07a172570df62 100755
--- a/setup.py
+++ b/setup.py
@@ -551,8 +551,6 @@ def run(self):
'reduce', 'stats', 'datetime',
'hashtable', 'inference', 'properties', 'join', 'engines']
-plib_depends = ['plib']
-
def srcpath(name=None, suffix='.pyx', subdir='src'):
return pjoin('pandas', subdir, name+suffix)
@@ -560,9 +558,6 @@ def srcpath(name=None, suffix='.pyx', subdir='src'):
tseries_depends = [srcpath(f, suffix='.pyx')
for f in tseries_depends]
tseries_depends.append('pandas/src/util.pxd')
- plib_depends = [srcpath(f, suffix='.pyx')
- for f in plib_depends]
- plib_depends.append('pandas/src/util.pxd')
else:
tseries_depends = []
plib_depends = []
@@ -577,7 +572,8 @@ def srcpath(name=None, suffix='.pyx', subdir='src'):
lib_depends = tseries_depends + ['pandas/src/numpy_helper.h',
'pandas/src/parse_helper.h',
'pandas/src/datetime/np_datetime.h',
- 'pandas/src/datetime/np_datetime_strings.h']
+ 'pandas/src/datetime/np_datetime_strings.h',
+ 'pandas/src/period.h']
# some linux distros require it
libraries = ['m'] if 'win32' not in sys.platform else []
@@ -586,7 +582,8 @@ def srcpath(name=None, suffix='.pyx', subdir='src'):
depends=lib_depends,
sources=[srcpath('tseries', suffix=suffix),
'pandas/src/datetime/np_datetime.c',
- 'pandas/src/datetime/np_datetime_strings.c'],
+ 'pandas/src/datetime/np_datetime_strings.c',
+ 'pandas/src/period.c'],
include_dirs=common_include,
# pyrex_gdb=True,
# extra_compile_args=['-Wconversion']
@@ -597,14 +594,6 @@ def srcpath(name=None, suffix='.pyx', subdir='src'):
include_dirs=[np.get_include()],
libraries=libraries)
-period_ext = Extension('pandas._period',
- depends=plib_depends + ['pandas/src/numpy_helper.h',
- 'pandas/src/period.h'],
- sources=[srcpath('plib', suffix=suffix),
- 'pandas/src/datetime/np_datetime.c',
- 'pandas/src/period.c'],
- include_dirs=[np.get_include()])
-
parser_ext = Extension('pandas._parser',
depends=['pandas/src/parser/tokenizer.h',
'pandas/src/parser/io.h',
@@ -632,7 +621,6 @@ def srcpath(name=None, suffix='.pyx', subdir='src'):
extensions = [algos_ext,
lib_ext,
- period_ext,
sparse_ext,
pytables_ext,
parser_ext]
| https://api.github.com/repos/pandas-dev/pandas/pulls/2390 | 2012-11-29T17:00:10Z | 2012-11-29T21:16:29Z | null | 2012-11-29T21:16:29Z | |
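The PR above moves the period ordinal machinery (`get_period_ordinal`, `period_ordinal_to_dt64`, and friends) from `pandas._period` into `pandas.lib`. For a sense of the ordinal convention that code implements, here is a hedged pure-Python sketch of the monthly case only: pandas anchors monthly period ordinals at January 1970 == 0 (the FR_MTH frequency group in the diff), and these helper names are illustrative, not part of the pandas API:

```python
def monthly_period_ordinal(year, month):
    # Months elapsed since January 1970, which has ordinal 0 under the
    # monthly (FR_MTH) convention.
    return (year - 1970) * 12 + (month - 1)

def monthly_ordinal_to_ym(ordinal):
    # Inverse mapping: recover (year, month) from a monthly ordinal.
    # divmod handles pre-1970 (negative) ordinals correctly.
    years, month0 = divmod(ordinal, 12)
    return 1970 + years, month0 + 1
```

The round trip also works for dates before the epoch, since `divmod` floors toward negative infinity: December 1969 gets ordinal -1.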
DOC: pytables what's new tweaks. | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 1108f9ca7ef83..5785a014f8aed 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -763,6 +763,7 @@ one can use the ExcelWriter class, as in the following example:
df2.to_excel(writer, sheet_name='sheet2')
writer.save()
+.. _io-hdf5:
HDF5 (PyTables)
---------------
diff --git a/doc/source/v0.10.0.txt b/doc/source/v0.10.0.txt
index 401b2c661460f..7bfd895670ef9 100644
--- a/doc/source/v0.10.0.txt
+++ b/doc/source/v0.10.0.txt
@@ -12,10 +12,7 @@ New features
Updated PyTables Support
~~~~~~~~~~~~~~~~~~~~~~~~
-Docs for PyTables ``Table`` format & several enhancements to the api. Here is a taste of what to expect.
-
-`the full docs for tables
-<https://github.com/pydata/pandas/blob/master/io.html#hdf5-pytables>`__
+:ref:`Docs <io-hdf5>` for PyTables ``Table`` format & several enhancements to the api. Here is a taste of what to expect.
.. ipython:: python
@@ -83,12 +80,12 @@ Docs for PyTables ``Table`` format & several enhancements to the api. Here is a
**Bug Fixes**
- - added ``Term`` method of specifying where conditions, closes GH #1996
+ - added ``Term`` method of specifying where conditions (GH1996_).
- ``del store['df']`` now call ``store.remove('df')`` for store deletion
- deleting of consecutive rows is much faster than before
- ``min_itemsize`` parameter can be specified in table creation to force a minimum size for indexing columns
(the previous implementation would set the column size based on the first append)
- - indexing support via ``create_table_index`` (requires PyTables >= 2.3), close GH #698
+ - indexing support via ``create_table_index`` (requires PyTables >= 2.3) (GH698_).
- appending on a store would fail if the table was not first created via ``put``
- minor change to select and remove: require a table ONLY if where is also provided (and not None)
@@ -135,5 +132,7 @@ See the `full release notes
<https://github.com/pydata/pandas/blob/master/RELEASE.rst>`__ or issue tracker
on GitHub for a complete list.
+.. _GH698: https://github.com/pydata/pandas/issues/698
+.. _GH1996: https://github.com/pydata/pandas/issues/1996
.. _GH2316: https://github.com/pydata/pandas/issues/2316
.. _GH2097: https://github.com/pydata/pandas/issues/2097
| https://api.github.com/repos/pandas-dev/pandas/pulls/2389 | 2012-11-29T16:37:53Z | 2012-11-29T16:38:02Z | 2012-11-29T16:38:02Z | 2012-11-29T16:38:08Z |