title | diff | body | url | created_at | closed_at | merged_at | updated_at |
|---|---|---|---|---|---|---|---|
minor doc updates for pytablesv4 | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 1108f9ca7ef83..2d7be175ba549 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -854,7 +854,6 @@ after data is already in the table (this may become automatic in the future or a
df2 = df[4:]
store.append('df', df1)
store.append('df', df2)
- store.append('wp', wp)
store
store.select('df')
@@ -865,16 +864,27 @@ after data is already in the table (this may become automatic in the future or a
store.create_table_index('df')
store.handle.root.df.table
+Storing Mixed Types in a Table
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Storing mixed-dtype data is supported. Strings are stored as fixed-width using the maximum size of the appended column. Subsequent appends will truncate strings at this length.
+Passing ``min_itemsize = { column_name : size }`` as a parameter to append will set a larger minimum for the column. Storing ``floats, strings, ints, bools`` is currently supported.
+
.. ipython:: python
- :suppress:
+
+ df_mixed = df.copy()
+ df_mixed['string'] = 'string'
+ df_mixed['int'] = 1
+ df_mixed['bool'] = True
- store.close()
- import os
- os.remove('store.h5')
+ store.append('df_mixed',df_mixed)
+ df_mixed1 = store.select('df_mixed')
+ df_mixed1
+ df_mixed1.get_dtype_counts()
-Querying objects stored in Table format
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Querying a Table
+~~~~~~~~~~~~~~~~
``select`` and ``delete`` operations have an optional criteria that can be specified to select/delete only
a subset of the data. This allows one to have a very large on-disk table and retrieve only a portion of the data.
@@ -899,32 +909,32 @@ Queries are built up using a list of ``Terms`` (currently only **anding** of ter
.. ipython:: python
- store = HDFStore('store.h5')
store.append('wp',wp)
store.select('wp',[ 'major_axis>20000102', ('minor_axis', '=', ['A','B']) ])
-Delete from objects stored in Table format
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Delete from a Table
+~~~~~~~~~~~~~~~~~~~
.. ipython:: python
store.remove('wp', 'index>20000102' )
store.select('wp')
-.. ipython:: python
- :suppress:
-
- store.close()
- import os
- os.remove('store.h5')
-
Notes & Caveats
~~~~~~~~~~~~~~~
- Selection by items (the top level panel dimension) is not possible; you always get all of the items in the returned Panel
- - ``PyTables`` only supports fixed-width string columns in ``tables``. The sizes of a string based indexing column (e.g. *index* or *minor_axis*) are determined as the maximum size of the elements in that axis or by passing the ``min_itemsize`` on the first table creation. If subsequent appends introduce elements in the indexing axis that are larger than the supported indexer, an Exception will be raised (otherwise you could have a silent truncation of these indexers, leading to loss of information).
- Once a ``table`` is created its items (Panel) / columns (DataFrame) are fixed; only exactly the same columns can be appended
- You can not append/select/delete to a non-table (table creation is determined on the first append, or by passing ``table=True`` in a put operation)
+ - ``PyTables`` only supports fixed-width string columns in ``tables``. The sizes of a string based indexing column (e.g. *column* or *minor_axis*) are determined as the maximum size of the elements in that axis or by passing the parameter ``min_itemsize`` on the first table creation (``min_itemsize`` can be an integer or a dict of column name to an integer). If subsequent appends introduce elements in the indexing axis that are larger than the supported indexer, an Exception will be raised (otherwise you could have a silent truncation of these indexers, leading to loss of information). This is **ONLY** necessary for storing ``Panels`` (as the indexing column is stored directly in a column)
+
+ .. ipython:: python
+
+ store.append('wp_big_strings', wp, min_itemsize = 30)
+ wp = wp.rename_axis(lambda x: x + '_big_strings', axis=2)
+ store.append('wp_big_strings', wp)
+ store.select('wp_big_strings')
+
Performance
~~~~~~~~~~~
@@ -942,3 +952,10 @@ Performance
- ``Tables`` offer better performance when compressed after writing them (as opposed to turning on compression at the very beginning)
use the pytables utilities ``ptrepack`` to rewrite the file (and also can change compression methods)
- Duplicate rows can be written, but are filtered out in selection (with the last items being selected; thus a table is unique on major, minor pairs)
+
+.. ipython:: python
+ :suppress:
+
+ store.close()
+ import os
+ os.remove('store.h5')
diff --git a/doc/source/v0.10.0.txt b/doc/source/v0.10.0.txt
index 401b2c661460f..2068815b702b6 100644
--- a/doc/source/v0.10.0.txt
+++ b/doc/source/v0.10.0.txt
@@ -66,7 +66,7 @@ Docs for PyTables ``Table`` format & several enhancements to the api. Here is a
**Enhancements**
- - added multi-dtype support!
+ - added mixed-dtype support!
.. ipython:: python
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 1f8891ae64bef..371d2697cd984 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -890,9 +890,14 @@ def __iter__(self):
return iter(self.values)
def maybe_set_size(self, min_itemsize = None, **kwargs):
- """ maybe set a string col itemsize """
- if self.kind == 'string' and min_itemsize is not None:
- if self.typ.itemsize < min_itemsize:
+ """ maybe set a string col itemsize:
+ min_itemsize can be an integer or a dict mapping this column's name to an integer size """
+ if self.kind == 'string':
+
+ if isinstance(min_itemsize, dict):
+ min_itemsize = min_itemsize.get(self.name)
+
+ if min_itemsize is not None and self.typ.itemsize < min_itemsize:
self.typ = _tables().StringCol(itemsize = min_itemsize, pos = getattr(self.typ,'pos',None))
def validate_and_set(self, table, append, **kwargs):
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 8a21f0a444840..ca2ea2e7089a0 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -12,7 +12,7 @@
import pandas.util.testing as tm
from pandas.tests.test_series import assert_series_equal
from pandas.tests.test_frame import assert_frame_equal
-from pandas import concat
+from pandas import concat, Timestamp
try:
import tables
@@ -177,9 +177,20 @@ def test_append_with_strings(self):
expected = expected.reindex(minor_axis = sorted(expected.minor_axis))
tm.assert_panel_equal(self.store['s1'], expected)
+ # test dict format
+ self.store.append('s2', wp, min_itemsize = { 'column' : 20 })
+ self.store.append('s2', wp2)
+ expected = concat([ wp, wp2], axis = 2)
+ expected = expected.reindex(minor_axis = sorted(expected.minor_axis))
+ tm.assert_panel_equal(self.store['s2'], expected)
+
+ # apply the wrong field (similar to #1)
+ self.store.append('s3', wp, min_itemsize = { 'index' : 20 })
+ self.assertRaises(Exception, self.store.append, 's3')
+
# test truncation of bigger strings
- self.store.append('s2', wp)
- self.assertRaises(Exception, self.store.append, 's2', wp2)
+ self.store.append('s4', wp)
+ self.assertRaises(Exception, self.store.append, 's4', wp2)
def test_create_table_index(self):
wp = tm.makePanel()
@@ -245,6 +256,7 @@ def _make_one_df():
df['obj2'] = 'bar'
df['bool1'] = df['A'] > 0
df['bool2'] = df['B'] > 0
+ df['bool3'] = True
df['int1'] = 1
df['int2'] = 2
return df.consolidate()
| added a min_itemsize example to the docs
min_itemsize can be passed as a dict
| https://api.github.com/repos/pandas-dev/pandas/pulls/2387 | 2012-11-29T15:02:00Z | 2012-11-29T16:39:30Z | null | 2012-11-29T16:39:30Z |
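The dict form of ``min_itemsize`` added by this PR resolves a per-column size before comparing it against the current string column width. A minimal pure-Python sketch of that resolution step (the function name and column names here are stand-ins for illustration, not the real PyTables objects):

```python
def resolve_min_itemsize(min_itemsize, column_name):
    """Resolve min_itemsize to an int for one column.

    min_itemsize may be an integer (applies to every string column)
    or a dict mapping column name -> integer size.
    """
    if isinstance(min_itemsize, dict):
        # None if this column is not listed, i.e. no minimum enforced
        return min_itemsize.get(column_name)
    return min_itemsize

# an integer applies to any column
assert resolve_min_itemsize(30, "values") == 30
# a dict applies only to the named column
assert resolve_min_itemsize({"column": 20}, "column") == 20
assert resolve_min_itemsize({"column": 20}, "other") is None
```

Only once a non-``None`` size is resolved does ``maybe_set_size`` widen the ``StringCol``, which is why a dict keyed on the wrong field (the ``'index'`` case in the test) silently sets no minimum and a later oversized append raises.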
ENH: line_terminator parameter for DataFrame.to_csv | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 3f555c800d1eb..2d32079cf8cdd 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1305,7 +1305,7 @@ def _helper_csvexcel(self, writer, na_rep=None, cols=None,
def to_csv(self, path_or_buf, sep=",", na_rep='', float_format=None,
cols=None, header=True, index=True, index_label=None,
- mode='w', nanRep=None, encoding=None, quoting=None):
+ mode='w', nanRep=None, encoding=None, quoting=None, line_terminator='\n'):
"""
Write DataFrame to a comma-separated values (csv) file
@@ -1336,6 +1336,8 @@ def to_csv(self, path_or_buf, sep=",", na_rep='', float_format=None,
encoding : string, optional
a string representing the encoding to use if the contents are
non-ascii, for python versions prior to 3
+ line_terminator: string, default '\n'
+ The newline character or character sequence to use in the output file
"""
if nanRep is not None: # pragma: no cover
import warnings
@@ -1355,11 +1357,11 @@ def to_csv(self, path_or_buf, sep=",", na_rep='', float_format=None,
try:
if encoding is not None:
- csvout = com.UnicodeWriter(f, lineterminator='\n',
+ csvout = com.UnicodeWriter(f, lineterminator=line_terminator,
delimiter=sep, encoding=encoding,
quoting=quoting)
else:
- csvout = csv.writer(f, lineterminator='\n', delimiter=sep,
+ csvout = csv.writer(f, lineterminator=line_terminator, delimiter=sep,
quoting=quoting)
self._helper_csvexcel(csvout, na_rep=na_rep,
float_format=float_format, cols=cols,
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 9ac9a5b670b2e..4cc60d36deb7f 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -3839,6 +3839,29 @@ def test_to_csv_index_no_leading_comma(self):
'three,3,6\n')
self.assertEqual(buf.getvalue(), expected)
+ def test_to_csv_line_terminators(self):
+ df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]},
+ index=['one', 'two', 'three'])
+
+ buf = StringIO()
+ df.to_csv(buf, line_terminator='\r\n')
+ expected = (',A,B\r\n'
+ 'one,1,4\r\n'
+ 'two,2,5\r\n'
+ 'three,3,6\r\n')
+ self.assertEqual(buf.getvalue(), expected)
+
+ buf = StringIO()
+ df.to_csv(buf) # The default line terminator remains \n
+ expected = (',A,B\n'
+ 'one,1,4\n'
+ 'two,2,5\n'
+ 'three,3,6\n')
+ self.assertEqual(buf.getvalue(), expected)
+
+
+
+
def test_to_excel_from_excel(self):
try:
import xlwt
| In response to #2326
(Am I doing this right?)
| https://api.github.com/repos/pandas-dev/pandas/pulls/2383 | 2012-11-29T05:02:31Z | 2012-11-29T20:24:49Z | null | 2014-06-20T21:46:40Z |
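The change simply forwards the new keyword to the underlying ``csv`` writer, so the effect of ``lineterminator`` can be seen with the stdlib alone:

```python
import csv
import io

buf = io.StringIO()
# csv.writer's own default is '\r\n'; pandas had been passing '\n'
# unconditionally, and this PR makes that value configurable
writer = csv.writer(buf, lineterminator='\r\n', delimiter=',')
writer.writerow(['one', 1, 4])
writer.writerow(['two', 2, 5])

assert buf.getvalue() == 'one,1,4\r\ntwo,2,5\r\n'
```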
"x in ix" fails for dupe keys in DatetimeIndex | diff --git a/.travis.yml b/.travis.yml
index 33c281b7f6d57..ee60f9d7ad713 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -29,7 +29,7 @@ install:
- 'if [ $TRAVIS_PYTHON_VERSION == "3.3" ]; then pip uninstall numpy; pip install https://github.com/numpy/numpy/archive/v1.7.0b2.tar.gz; fi'
- 'if [ $TRAVIS_PYTHON_VERSION == "3.2" ] || [ $TRAVIS_PYTHON_VERSION == "3.1" ]; then pip install https://github.com/y-p/numpy/archive/1.6.2_with_travis_fix.tar.gz; fi'
- 'if [ ${TRAVIS_PYTHON_VERSION:0:1} == "2" ]; then pip install numpy; fi' # should be nop if pre-installed
- - pip install --use-mirrors cython nose pytz python-dateutil;
+ - pip install --use-mirrors cython nose pytz python-dateutil xlrd openpyxl;
script:
- python setup.py build_ext install
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 401cc45cbdbd7..ac690c651249b 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -1446,7 +1446,6 @@ def test_append_many(self):
assert_series_equal(result, self.ts)
def test_all_any(self):
- np.random.seed(12345)
ts = tm.makeTimeSeries()
bool_series = ts > 0
self.assert_(not bool_series.all())
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index 92aeb1faf0ef5..b50ed318ee038 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -552,7 +552,8 @@ def _add_delta(self, delta):
def __contains__(self, key):
try:
- return np.isscalar(self.get_loc(key))
+ res = self.get_loc(key)
+ return np.isscalar(res) or type(res) == slice
except (KeyError, TypeError):
return False
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index ef35c44b53772..7db86a3e257f0 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -59,6 +59,11 @@ def test_index_unique(self):
uniques = self.dups.index.unique()
self.assert_(uniques.dtype == 'M8[ns]') # sanity
+ def test_index_dupes_contains(self):
+ d = datetime(2011, 12, 5, 20, 30)
+ ix=DatetimeIndex([d,d])
+ self.assertTrue(d in ix)
+
def test_duplicate_dates_indexing(self):
ts = self.dups
| https://api.github.com/repos/pandas-dev/pandas/pulls/2380 | 2012-11-29T03:07:24Z | 2012-11-29T20:13:07Z | null | 2014-06-19T19:39:39Z | |
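The fix accepts a ``slice`` result from ``get_loc`` (what a ``DatetimeIndex`` returns for a duplicated, monotonic key) in addition to a scalar position. A hypothetical pure-Python mock of the two return shapes shows why the old scalar-only check failed; ``isinstance(res, (int, slice))`` stands in for the real ``np.isscalar(res) or type(res) == slice``:

```python
def contains(get_loc, key):
    """Mirror of DatetimeIndex.__contains__ after the fix: a scalar
    location OR a slice (duplicate key) both count as present."""
    try:
        res = get_loc(key)
        return isinstance(res, (int, slice))
    except (KeyError, TypeError):
        return False

# unique key -> integer position; duplicated key -> slice
assert contains(lambda k: 3, "2011-12-05") is True
assert contains(lambda k: slice(0, 2), "2011-12-05") is True  # old code: False

def missing(key):
    raise KeyError(key)

assert contains(missing, "2011-12-06") is False
```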
BLD: modify travis to install numpy 1.6.2+fixes on py3 | diff --git a/.travis.yml b/.travis.yml
index 7568ec0763366..33c281b7f6d57 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -5,7 +5,7 @@ python:
- 2.7
- 3.1
- 3.2
- # - 3.3
+ - 3.3
matrix:
include:
@@ -13,20 +13,21 @@ matrix:
env: VBENCH=true
allow_failures:
- # - python: 3.3 # until travis yograde to 1.8.4
+ - python: 3.3 # until travis upgrade to 1.8.4
- python: 2.7
env: VBENCH=true
install:
- virtualenv --version
+ - date
- whoami
- pwd
- echo $VBENCH
# install 1.7.0b2 for 3.3, and pull a version of numpy git master
# with a alternate fix for detach bug as a temporary workaround
# for the others.
- - 'if [ $TRAVIS_PYTHON_VERSION == "3.3" ]; then pip uninstall numpy; pip install http://downloads.sourceforge.net/project/numpy/NumPy/1.7.0b2/numpy-1.7.0b2.tar.gz; fi'
- - 'if [ $TRAVIS_PYTHON_VERSION == "3.2" ] || [ $TRAVIS_PYTHON_VERSION == "3.1" ]; then pip install --use-mirrors git+git://github.com/numpy/numpy.git@089bfa5865cd39e2b40099755e8563d8f0d04f5f#egg=numpy; fi'
+ - 'if [ $TRAVIS_PYTHON_VERSION == "3.3" ]; then pip uninstall numpy; pip install https://github.com/numpy/numpy/archive/v1.7.0b2.tar.gz; fi'
+ - 'if [ $TRAVIS_PYTHON_VERSION == "3.2" ] || [ $TRAVIS_PYTHON_VERSION == "3.1" ]; then pip install https://github.com/y-p/numpy/archive/1.6.2_with_travis_fix.tar.gz; fi'
- 'if [ ${TRAVIS_PYTHON_VERSION:0:1} == "2" ]; then pip install numpy; fi' # should be nop if pre-installed
- pip install --use-mirrors cython nose pytz python-dateutil;
| 1.6.2 and git master have diverged quite a bit; best to narrow down the chance
of spurious errors.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2378 | 2012-11-28T23:56:22Z | 2012-11-28T23:59:48Z | 2012-11-28T23:59:48Z | 2012-11-29T16:52:08Z |
BUG: py3.3 returns False, not TypeError datetime==datetime+tz #2331 | diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index ef35c44b53772..b60397bf138c6 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -2385,8 +2385,12 @@ def test_cant_compare_tz_naive_w_aware(self):
self.assertRaises(Exception, b.__lt__, a)
self.assertRaises(Exception, b.__gt__, a)
- self.assertRaises(Exception, a.__eq__, b.to_pydatetime())
- self.assertRaises(Exception, a.to_pydatetime().__eq__, b)
+ if sys.version_info < (3,3):
+ self.assertRaises(Exception, a.__eq__, b.to_pydatetime())
+ self.assertRaises(Exception, a.to_pydatetime().__eq__, b)
+ else:
+ self.assertFalse(a == b.to_pydatetime())
+ self.assertFalse(a.to_pydatetime() == b)
def test_delta_preserve_nanos(self):
val = Timestamp(1337299200000000123L)
| http://docs.python.org/3/whatsnew/3.3.html#datetime
This should leave just one error specific to 3.3 from those mentioned in #2331,
and that's parser related.
Don't know if other parts of the package depend on the exception being raised.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2377 | 2012-11-28T23:51:15Z | 2012-11-30T23:48:01Z | null | 2012-11-30T23:48:02Z |
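The behavior change in question is in the stdlib itself: from Python 3.3, equality between naive and aware ``datetime`` objects returns ``False`` instead of raising, while ordering comparisons still raise. This is reproducible without pandas:

```python
from datetime import datetime, timezone

naive = datetime(2011, 12, 5, 20, 30)
aware = datetime(2011, 12, 5, 20, 30, tzinfo=timezone.utc)

# Python 3.3+: mixed naive/aware equality is simply False, no exception
assert (naive == aware) is False

# ordering comparisons between naive and aware still raise TypeError
raised = False
try:
    naive < aware
except TypeError:
    raised = True
assert raised
```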
ENH: enhance useability of options API | diff --git a/doc/source/v0.10.0.txt b/doc/source/v0.10.0.txt
index ff4a4d53ff425..3b365b975754c 100644
--- a/doc/source/v0.10.0.txt
+++ b/doc/source/v0.10.0.txt
@@ -31,10 +31,11 @@ API changes
- New API functions for working with pandas options (GH2097_):
- - ``get_option`` / ``set_option`` - get/set the value of an option.
- - ``reset_option`` / ``reset_options`` - reset an options / all options to their default value.
- - ``describe_options`` - print a description of one or more option. When called with no arguments. print all registered options.
- - ``set_printoptions`` is now deprecated (but functioning), the print options now live under "print_config.XYZ". For example:
+ - ``get_option`` / ``set_option`` - get/set the value of an option. Partial names are accepted.
+ - ``reset_option`` - reset one or more options to their default value. Partial names are accepted.
+ - ``describe_option`` - print a description of one or more options. When called with no arguments, print all registered options.
+
+ Note: ``set_printoptions`` is now deprecated (but functioning), the print options now live under "print_config.XYZ". For example:
.. ipython:: python
diff --git a/pandas/core/api.py b/pandas/core/api.py
index 469f3683113ec..3da591f7a9d9e 100644
--- a/pandas/core/api.py
+++ b/pandas/core/api.py
@@ -31,4 +31,4 @@
import pandas.core.datetools as datetools
from pandas.core.config import get_option,set_option,reset_option,\
- reset_options,describe_options
+ describe_option
diff --git a/pandas/core/config.py b/pandas/core/config.py
index 09c1a5f37383d..dff60162c42ce 100644
--- a/pandas/core/config.py
+++ b/pandas/core/config.py
@@ -9,6 +9,8 @@
This module supports the following requirements:
- options are referenced using keys in dot.notation, e.g. "x.y.option - z".
+- keys are case-insensitive.
+- functions should accept partial/regex keys, when unambiguous.
- options can be registered by modules at import time.
- options can be registered at init-time (via core.config_init)
- options have a default value, and (optionally) a description and
@@ -24,6 +26,7 @@
- the user can set / get / reset or ask for the description of an option.
- a developer can register and mark an option as deprecated.
+
Implementation
==============
@@ -56,17 +59,18 @@
__deprecated_options = {} # holds deprecated option metdata
__registered_options = {} # holds registered option metdata
__global_config = {} # holds the current values for registered options
+__reserved_keys = ["all"] # keys which have a special meaning
##########################################
# User API
-def get_option(key):
+def get_option(pat):
"""Retrieves the value of the specified option
Parameters
----------
- key - str, a fully - qualified option name , e.g. "x.y.z.option"
+ pat - str/regexp which should match a single option.
Returns
-------
@@ -77,7 +81,16 @@ def get_option(key):
KeyError if no such option exists
"""
+ keys = _select_options(pat)
+ if len(keys) == 0:
+ _warn_if_deprecated(pat)
+ raise KeyError("No such keys(s)")
+ if len(keys) > 1:
+ raise KeyError("Pattern matched multiple keys")
+ key = keys[0]
+
_warn_if_deprecated(key)
+
key = _translate_key(key)
# walk the nested dict
@@ -86,12 +99,12 @@ def get_option(key):
return root[k]
-def set_option(key, value):
+def set_option(pat, value):
"""Sets the value of the specified option
Parameters
----------
- key - str, a fully - qualified option name , e.g. "x.y.z.option"
+ pat - str/regexp which should match a single option.
Returns
-------
@@ -101,6 +114,14 @@ def set_option(key, value):
------
KeyError if no such option exists
"""
+ keys = _select_options(pat)
+ if len(keys) == 0:
+ _warn_if_deprecated(pat)
+ raise KeyError("No such keys(s)")
+ if len(keys) > 1:
+ raise KeyError("Pattern matched multiple keys")
+ key = keys[0]
+
_warn_if_deprecated(key)
key = _translate_key(key)
@@ -110,31 +131,14 @@ def set_option(key, value):
# walk the nested dict
root, k = _get_root(key)
-
root[k] = value
-def _get_option_desription(key):
- """Prints the description associated with the specified option
-
- Parameters
- ----------
- key - str, a fully - qualified option name , e.g. "x.y.z.option"
-
- Returns
- -------
- None
-
- Raises
- ------
- KeyError if no such option exists
- """
- _warn_if_deprecated(key)
- key = _translate_key(key)
-
-def describe_options(pat="",_print_desc=True):
+def describe_option(pat="",_print_desc=True):
""" Prints the description for one or more registered options
+ Call with no arguments to get a listing for all registered options.
+
Parameters
----------
pat - str, a regexp pattern. All matching keys will have their
@@ -150,44 +154,47 @@ def describe_options(pat="",_print_desc=True):
is False
"""
- s=u""
- if pat in __registered_options.keys(): # exact key name?
- s = _build_option_description(pat)
- else:
- for k in sorted(__registered_options.keys()): # filter by pat
- if re.search(pat,k):
- s += _build_option_description(k)
-
- if s == u"":
+ keys = _select_options(pat)
+ if len(keys) == 0:
raise KeyError("No such keys(s)")
+ s=u""
+ for k in keys: # filter by pat
+ s += _build_option_description(k)
+
if _print_desc:
print(s)
else:
return(s)
-def reset_option(key):
- """ Reset a single option to it's default value """
- set_option(key, __registered_options[key].defval)
+def reset_option(pat):
+ """Reset one or more options to their default value.
-
-def reset_options(prefix=""):
- """ Resets all registered options to their default value
+ pass "all" as argument to reset all options.
Parameters
----------
- prefix - str, if specified only options matching `prefix`* will be reset
+ pat - str/regex; if specified, only matching options will be reset
Returns
-------
None
"""
+ keys = _select_options(pat)
- for k in __registered_options.keys():
- if k[:len(prefix)] == prefix:
- reset_option(k)
+ if pat == u"":
+ raise ValueError("You must provide a non-empty pattern")
+ if len(keys) == 0:
+ raise KeyError("No such keys(s)")
+
+ if len(keys) > 1 and len(pat)<4 and pat != "all":
+ raise ValueError("You must specify at least 4 characters "
+ "when resetting multiple keys")
+
+ for k in keys:
+ set_option(k, __registered_options[k].defval)
######################################################
# Functions for use by pandas developers, in addition to User - api
@@ -214,9 +221,12 @@ def register_option(key, defval, doc="", validator=None):
"""
+ key=key.lower()
if key in __registered_options:
raise KeyError("Option '%s' has already been registered" % key)
+ if key in __reserved_keys:
+ raise KeyError("Option '%s' is a reserved key" % key)
# the default value should be legal
if validator:
@@ -282,6 +292,8 @@ def deprecate_option(key, msg=None, rkey=None, removal_ver=None):
KeyError - if key has already been deprecated.
"""
+ key=key.lower()
+
if key in __deprecated_options:
raise KeyError("Option '%s' has already been defined as deprecated." % key)
@@ -290,6 +302,17 @@ def deprecate_option(key, msg=None, rkey=None, removal_ver=None):
################################
# functions internal to the module
+def _select_options(pat):
+ """returns a list of keys matching `pat`
+
+ if pat=="all", returns all registered options
+ """
+ keys = sorted(__registered_options.keys())
+ if pat == "all": # reserved key
+ return keys
+
+ return [k for k in keys if re.search(pat,k,re.I)]
+
def _get_root(key):
path = key.split(".")
@@ -301,6 +324,8 @@ def _get_root(key):
def _is_deprecated(key):
""" Returns True if the given option has been deprecated """
+
+ key = key.lower()
return __deprecated_options.has_key(key)
@@ -356,6 +381,7 @@ def _warn_if_deprecated(key):
-------
bool - True if `key` is deprecated, False otherwise.
"""
+
d = _get_deprecated_option(key)
if d:
if d.msg:
diff --git a/pandas/core/format.py b/pandas/core/format.py
index ba9058143c05f..f986e22265eaa 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -11,8 +11,7 @@
from pandas.core.common import adjoin, isnull, notnull
from pandas.core.index import MultiIndex, _ensure_index
from pandas.util import py3compat
-from pandas.core.config import get_option, set_option, \
- reset_options
+from pandas.core.config import get_option, set_option, reset_option
import pandas.core.common as com
import pandas.lib as lib
@@ -985,7 +984,7 @@ def set_printoptions(precision=None, column_space=None, max_rows=None,
set_option("print_config.encoding", encoding)
def reset_printoptions():
- reset_options("print_config.")
+ reset_option("^print_config\.")
def detect_console_encoding():
"""
diff --git a/pandas/tests/test_config.py b/pandas/tests/test_config.py
index 862b0d29ffcdf..6b91fa1e78a1c 100644
--- a/pandas/tests/test_config.py
+++ b/pandas/tests/test_config.py
@@ -33,8 +33,7 @@ def test_api(self):
self.assertTrue(hasattr(pd, 'get_option'))
self.assertTrue(hasattr(pd, 'set_option'))
self.assertTrue(hasattr(pd, 'reset_option'))
- self.assertTrue(hasattr(pd, 'reset_options'))
- self.assertTrue(hasattr(pd, 'describe_options'))
+ self.assertTrue(hasattr(pd, 'describe_option'))
def test_register_option(self):
self.cf.register_option('a', 1, 'doc')
@@ -55,7 +54,7 @@ def test_register_option(self):
self.cf.register_option('k.b.c.d1', 1, 'doc')
self.cf.register_option('k.b.c.d2', 1, 'doc')
- def test_describe_options(self):
+ def test_describe_option(self):
self.cf.register_option('a', 1, 'doc')
self.cf.register_option('b', 1, 'doc2')
self.cf.deprecate_option('b')
@@ -67,31 +66,52 @@ def test_describe_options(self):
self.cf.deprecate_option('g.h',rkey="blah")
# non-existent keys raise KeyError
- self.assertRaises(KeyError, self.cf.describe_options, 'no.such.key')
+ self.assertRaises(KeyError, self.cf.describe_option, 'no.such.key')
# we can get the description for any key we registered
- self.assertTrue('doc' in self.cf.describe_options('a',_print_desc=False))
- self.assertTrue('doc2' in self.cf.describe_options('b',_print_desc=False))
- self.assertTrue('precated' in self.cf.describe_options('b',_print_desc=False))
+ self.assertTrue('doc' in self.cf.describe_option('a',_print_desc=False))
+ self.assertTrue('doc2' in self.cf.describe_option('b',_print_desc=False))
+ self.assertTrue('precated' in self.cf.describe_option('b',_print_desc=False))
- self.assertTrue('doc3' in self.cf.describe_options('c.d.e1',_print_desc=False))
- self.assertTrue('doc4' in self.cf.describe_options('c.d.e2',_print_desc=False))
+ self.assertTrue('doc3' in self.cf.describe_option('c.d.e1',_print_desc=False))
+ self.assertTrue('doc4' in self.cf.describe_option('c.d.e2',_print_desc=False))
# if no doc is specified we get a default message
# saying "description not available"
- self.assertTrue('vailable' in self.cf.describe_options('f',_print_desc=False))
- self.assertTrue('vailable' in self.cf.describe_options('g.h',_print_desc=False))
- self.assertTrue('precated' in self.cf.describe_options('g.h',_print_desc=False))
- self.assertTrue('blah' in self.cf.describe_options('g.h',_print_desc=False))
+ self.assertTrue('vailable' in self.cf.describe_option('f',_print_desc=False))
+ self.assertTrue('vailable' in self.cf.describe_option('g.h',_print_desc=False))
+ self.assertTrue('precated' in self.cf.describe_option('g.h',_print_desc=False))
+ self.assertTrue('blah' in self.cf.describe_option('g.h',_print_desc=False))
+
+ def test_case_insensitive(self):
+ self.cf.register_option('KanBAN', 1, 'doc')
+
+ self.assertTrue('doc' in self.cf.describe_option('kanbaN',_print_desc=False))
+ self.assertEqual(self.cf.get_option('kanBaN'), 1)
+ self.cf.set_option('KanBan',2)
+ self.assertEqual(self.cf.get_option('kAnBaN'), 2)
+
+
+ # gets of non-existent keys fail
+ self.assertRaises(KeyError, self.cf.get_option, 'no_such_option')
+ self.cf.deprecate_option('KanBan')
+
+ # testing warning with catch_warning was only added in 2.6
+ self.assertTrue(self.cf._is_deprecated('kAnBaN'))
+
+ def test_set_option(self):
+ self.cf.register_option('a', 1, 'doc')
+ self.cf.register_option('b.c', 'hullo', 'doc2')
+ self.cf.register_option('b.b', None, 'doc2')
def test_get_option(self):
self.cf.register_option('a', 1, 'doc')
- self.cf.register_option('b.a', 'hullo', 'doc2')
+ self.cf.register_option('b.c', 'hullo', 'doc2')
self.cf.register_option('b.b', None, 'doc2')
# gets of existing keys succeed
self.assertEqual(self.cf.get_option('a'), 1)
- self.assertEqual(self.cf.get_option('b.a'), 'hullo')
+ self.assertEqual(self.cf.get_option('b.c'), 'hullo')
self.assertTrue(self.cf.get_option('b.b') is None)
# gets of non-existent keys fail
@@ -99,86 +119,86 @@ def test_get_option(self):
def test_set_option(self):
self.cf.register_option('a', 1, 'doc')
- self.cf.register_option('b.a', 'hullo', 'doc2')
+ self.cf.register_option('b.c', 'hullo', 'doc2')
self.cf.register_option('b.b', None, 'doc2')
self.assertEqual(self.cf.get_option('a'), 1)
- self.assertEqual(self.cf.get_option('b.a'), 'hullo')
+ self.assertEqual(self.cf.get_option('b.c'), 'hullo')
self.assertTrue(self.cf.get_option('b.b') is None)
self.cf.set_option('a', 2)
- self.cf.set_option('b.a', 'wurld')
+ self.cf.set_option('b.c', 'wurld')
self.cf.set_option('b.b', 1.1)
self.assertEqual(self.cf.get_option('a'), 2)
- self.assertEqual(self.cf.get_option('b.a'), 'wurld')
+ self.assertEqual(self.cf.get_option('b.c'), 'wurld')
self.assertEqual(self.cf.get_option('b.b'), 1.1)
self.assertRaises(KeyError, self.cf.set_option, 'no.such.key', None)
def test_validation(self):
self.cf.register_option('a', 1, 'doc', validator=self.cf.is_int)
- self.cf.register_option('b.a', 'hullo', 'doc2',
+ self.cf.register_option('b.c', 'hullo', 'doc2',
validator=self.cf.is_text)
self.assertRaises(ValueError, self.cf.register_option, 'a.b.c.d2',
'NO', 'doc', validator=self.cf.is_int)
self.cf.set_option('a', 2) # int is_int
- self.cf.set_option('b.a', 'wurld') # str is_str
+ self.cf.set_option('b.c', 'wurld') # str is_str
self.assertRaises(ValueError, self.cf.set_option, 'a', None) # None not is_int
self.assertRaises(ValueError, self.cf.set_option, 'a', 'ab')
- self.assertRaises(ValueError, self.cf.set_option, 'b.a', 1)
+ self.assertRaises(ValueError, self.cf.set_option, 'b.c', 1)
def test_reset_option(self):
self.cf.register_option('a', 1, 'doc', validator=self.cf.is_int)
- self.cf.register_option('b.a', 'hullo', 'doc2',
+ self.cf.register_option('b.c', 'hullo', 'doc2',
validator=self.cf.is_str)
self.assertEqual(self.cf.get_option('a'), 1)
- self.assertEqual(self.cf.get_option('b.a'), 'hullo')
+ self.assertEqual(self.cf.get_option('b.c'), 'hullo')
self.cf.set_option('a', 2)
- self.cf.set_option('b.a', 'wurld')
+ self.cf.set_option('b.c', 'wurld')
self.assertEqual(self.cf.get_option('a'), 2)
- self.assertEqual(self.cf.get_option('b.a'), 'wurld')
+ self.assertEqual(self.cf.get_option('b.c'), 'wurld')
self.cf.reset_option('a')
self.assertEqual(self.cf.get_option('a'), 1)
- self.assertEqual(self.cf.get_option('b.a'), 'wurld')
- self.cf.reset_option('b.a')
+ self.assertEqual(self.cf.get_option('b.c'), 'wurld')
+ self.cf.reset_option('b.c')
self.assertEqual(self.cf.get_option('a'), 1)
- self.assertEqual(self.cf.get_option('b.a'), 'hullo')
+ self.assertEqual(self.cf.get_option('b.c'), 'hullo')
- def test_reset_options(self):
+ def test_reset_option_all(self):
self.cf.register_option('a', 1, 'doc', validator=self.cf.is_int)
- self.cf.register_option('b.a', 'hullo', 'doc2',
+ self.cf.register_option('b.c', 'hullo', 'doc2',
validator=self.cf.is_str)
self.assertEqual(self.cf.get_option('a'), 1)
- self.assertEqual(self.cf.get_option('b.a'), 'hullo')
+ self.assertEqual(self.cf.get_option('b.c'), 'hullo')
self.cf.set_option('a', 2)
- self.cf.set_option('b.a', 'wurld')
+ self.cf.set_option('b.c', 'wurld')
self.assertEqual(self.cf.get_option('a'), 2)
- self.assertEqual(self.cf.get_option('b.a'), 'wurld')
+ self.assertEqual(self.cf.get_option('b.c'), 'wurld')
- self.cf.reset_options()
+ self.cf.reset_option("all")
self.assertEqual(self.cf.get_option('a'), 1)
- self.assertEqual(self.cf.get_option('b.a'), 'hullo')
+ self.assertEqual(self.cf.get_option('b.c'), 'hullo')
def test_deprecate_option(self):
import sys
- self.cf.deprecate_option('c') # we can deprecate non-existent options
+ self.cf.deprecate_option('foo') # we can deprecate non-existent options
# testing warning with catch_warning was only added in 2.6
if sys.version_info[:2]<(2,6):
raise nose.SkipTest()
- self.assertTrue(self.cf._is_deprecated('c'))
+ self.assertTrue(self.cf._is_deprecated('foo'))
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter('always')
try:
- self.cf.get_option('c')
+ self.cf.get_option('foo')
except KeyError:
pass
else:
@@ -188,8 +208,8 @@ def test_deprecate_option(self):
self.assertTrue('deprecated' in str(w[-1])) # we get the default message
self.cf.register_option('a', 1, 'doc', validator=self.cf.is_int)
- self.cf.register_option('b.a', 'hullo', 'doc2')
- self.cf.register_option('c', 'hullo', 'doc2')
+ self.cf.register_option('b.c', 'hullo', 'doc2')
+ self.cf.register_option('foo', 'hullo', 'doc2')
self.cf.deprecate_option('a', removal_ver='nifty_ver')
with warnings.catch_warnings(record=True) as w:
@@ -202,10 +222,10 @@ def test_deprecate_option(self):
self.assertRaises(KeyError, self.cf.deprecate_option, 'a') # can't depr. twice
- self.cf.deprecate_option('b.a', 'zounds!')
+ self.cf.deprecate_option('b.c', 'zounds!')
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter('always')
- self.cf.get_option('b.a')
+ self.cf.get_option('b.c')
self.assertEqual(len(w), 1) # should have raised one warning
self.assertTrue('zounds!' in str(w[-1])) # we get the custom message
@@ -252,8 +272,8 @@ def test_config_prefix(self):
self.assertEqual(self.cf.get_option('base.a'), 3)
self.assertEqual(self.cf.get_option('base.b'), 4)
- self.assertTrue('doc1' in self.cf.describe_options('base.a',_print_desc=False))
- self.assertTrue('doc2' in self.cf.describe_options('base.b',_print_desc=False))
+ self.assertTrue('doc1' in self.cf.describe_option('base.a',_print_desc=False))
+ self.assertTrue('doc2' in self.cf.describe_option('base.b',_print_desc=False))
self.cf.reset_option('base.a')
self.cf.reset_option('base.b')
| - Only 4 functions are now defined: `get`, `set`, `reset`, `describe`
- All functions accept a regexp as input and will
try to do the right thing accordingly.
For example: get_option("max_rows") now returns the value of print_config.max_rows.
- keys are now case-insensitive.
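The behavior described above (regexp key lookup, case-insensitive keys, get/set/reset/describe) can be illustrated with a minimal, self-contained sketch; `OptionRegistry` and its methods are hypothetical stand-ins written for this note, not the pandas implementation:

```python
import re

class OptionRegistry:
    """Toy sketch of the described API: get/set/reset/describe with
    regexp-matched, case-insensitive option keys."""

    def __init__(self):
        self._defaults = {}   # key -> (default, doc)
        self._values = {}

    def register(self, key, default, doc=""):
        key = key.lower()
        self._defaults[key] = (default, doc)
        self._values[key] = default

    def _match(self, pattern):
        # any regexp that uniquely identifies a registered key works
        keys = [k for k in self._values if re.search(pattern.lower(), k)]
        if len(keys) != 1:
            raise KeyError("pattern %r matched %d options" % (pattern, len(keys)))
        return keys[0]

    def get_option(self, pattern):
        return self._values[self._match(pattern)]

    def set_option(self, pattern, value):
        self._values[self._match(pattern)] = value

    def reset_option(self, pattern):
        if pattern == "all":
            for k, (default, _) in self._defaults.items():
                self._values[k] = default
        else:
            key = self._match(pattern)
            self._values[key] = self._defaults[key][0]

    def describe_option(self, pattern=""):
        return "\n".join("%s : %s" % (k, self._defaults[k][1])
                         for k in sorted(self._values)
                         if re.search(pattern.lower(), k))

cf = OptionRegistry()
cf.register("print_config.max_rows", 200, "max rows shown when printing")
print(cf.get_option("max_rows"))    # the short regexp resolves the full key
cf.set_option("Max_Rows", 10)       # keys are case-insensitive
print(cf.get_option("max_rows"))
```

For example, `get_option("max_rows")` resolves to the registered `print_config.max_rows` key, matching the lookup behavior the PR describes.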
| https://api.github.com/repos/pandas-dev/pandas/pulls/2376 | 2012-11-28T21:06:12Z | 2012-11-29T16:44:55Z | null | 2012-11-29T16:44:55Z |
BLD: fix travis.yml, print is a function in 3.x | diff --git a/.travis.yml b/.travis.yml
index 87da143d9d612..7568ec0763366 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -35,4 +35,4 @@ script:
- 'if [ x"$VBENCH" != x"true" ]; then nosetests --exe -w /tmp -A "not slow" pandas; fi'
- pwd
- 'if [ x"$VBENCH" == x"true" ]; then pip install sqlalchemy git+git://github.com/pydata/vbench.git; python vb_suite/perf_HEAD.py; fi'
- - python -c "import numpy;print numpy.version.version"
+ - python -c "import numpy;print(numpy.version.version)"
| https://api.github.com/repos/pandas-dev/pandas/pulls/2375 | 2012-11-28T18:24:34Z | 2012-11-28T18:25:35Z | 2012-11-28T18:25:35Z | 2012-11-28T18:25:35Z | |
Incorporate a vbench run into Travis-CI | diff --git a/.travis.yml b/.travis.yml
index 3cfb4af167038..87da143d9d612 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -5,20 +5,34 @@ python:
- 2.7
- 3.1
- 3.2
+ # - 3.3
+
+matrix:
+ include:
+ - python: 2.7
+ env: VBENCH=true
+
+ allow_failures:
+    # - python: 3.3 # until travis upgrades to 1.8.4
+ - python: 2.7
+ env: VBENCH=true
install:
- - export PYTHONIOENCODING=utf8 # activate venv 1.8.4 "detach" fix
- virtualenv --version
- whoami
- pwd
+ - echo $VBENCH
# install 1.7.0b2 for 3.3, and pull a version of numpy git master
# with a alternate fix for detach bug as a temporary workaround
# for the others.
- - "if [ $TRAVIS_PYTHON_VERSION == '3.3' ]; then pip uninstall numpy; pip install http://downloads.sourceforge.net/project/numpy/NumPy/1.7.0b2/numpy-1.7.0b2.tar.gz; fi"
- - "if [ $TRAVIS_PYTHON_VERSION == '3.2' ] || [ $TRAVIS_PYTHON_VERSION == '3.1' ]; then pip install --use-mirrors git+git://github.com/numpy/numpy.git@089bfa5865cd39e2b40099755e8563d8f0d04f5f#egg=numpy; fi"
- - "if [ ${TRAVIS_PYTHON_VERSION:0:1} == '2' ]; then pip install numpy; fi" # should be nop if pre-installed
- - pip install --use-mirrors cython nose pytz python-dateutil
+ - 'if [ $TRAVIS_PYTHON_VERSION == "3.3" ]; then pip uninstall numpy; pip install http://downloads.sourceforge.net/project/numpy/NumPy/1.7.0b2/numpy-1.7.0b2.tar.gz; fi'
+ - 'if [ $TRAVIS_PYTHON_VERSION == "3.2" ] || [ $TRAVIS_PYTHON_VERSION == "3.1" ]; then pip install --use-mirrors git+git://github.com/numpy/numpy.git@089bfa5865cd39e2b40099755e8563d8f0d04f5f#egg=numpy; fi'
+ - 'if [ ${TRAVIS_PYTHON_VERSION:0:1} == "2" ]; then pip install numpy; fi' # should be nop if pre-installed
+ - pip install --use-mirrors cython nose pytz python-dateutil;
script:
- - python setup.py build_ext install
- - nosetests --exe -w /tmp -A "not slow" pandas
+ - python setup.py build_ext install
+ - 'if [ x"$VBENCH" != x"true" ]; then nosetests --exe -w /tmp -A "not slow" pandas; fi'
+ - pwd
+ - 'if [ x"$VBENCH" == x"true" ]; then pip install sqlalchemy git+git://github.com/pydata/vbench.git; python vb_suite/perf_HEAD.py; fi'
+ - python -c "import numpy;print numpy.version.version"
diff --git a/vb_suite/measure_memory_consumption.py b/vb_suite/measure_memory_consumption.py
new file mode 100755
index 0000000000000..cdc2fe0d4b1f1
--- /dev/null
+++ b/vb_suite/measure_memory_consumption.py
@@ -0,0 +1,54 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+from __future__ import print_function
+
+"""Short one-line summary
+
+long summary
+"""
+
+def main():
+ import shutil
+ import tempfile
+ import warnings
+
+ from pandas import Series
+
+ from vbench.api import BenchmarkRunner
+ from suite import (REPO_PATH, BUILD, DB_PATH, PREPARE,
+ dependencies, benchmarks)
+
+ from memory_profiler import memory_usage
+
+ warnings.filterwarnings('ignore',category=FutureWarning)
+
+ try:
+ TMP_DIR = tempfile.mkdtemp()
+ runner = BenchmarkRunner(benchmarks, REPO_PATH, REPO_PATH, BUILD, DB_PATH,
+ TMP_DIR, PREPARE, always_clean=True,
+ # run_option='eod', start_date=START_DATE,
+ module_dependencies=dependencies)
+ results = {}
+ for b in runner.benchmarks:
+ k=b.name
+ try:
+ vs=memory_usage((b.run,))
+ v = max(vs)
+ #print(k, v)
+ results[k]=v
+ except Exception as e:
+ print("Exception caught in %s\n" % k)
+ print(str(e))
+
+ s=Series(results)
+ s.sort()
+ print((s))
+
+ finally:
+ shutil.rmtree(TMP_DIR)
+
+
+
+if __name__ == "__main__":
+ main()
diff --git a/vb_suite/perf_HEAD.py b/vb_suite/perf_HEAD.py
new file mode 100755
index 0000000000000..a5bfa2a12856b
--- /dev/null
+++ b/vb_suite/perf_HEAD.py
@@ -0,0 +1,123 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+from __future__ import print_function
+
+"""Short one-line summary
+
+long summary
+"""
+
+import urllib2
+import json
+
+import pandas as pd
+
+def dump_as_gist(data,desc="The Commit"):
+ content=dict(version="0.1",timings=data)
+ payload=dict(description=desc,
+ public=True,
+ files={'results.txt': dict(content=json.dumps(content))})
+ try:
+ r=urllib2.urlopen("https://api.github.com/gists", json.dumps(payload))
+ if 200 <=r.getcode() <300:
+ print("\n\n"+ "-"*80)
+
+ gist = json.loads(r.read())
+ file_raw_url = gist['files'].items()[0][1]['raw_url']
+ print("[vbench-gist-raw_url] %s" % file_raw_url)
+ print("[vbench-html-url] %s" % gist['html_url'])
+ print("[vbench-api-url] %s" % gist['url'])
+
+ print("-"*80 +"\n\n")
+ else:
+            print("api.github.com returned status %d" % r.getcode())
+ except:
+        print("Error occurred while dumping to gist")
+
+def main():
+ import warnings
+ from suite import benchmarks
+
+ warnings.filterwarnings('ignore',category=FutureWarning)
+
+ results=[]
+ for b in benchmarks:
+ try:
+ d=b.run()
+ d.update(dict(name=b.name))
+ results.append(d)
+ msg="{name:<40}: {timing:> 10.4f} [ms]"
+ print(msg.format(name=results[-1]['name'],
+ timing=results[-1]['timing']))
+
+ except Exception as e:
+ if (type(e) == KeyboardInterrupt or
+ 'KeyboardInterrupt' in str(d)) :
+ raise KeyboardInterrupt()
+
+ msg="{name:<40}: ERROR:\n<-------"
+ print(msg.format(name=results[-1]['name']))
+ if isinstance(d,dict):
+ if d['succeeded']:
+ print("\nException:\n%s\n" % str(e))
+ else:
+ for k,v in sorted(d.iteritems()):
+ print("{k}: {v}".format(k=k,v=v))
+
+ print("------->\n")
+
+ dump_as_gist(results,"testing")
+
+def get_vbench_log(build_url):
+ r=urllib2.urlopen(build_url)
+ if not (200 <= r.getcode() < 300):
+ return
+
+ s=json.loads(r.read())
+ s=[x for x in s['matrix'] if x['config'].get('env')]
+ #s=[x for x in s['matrix']]
+ if not s:
+ return
+ id=s[0]['id']
+ r2=urllib2.urlopen("https://api.travis-ci.org/jobs/%s" % id)
+ if (not 200 <= r.getcode() < 300):
+ return
+ s2=json.loads(r2.read())
+ return s2.get('log')
+
+def get_results_raw_url(build):
+ import re
+ log=get_vbench_log("https://api.travis-ci.org/builds/%s" % build)
+ l=[x.strip() for x in log.split("\n") if re.match(".vbench-gist-raw_url",x)]
+ if l:
+ s=l[0]
+ m = re.search("(https://[^\s]+)",s)
+ if m:
+ return m.group(0)
+
+def get_build_results(build):
+
+ r_url=get_results_raw_url(build)
+ if not r_url:
+ return
+ res=json.loads(urllib2.urlopen(r_url).read())
+ timings=res.get("timings")
+ if not timings:
+ return
+ res=[x for x in timings if x.get('succeeded')]
+ df = pd.DataFrame(res)
+ df = df.set_index("name")
+ return df
+
+# builds=[3393087,3393105,3393122,3393125,3393130]
+# dfs=[get_build_results(x) for x in builds]
+# dfs2=[x[['timing']] for x in dfs]
+# for df,b in zip(dfs2,builds):
+# df.columns=[str(b)]
+# df = dfs2[0]
+# for other in dfs2[1:]:
+# df=df.join(other,how='outer')
+
+if __name__ == "__main__":
+ main()
| A separate 2.7 env is spawned, which is allowed to fail without
failing the entire build. The results are posted to a public gist as
a json file, and the gist url is printed in the log.
A helper function in vb_suite/perf_HEAD.py accepts a travis
build number. It then locates the VBENCH job within the build, fetches the log
through the Travis API, extracts the gist raw_url from the log, fetches the results
as json, and converts them into a pandas DataFrame.
It remains to be seen whether the results are consistent
enough between builds over time to actually make this useful.
Here's how to gather the results from a few consecutive test builds I ran:
``` python
%cd vb_suite
import pandas as pd
from perf_HEAD import get_build_results
builds=[3393087,3393105,3393122,3393125,3393130]
dfs=[get_build_results(x) for x in builds]
dfs2=[x[['timing']] for x in dfs]
for df,b in zip(dfs2,builds):
df.columns=[str(b)]
df = dfs2[0]
for other in dfs2[1:]:
df=df.join(other,how='outer')
df[:20]
3393087 3393105 3393122 3393125 3393130
name
append_frame_single_homogenous 0.641988 0.340924 0.524198 0.400884 0.373756
append_frame_single_mixed 2.159144 1.354964 1.639278 1.421034 1.301402
concat_series_axis1 131.367922 56.542015 92.097044 75.879002 71.159506
ctor_index_array_string 0.009129 0.004975 0.005761 0.006048 0.007652
dataframe_get_value 0.001885 0.001201 0.001403 0.001305 0.001331
dataframe_getitem_scalar 0.005543 0.003445 0.003394 0.003432 0.003307
dataframe_reindex_columns 0.494599 0.233001 0.370013 0.334528 0.316710
dataframe_reindex_daterange 0.683607 0.329319 0.552313 0.506124 0.439592
datamatrix_getitem_scalar 0.004519 0.003790 0.003355 0.003436 0.003317
datetimeindex_add_offset 0.439327 0.227953 0.327404 0.410341 0.312527
datetimeindex_normalize 73.160005 51.629400 60.980701 59.092712 58.182502
frame_boolean_row_select 0.769411 0.596314 0.723754 0.564437 0.653430
frame_constructor_ndarray 0.064187 0.028872 0.033200 0.033250 0.042343
frame_ctor_list_of_dict 140.382051 86.547899 114.851952 127.656937 114.904165
frame_ctor_nested_dict 124.281883 73.726416 97.840071 106.689930 111.264944
frame_ctor_nested_dict_int64 229.266882 130.098104 143.503904 183.116913 152.652025
frame_fancy_lookup 2.752171 2.173202 2.223320 1.820600 2.148161
frame_fancy_lookup_all 135.908127 113.823605 117.978096 108.165979 113.741159
frame_fillna_inplace 25.430799 16.801000 20.938993 24.916101 21.809912
frame_fillna_many_columns_pad 23.508501 16.926599 20.899606 19.706178 17.867088
```
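The log-scraping step described above (finding the `[vbench-gist-raw_url]` marker that perf_HEAD.py prints into the Travis log) boils down to one regexp pass per line; `extract_raw_url` is a hypothetical helper written for illustration, not code from the PR:

```python
import re

def extract_raw_url(log):
    """Return the gist raw_url printed into a Travis log, or None
    when the marker line is absent."""
    for line in log.splitlines():
        m = re.match(r"\s*\[vbench-gist-raw_url\]\s+(https://\S+)", line)
        if m:
            return m.group(1)
    return None

sample_log = """building pandas...
[vbench-gist-raw_url] https://gist.github.com/raw/abc123/results.txt
[vbench-html-url] https://gist.github.com/abc123
"""
print(extract_raw_url(sample_log))
```

The gist URL and sample log are made up; the marker format matches what `dump_as_gist` in the diff prints.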
| https://api.github.com/repos/pandas-dev/pandas/pulls/2373 | 2012-11-28T03:03:09Z | 2012-11-28T04:35:39Z | null | 2014-07-21T09:13:49Z |
DOC: mention new options API in whatsnew 0.10.0 | diff --git a/doc/source/v0.10.0.txt b/doc/source/v0.10.0.txt
index 8a5652523dfda..ff4a4d53ff425 100644
--- a/doc/source/v0.10.0.txt
+++ b/doc/source/v0.10.0.txt
@@ -18,20 +18,33 @@ API changes
.. ipython:: python
def f(x):
- return Series([ x, x**2 ], index = ['x', 'x^s'])
+ return Series([ x, x**2 ], index = ['x', 'x^s'])
s = Series(np.random.rand(5))
- s
+ s
s.apply(f)
This is conceptually similar to the following.
.. ipython:: python
- concat([ f(y) for x, y in s.iteritems() ], axis=1).T
+ concat([ f(y) for x, y in s.iteritems() ], axis=1).T
+ - New API functions for working with pandas options (GH2097_):
+
+ - ``get_option`` / ``set_option`` - get/set the value of an option.
+   - ``reset_option`` / ``reset_options`` - reset an option / all options to their default value.
+   - ``describe_options`` - print a description of one or more options. When called with no arguments, prints all registered options.
+   - ``set_printoptions`` is now deprecated (but functioning); the print options now live under "print_config.XYZ". For example:
+
+
+ .. ipython:: python
+
+ import pandas as pd
+ pd.get_option("print_config.max_rows")
See the `full release notes
<https://github.com/pydata/pandas/blob/master/RELEASE.rst>`__ or issue tracker
on GitHub for a complete list.
.. _GH2316: https://github.com/pydata/pandas/issues/2316
+.. _GH2097: https://github.com/pydata/pandas/issues/2097
| https://api.github.com/repos/pandas-dev/pandas/pulls/2372 | 2012-11-27T23:15:05Z | 2012-11-27T23:15:15Z | 2012-11-27T23:15:15Z | 2012-11-27T23:15:15Z | |
Pytablesv4 | diff --git a/RELEASE.rst b/RELEASE.rst
index a34d5f0d4b58f..171b0e8bd2694 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -34,6 +34,7 @@ pandas 0.10.0
- Grouped histogram via `by` keyword in Series/DataFrame.hist (#2186)
- Support optional ``min_periods`` keyword in ``corr`` and ``cov``
for both Series and DataFrame (#2002)
+ - Add docs for ``HDFStore table`` format
**API Changes**
@@ -55,6 +56,11 @@ pandas 0.10.0
- Add ``normalize`` option to Series/DataFrame.asfreq (#2137)
- SparseSeries and SparseDataFrame construction from empty and scalar
values now no longer create dense ndarrays unnecessarily (#2322)
+ - Support multiple query selection formats for ``HDFStore tables`` (#1996)
+ - Support ``del store['df']`` syntax to delete HDFStores
+ - Add multi-dtype support for ``HDFStore tables``
+ - ``min_itemsize`` parameter can be specified in ``HDFStore table`` creation
+ - Indexing support in ``HDFStore tables`` (#698)
**Bug fixes**
@@ -72,6 +78,9 @@ pandas 0.10.0
- Respect dtype=object in DataFrame constructor (#2291)
- Fix DatetimeIndex.join bug with tz-aware indexes and how='outer' (#2317)
- pop(...) and del works with DataFrame with duplicate columns (#2349)
+   - Deleting consecutive rows in ``HDFStore tables`` is much faster than before
+ - Appending on a HDFStore would fail if the table was not first created via ``put``
+
pandas 0.9.1
============
diff --git a/doc/source/io.rst b/doc/source/io.rst
index f74120ad7ef57..1108f9ca7ef83 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1,3 +1,4 @@
+
.. _io:
.. currentmodule:: pandas
@@ -793,17 +794,123 @@ Objects can be written to the file just like adding key-value pairs to a dict:
major_axis=date_range('1/1/2000', periods=5),
minor_axis=['A', 'B', 'C', 'D'])
+ # store.put('s', s') is an equivalent method
store['s'] = s
+
store['df'] = df
+
store['wp'] = wp
+
+ # the type of stored data
+ store.handle.root.wp._v_attrs.pandas_type
+
store
In a current or later Python session, you can retrieve stored objects:
.. ipython:: python
+ # store.get('df') is an equivalent method
store['df']
+Deletion of the object specified by the key
+
+.. ipython:: python
+
+ # store.remove('wp') is an equivalent method
+ del store['wp']
+
+ store
+
+.. ipython:: python
+ :suppress:
+
+ store.close()
+ import os
+ os.remove('store.h5')
+
+
+These stores are **not** appendable once written (though you can simply remove them and rewrite). Nor are they **queryable**; they must be retrieved in their entirety.
+
+
+Storing in Table format
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``HDFStore`` supports another ``PyTables`` format on disk, the ``table`` format. Conceptually a ``table`` is shaped
+very much like a DataFrame, with rows and columns. A ``table`` may be appended to in the same or other sessions.
+In addition, delete & query type operations are supported. You can create an index with ``create_table_index``
+after data is already in the table (this may become automatic in the future or an option on appending/putting a ``table``).
+
+.. ipython:: python
+ :suppress:
+ :okexcept:
+
+ os.remove('store.h5')
+
+.. ipython:: python
+
+ store = HDFStore('store.h5')
+ df1 = df[0:4]
+ df2 = df[4:]
+ store.append('df', df1)
+ store.append('df', df2)
+ store.append('wp', wp)
+ store
+
+ store.select('df')
+
+ # the type of stored data
+ store.handle.root.df._v_attrs.pandas_type
+
+ store.create_table_index('df')
+ store.handle.root.df.table
+
+.. ipython:: python
+ :suppress:
+
+ store.close()
+ import os
+ os.remove('store.h5')
+
+
+Querying objects stored in Table format
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``select`` and ``delete`` operations have an optional criteria that can be specified to select/delete only
+a subset of the data. This allows one to have a very large on-disk table and retrieve only a portion of the data.
+
+A query is specified using the ``Term`` class under the hood.
+
+ - 'index' and 'column' are supported indexers of a DataFrame
+ - 'major_axis' and 'minor_axis' are supported indexers of the Panel
+
+Valid terms can be created from ``dict, list, tuple, or string``. Objects can be embedded as values. Allowed operations are: ``<, <=, >, >=, =``. ``=`` will be inferred as an implicit set operation (e.g. if 2 or more values are provided). The following are all valid terms.
+
+ - ``dict(field = 'index', op = '>', value = '20121114')``
+ - ``('index', '>', '20121114')``
+ - ``'index>20121114'``
+ - ``('index', '>', datetime(2012,11,14))``
+ - ``('index', ['20121114','20121115'])``
+ - ``('major', '=', Timestamp('2012/11/14'))``
+ - ``('minor_axis', ['A','B'])``
+
+Queries are built up using a list of ``Terms`` (currently only **anding** of terms is supported). An example query for a panel might be specified as follows.
+``['major_axis>20000102', ('minor_axis', '=', ['A','B']) ]``. This is roughly translated to: `major_axis must be greater than the date 20000102 and the minor_axis must be A or B`
+
+.. ipython:: python
+
+ store = HDFStore('store.h5')
+ store.append('wp',wp)
+ store.select('wp',[ 'major_axis>20000102', ('minor_axis', '=', ['A','B']) ])
+
+Delete from objects stored in Table format
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. ipython:: python
+
+ store.remove('wp', 'index>20000102' )
+ store.select('wp')
+
.. ipython:: python
:suppress:
@@ -811,9 +918,27 @@ In a current or later Python session, you can retrieve stored objects:
import os
os.remove('store.h5')
+Notes & Caveats
+~~~~~~~~~~~~~~~
+
+ - Selection by items (the top level panel dimension) is not possible; you always get all of the items in the returned Panel
+ - ``PyTables`` only supports fixed-width string columns in ``tables``. The sizes of a string based indexing column (e.g. *index* or *minor_axis*) are determined as the maximum size of the elements in that axis or by passing the ``min_itemsize`` on the first table creation. If subsequent appends introduce elements in the indexing axis that are larger than the supported indexer, an Exception will be raised (otherwise you could have a silent truncation of these indexers, leading to loss of information).
+ - Once a ``table`` is created its items (Panel) / columns (DataFrame) are fixed; only exactly the same columns can be appended
+ - You can not append/select/delete to a non-table (table creation is determined on the first append, or by passing ``table=True`` in a put operation)
+
+Performance
+~~~~~~~~~~~
+
+ - ``Tables`` come with a performance penalty as compared to regular stores. The benefit is the ability to append/delete and query (potentially very large amounts of data).
+ Write times are generally longer as compared with regular stores. Query times can be quite fast, especially on an indexed axis.
+ - ``Tables`` can (as of 0.10.0) be expressed as different types.
-.. Storing in Table format
-.. ~~~~~~~~~~~~~~~~~~~~~~~
+  - ``AppendableTable`` which is a table similar to past versions (this is the default).
+  - ``WORMTable`` (pending implementation) - is available to facilitate very fast writing of tables that are also queryable (but CANNOT support appends)
-.. Querying objects stored in Table format
-.. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ - To delete a lot of data, it is sometimes better to erase the table and rewrite it. ``PyTables`` tends to increase the file size with deletions
+ - In general it is best to store Panels with the most frequently selected dimension in the minor axis and a time/date like dimension in the major axis, but this is not required. Panels can have any major_axis and minor_axis type that is a valid Panel indexer.
+  - No dimensions are currently indexed automagically (in the ``PyTables`` sense); these require an explicit call to ``create_table_index``
+ - ``Tables`` offer better performance when compressed after writing them (as opposed to turning on compression at the very beginning)
+    use the PyTables utility ``ptrepack`` to rewrite the file (it can also change compression methods)
+ - Duplicate rows can be written, but are filtered out in selection (with the last items being selected; thus a table is unique on major, minor pairs)
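The term grammar documented in the io.rst additions above can be sketched outside of PyTables as a plain predicate over rows; `parse_term` and `matches` below are hypothetical helpers written for illustration, not pandas API, and the and-only combination of terms mirrors the docs:

```python
import re

_OPS = {
    '<':  lambda a, b: a < b,
    '<=': lambda a, b: a <= b,
    '>':  lambda a, b: a > b,
    '>=': lambda a, b: a >= b,
    # '=' is an implicit set operation when the value is list-like
    '=':  lambda a, b: (a in b) if isinstance(b, (list, tuple)) else a == b,
}

def parse_term(term):
    """Normalize the dict / string / tuple term formats into (field, op, value)."""
    if isinstance(term, dict):
        return term['field'], term.get('op', '='), term['value']
    if isinstance(term, str):
        m = re.match(r"(\w+)\s*(<=|>=|<|>|=)\s*(.+)", term)
        return m.group(1), m.group(2), m.group(3)
    if len(term) == 2:                     # e.g. ('minor_axis', ['A', 'B'])
        return term[0], '=', term[1]
    return tuple(term)                     # already (field, op, value)

def matches(row, terms):
    """AND together a list of terms against a row (a plain dict here)."""
    return all(_OPS[op](row[field], value)
               for field, op, value in map(parse_term, terms))

row = {'major_axis': '20000103', 'minor_axis': 'A'}
print(matches(row, ['major_axis>20000102', ('minor_axis', '=', ['A', 'B'])]))
```

The example query mirrors the panel query in the docs: major_axis must be greater than 20000102 and minor_axis must be A or B (dates are kept as strings here for simplicity).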
diff --git a/doc/source/v0.10.0.txt b/doc/source/v0.10.0.txt
index ff4a4d53ff425..401b2c661460f 100644
--- a/doc/source/v0.10.0.txt
+++ b/doc/source/v0.10.0.txt
@@ -9,6 +9,95 @@ enhancements along with a large number of bug fixes.
New features
~~~~~~~~~~~~
+Updated PyTables Support
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Docs for the PyTables ``Table`` format & several enhancements to the API. Here is a taste of what to expect.
+
+`the full docs for tables
+<https://github.com/pydata/pandas/blob/master/io.html#hdf5-pytables>`__
+
+
+ .. ipython:: python
+ :suppress:
+ :okexcept:
+
+ os.remove('store.h5')
+
+ .. ipython:: python
+
+ store = HDFStore('store.h5')
+ df = DataFrame(randn(8, 3), index=date_range('1/1/2000', periods=8),
+ columns=['A', 'B', 'C'])
+ df
+
+ # appending data frames
+ df1 = df[0:4]
+ df2 = df[4:]
+ store.append('df', df1)
+ store.append('df', df2)
+ store
+
+ # selecting the entire store
+ store.select('df')
+
+ .. ipython:: python
+
+ from pandas.io.pytables import Term
+ wp = Panel(randn(2, 5, 4), items=['Item1', 'Item2'],
+ major_axis=date_range('1/1/2000', periods=5),
+ minor_axis=['A', 'B', 'C', 'D'])
+ wp
+
+ # storing a panel
+ store.append('wp',wp)
+
+ # selecting via A QUERY
+ store.select('wp',
+ [ Term('major_axis>20000102'), Term('minor_axis', '=', ['A','B']) ])
+
+ # removing data from tables
+ store.remove('wp', [ 'major_axis', '>', wp.major_axis[3] ])
+ store.select('wp')
+
+ # deleting a store
+ del store['df']
+ store
+
+ **Enhancements**
+
+ - added multi-dtype support!
+
+ .. ipython:: python
+
+ df['string'] = 'string'
+ df['int'] = 1
+
+ store.append('df',df)
+ df1 = store.select('df')
+ df1
+ df1.get_dtype_counts()
+
+  - performance improvements on table writing
+ - support for arbitrarily indexed dimensions
+
+ **Bug Fixes**
+
+ - added ``Term`` method of specifying where conditions, closes GH #1996
+  - ``del store['df']`` now calls ``store.remove('df')`` for store deletion
+ - deleting of consecutive rows is much faster than before
+ - ``min_itemsize`` parameter can be specified in table creation to force a minimum size for indexing columns
+ (the previous implementation would set the column size based on the first append)
+  - indexing support via ``create_table_index`` (requires PyTables >= 2.3), closes GH #698
+ - appending on a store would fail if the table was not first created via ``put``
+ - minor change to select and remove: require a table ONLY if where is also provided (and not None)
+
+ .. ipython:: python
+ :suppress:
+
+ store.close()
+ import os
+ os.remove('store.h5')
API changes
~~~~~~~~~~~
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 41120376c5e90..1261ebbc93618 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -7,6 +7,9 @@
from datetime import datetime, date
import time
+import re
+import copy
+import itertools
import numpy as np
from pandas import (
@@ -19,14 +22,17 @@
from pandas.core.algorithms import match, unique
from pandas.core.categorical import Factor
-from pandas.core.common import _asarray_tuplesafe
-from pandas.core.internals import BlockManager, make_block
+from pandas.core.common import _asarray_tuplesafe, _try_sort
+from pandas.core.internals import BlockManager, make_block, form_blocks
from pandas.core.reshape import block2d_to_block3d
import pandas.core.common as com
+from pandas.tools.merge import concat
import pandas.lib as lib
from contextlib import contextmanager
+import pandas._pytables as pylib
+
# reading and writing the full object in one go
_TYPE_MAP = {
Series: 'series',
@@ -67,12 +73,20 @@
# oh the troubles to reduce import time
_table_mod = None
+_table_supports_index = False
def _tables():
global _table_mod
+ global _table_supports_index
if _table_mod is None:
import tables
_table_mod = tables
+
+ # version requirements
+ major, minor, subv = tables.__version__.split('.')
+ if int(major) >= 2 and int(minor) >= 3:
+ _table_supports_index = True
+
return _table_mod
@@ -188,6 +202,9 @@ def __getitem__(self, key):
def __setitem__(self, key, value):
self.put(key, value)
+ def __delitem__(self, key):
+ return self.remove(key)
+
def __contains__(self, key):
return hasattr(self.handle.root, key)
@@ -201,10 +218,16 @@ def __repr__(self):
keys = []
values = []
for k, v in sorted(self.handle.root._v_children.iteritems()):
- kind = v._v_attrs.pandas_type
+ kind = getattr(v._v_attrs,'pandas_type',None)
keys.append(str(k))
- values.append(_NAME_MAP[kind])
+
+ if kind is None:
+ values.append('unknown type')
+ elif _is_table_type(v):
+ values.append(str(create_table(self, v)))
+ else:
+ values.append(_NAME_MAP[kind])
output += adjoin(5, keys, values)
else:
@@ -295,33 +318,17 @@ def select(self, key, where=None):
Parameters
----------
key : object
- where : list, optional
-
- Must be a list of dict objects of the following forms. Selection can
- be performed on the 'index' or 'column' fields.
-
- Comparison op
- {'field' : 'index',
- 'op' : '>=',
- 'value' : value}
-
- Match single value
- {'field' : 'index',
- 'value' : v1}
-
- Match a set of values
- {'field' : 'index',
- 'value' : [v1, v2, v3]}
+        where : list of Term (or convertible) objects, optional
"""
group = getattr(self.handle.root, key, None)
- if 'table' not in group._v_attrs.pandas_type:
- raise Exception('can only select on objects written as tables')
+ if where is not None and not _is_table_type(group):
+ raise Exception('can only select with where on objects written as tables')
if group is not None:
return self._read_group(group, where)
def put(self, key, value, table=False, append=False,
- compression=None):
+ compression=None, **kwargs):
"""
Store object in HDFStore
@@ -342,7 +349,7 @@ def put(self, key, value, table=False, append=False,
be used.
"""
self._write_to_group(key, value, table=table, append=append,
- comp=compression)
+ comp=compression, **kwargs)
def _get_handler(self, op, kind):
return getattr(self, '_%s_%s' % (op, kind))
@@ -359,18 +366,23 @@ def remove(self, key, where=None):
For Table node, delete specified rows. See HDFStore.select for more
information
- Parameters
- ----------
- key : object
+ Returns
+ -------
+ number of rows removed (or None if not a Table)
+
"""
if where is None:
self.handle.removeNode(self.handle.root, key, recursive=True)
else:
group = getattr(self.handle.root, key, None)
if group is not None:
- self._delete_from_table(group, where)
+ if not _is_table_type(group):
+ raise Exception('can only remove with where on objects written as tables')
+ t = create_table(self, group)
+ return t.delete(where)
+ return None
- def append(self, key, value):
+ def append(self, key, value, **kwargs):
"""
Append to Table in file. Node must already exist and be Table
format.
@@ -385,10 +397,33 @@ def append(self, key, value):
Does *not* check if data being appended overlaps with existing
data in the table, so be careful
"""
- self._write_to_group(key, value, table=True, append=True)
+ self._write_to_group(key, value, table=True, append=True, **kwargs)
+
+ def create_table_index(self, key, **kwargs):
+ """ Create a pytables index on the table
+        Parameters
+ ----------
+ key : object (the node to index)
+
+ Exceptions
+ ----------
+ raises if the node is not a table
+
+ """
+
+ # version requirements
+ if not _table_supports_index:
+            raise Exception("PyTables >= 2.3 is required for table indexing")
+
+ group = getattr(self.handle.root, key, None)
+ if group is None: return
+
+ if not _is_table_type(group):
+ raise Exception("cannot create table index on a non-table")
+ create_table(self, group).create_index(**kwargs)
def _write_to_group(self, key, value, table=False, append=False,
- comp=None):
+ comp=None, **kwargs):
root = self.handle.root
if key not in root._v_children:
group = self.handle.createGroup(root, key)
@@ -400,7 +435,7 @@ def _write_to_group(self, key, value, table=False, append=False,
kind = '%s_table' % kind
handler = self._get_handler(op='write', kind=kind)
wrapper = lambda value: handler(group, value, append=append,
- comp=comp)
+ comp=comp, **kwargs)
else:
if append:
raise ValueError('Can only append to Tables')
@@ -531,18 +566,10 @@ def _read_block_manager(self, group):
return BlockManager(blocks, axes)
- def _write_frame_table(self, group, df, append=False, comp=None):
- mat = df.values
- values = mat.reshape((1,) + mat.shape)
-
- if df._is_mixed_type:
- raise Exception('Cannot currently store mixed-type DataFrame '
- 'objects in Table format')
-
- self._write_table(group, items=['value'],
- index=df.index, columns=df.columns,
- values=values, append=append, compression=comp)
-
+ def _write_frame_table(self, group, df, append=False, comp=None, **kwargs):
+ t = create_table(self, group, typ = 'appendable_frame')
+ t.write(axes_to_index=[0], obj=df, append=append, compression=comp, **kwargs)
+
def _write_wide(self, group, panel):
panel._consolidate_inplace()
self._write_block_manager(group, panel._data)
@@ -550,13 +577,14 @@ def _write_wide(self, group, panel):
def _read_wide(self, group, where=None):
return Panel(self._read_block_manager(group))
- def _write_wide_table(self, group, panel, append=False, comp=None):
- self._write_table(group, items=panel.items, index=panel.major_axis,
- columns=panel.minor_axis, values=panel.values,
- append=append, compression=comp)
-
+ def _write_wide_table(self, group, panel, append=False, comp=None, **kwargs):
+ t = create_table(self, group, typ = 'appendable_panel')
+ t.write(axes_to_index=[1,2], obj=panel,
+ append=append, compression=comp, **kwargs)
+
def _read_wide_table(self, group, where=None):
- return self._read_panel_table(group, where)
+ t = create_table(self, group)
+ return t.read(where)
def _write_index(self, group, key, index):
if isinstance(index, MultiIndex):
@@ -570,10 +598,10 @@ def _write_index(self, group, key, index):
self._write_sparse_intindex(group, key, index)
else:
setattr(group._v_attrs, '%s_variety' % key, 'regular')
- converted, kind, _ = _convert_index(index)
- self._write_array(group, key, converted)
+ converted = _convert_index(index).set_name('index')
+ self._write_array(group, key, converted.values)
node = getattr(group, key)
- node._v_attrs.kind = kind
+ node._v_attrs.kind = converted.kind
node._v_attrs.name = index.name
if isinstance(index, (DatetimeIndex, PeriodIndex)):
@@ -630,11 +658,11 @@ def _write_multi_index(self, group, key, index):
index.labels,
index.names)):
# write the level
- conv_level, kind, _ = _convert_index(lev)
level_key = '%s_level%d' % (key, i)
- self._write_array(group, level_key, conv_level)
+ conv_level = _convert_index(lev).set_name(level_key)
+ self._write_array(group, level_key, conv_level.values)
node = getattr(group, level_key)
- node._v_attrs.kind = kind
+ node._v_attrs.kind = conv_level.kind
node._v_attrs.name = name
# write the name
@@ -739,89 +767,6 @@ def _write_array(self, group, key, value):
getattr(group, key)._v_attrs.transposed = transposed
- def _write_table(self, group, items=None, index=None, columns=None,
- values=None, append=False, compression=None):
- """ need to check for conform to the existing table:
- e.g. columns should match """
- # create dict of types
- index_converted, index_kind, index_t = _convert_index(index)
- columns_converted, cols_kind, col_t = _convert_index(columns)
-
- # create the table if it doesn't exist (or get it if it does)
- if not append:
- if 'table' in group:
- self.handle.removeNode(group, 'table')
-
- if 'table' not in group:
- # create the table
- desc = {'index': index_t,
- 'column': col_t,
- 'values': _tables().FloatCol(shape=(len(values)))}
-
- options = {'name': 'table',
- 'description': desc}
-
- if compression:
- complevel = self.complevel
- if complevel is None:
- complevel = 9
- filters = _tables().Filters(complevel=complevel,
- complib=compression,
- fletcher32=self.fletcher32)
- options['filters'] = filters
- elif self.filters is not None:
- options['filters'] = self.filters
-
- table = self.handle.createTable(group, **options)
- else:
- # the table must already exist
- table = getattr(group, 'table', None)
-
- # check for backwards incompatibility
- if append:
- existing_kind = table._v_attrs.index_kind
- if existing_kind != index_kind:
- raise TypeError("incompatible kind in index [%s - %s]" %
- (existing_kind, index_kind))
-
- # add kinds
- table._v_attrs.index_kind = index_kind
- table._v_attrs.columns_kind = cols_kind
- if append:
- existing_fields = getattr(table._v_attrs, 'fields', None)
- if (existing_fields is not None and
- existing_fields != list(items)):
- raise Exception("appended items do not match existing items"
- " in table!")
- # this depends on creation order of the table
- table._v_attrs.fields = list(items)
-
- # add the rows
- try:
- for i, index in enumerate(index_converted):
- for c, col in enumerate(columns_converted):
- v = values[:, i, c]
-
- # don't store the row if all values are np.nan
- if np.isnan(v).all():
- continue
-
- row = table.row
- row['index'] = index
- row['column'] = col
-
- # create the values array
- row['values'] = v
- row.append()
- self.handle.flush()
- except (ValueError), detail: # pragma: no cover
- print "value_error in _write_table -> %s" % str(detail)
- try:
- self.handle.flush()
- except Exception:
- pass
- raise
-
def _read_group(self, group, where=None):
kind = group._v_attrs.pandas_type
kind = _LEGACY_MAP.get(kind, kind)
@@ -853,100 +798,786 @@ def _read_index_legacy(self, group, key):
node = getattr(group, key)
data = node[:]
kind = node._v_attrs.kind
-
return _unconvert_index_legacy(data, kind)
def _read_frame_table(self, group, where=None):
- return self._read_panel_table(group, where)['value']
+ t = create_table(self, group)
+ return t.read(where)
+
+
+class Col(object):
+ """ a column description class
+
+ Parameters
+ ----------
+
+ values : the ndarray like converted values
+ kind : a string description of this type
+ typ : the pytables type
+
+ """
+ is_indexable = True
+
+ def __init__(self, values = None, kind = None, typ = None, cname = None, itemsize = None, name = None, kind_attr = None, **kwargs):
+ self.values = values
+ self.kind = kind
+ self.typ = typ
+ self.itemsize = itemsize
+ self.name = None
+ self.cname = cname
+ self.kind_attr = None
+ self.table = None
+
+ if name is not None:
+ self.set_name(name, kind_attr)
+
+ def set_name(self, name, kind_attr = None):
+ self.name = name
+ self.kind_attr = kind_attr or "%s_kind" % name
+ if self.cname is None:
+ self.cname = name
+
+ return self
+
+ def set_table(self, table):
+ self.table = table
+ return self
+
+ def __repr__(self):
+ return "name->%s,cname->%s,kind->%s" % (self.name,self.cname,self.kind)
+
+ __str__ = __repr__
+
+ def copy(self):
+ new_self = copy.copy(self)
+ return new_self
+
+ def infer(self, table):
+ """ infer this column from the table: create and return a new object """
+ new_self = self.copy()
+ new_self.set_table(table)
+ new_self.get_attr()
+ return new_self
+
+ def convert(self, sel):
+ """ set the values from this selection """
+ self.values = _maybe_convert(sel.values[self.cname], self.kind)
+
+ @property
+ def attrs(self):
+ return self.table._v_attrs
+
+ @property
+ def description(self):
+ return self.table.description
+
+ @property
+ def pos(self):
+ """ my column position """
+ return getattr(self.col,'_v_pos',None)
+
+ @property
+ def col(self):
+ """ return my current col description """
+ return getattr(self.description,self.cname,None)
+
+ @property
+ def cvalues(self):
+ """ return my cython values """
+ return self.values
+
+ def __iter__(self):
+ return iter(self.values)
+
+ def maybe_set_size(self, min_itemsize = None, **kwargs):
+ """ maybe set a string col itemsize """
+ if self.kind == 'string' and min_itemsize is not None:
+ if self.typ.itemsize < min_itemsize:
+ self.typ = _tables().StringCol(itemsize = min_itemsize, pos = getattr(self.typ,'pos',None))
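The fixed-width semantics behind `maybe_set_size` can be sketched in pure Python. This is an illustrative stand-in for PyTables `StringCol` behavior, not the pandas code path itself: the column width is the longest appended string, optionally widened by `min_itemsize`, and later appends are truncated to that width.

```python
# Sketch of fixed-width string storage (assumption: mirrors StringCol
# semantics; these helpers are hypothetical, not part of pandas).
def effective_itemsize(values, min_itemsize=None):
    """Width of the string column: max string length, widened by min_itemsize."""
    size = max(len(v) for v in values)
    if min_itemsize is not None and size < min_itemsize:
        size = min_itemsize
    return size

def store_strings(values, itemsize):
    """Fixed-width storage truncates anything longer than itemsize."""
    return [v[:itemsize] for v in values]

width = effective_itemsize(['foo', 'barbaz'])
print(width)                                           # 6
print(store_strings(['a-much-longer-string'], width))  # truncated to 6 chars
```

Passing `min_itemsize` up front is therefore the only way to leave headroom for longer strings in subsequent appends.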
+
+ def validate_and_set(self, table, append, **kwargs):
+ self.set_table(table)
+ self.validate_col()
+ self.validate_attr(append)
+ self.set_attr()
+
+ def validate_col(self):
+ """ validate this column & set table data for it """
+
+ # validate this column for string truncation (or reset to the max size)
+ if self.kind == 'string':
+
+ c = self.col
+ if c is not None:
+ if c.itemsize < self.itemsize:
+                    raise Exception("[%s] column has an itemsize of [%s], but the data requires an itemsize of [%s]!" % (self.cname,c.itemsize,self.itemsize))
+
+
+ def validate_attr(self, append):
+ # check for backwards incompatibility
+ if append:
+ existing_kind = getattr(self.attrs,self.kind_attr,None)
+ if existing_kind is not None and existing_kind != self.kind:
+ raise TypeError("incompatible kind in col [%s - %s]" %
+ (existing_kind, self.kind))
+
+ def get_attr(self):
+        """ get the kind for this column """
+ self.kind = getattr(self.attrs,self.kind_attr,None)
+
+ def set_attr(self):
+        """ set the kind for this column """
+ setattr(self.attrs,self.kind_attr,self.kind)
+
+class DataCol(Col):
+ """ a data holding column, by definition this is not indexable
+
+ Parameters
+ ----------
+
+ data : the actual data
+    cname : the column name in the table to hold the data (typically 'values')
+ """
+ is_indexable = False
+
+ @classmethod
+ def create_for_block(cls, i, **kwargs):
+ """ return a new datacol with the block i """
+ return cls(name = 'values_%d' % i, cname = 'values_block_%d' % i, **kwargs)
+
+ def __init__(self, values = None, kind = None, typ = None, cname = None, data = None, **kwargs):
+ super(DataCol, self).__init__(values = values, kind = kind, typ = typ, cname = cname, **kwargs)
+ self.dtype = None
+ self.dtype_attr = "%s_dtype" % self.name
+ self.set_data(data)
+
+ def __repr__(self):
+ return "name->%s,cname->%s,dtype->%s,shape->%s" % (self.name,self.cname,self.dtype,self.shape)
+
+ def set_data(self, data):
+ self.data = data
+ if data is not None:
+ if self.dtype is None:
+ self.dtype = data.dtype.name
+
+ @property
+ def shape(self):
+ return getattr(self.data,'shape',None)
+
+ @property
+ def cvalues(self):
+ """ return my cython values """
+ return self.data
+
+ def validate_attr(self, append):
+ """ validate that we have the same order as the existing & same dtype """
+ if append:
+ existing_fields = getattr(self.attrs, self.kind_attr, None)
+ if (existing_fields is not None and
+ existing_fields != list(self.values)):
+ raise Exception("appended items do not match existing items"
+ " in table!")
+
+ existing_dtype = getattr(self.attrs, self.dtype_attr, None)
+ if (existing_dtype is not None and
+ existing_dtype != self.dtype):
+ raise Exception("appended items dtype do not match existing items dtype"
+ " in table!")
+
+ def convert(self, sel):
+ """ set the data from this selection (and convert to the correct dtype if we can) """
+ self.set_data(sel.values[self.cname])
+
+ # convert to the correct dtype
+ if self.dtype is not None:
+ try:
+ self.data = self.data.astype(self.dtype)
+ except:
+ self.data = self.data.astype('O')
+
+ def get_attr(self):
+        """ get the data for this column """
+ self.values = getattr(self.attrs,self.kind_attr,None)
+ self.dtype = getattr(self.attrs,self.dtype_attr,None)
+
+ def set_attr(self):
+        """ set the data for this column """
+ setattr(self.attrs,self.kind_attr,self.values)
+ if self.dtype is not None:
+ setattr(self.attrs,self.dtype_attr,self.dtype)
+
+class Table(object):
+ """ represent a table:
+ facilitate read/write of various types of tables
+ this is an abstract base class
+
+ Parameters
+ ----------
+
+ parent : my parent HDFStore
+ group : the group node where the table resides
+
+ """
+ table_type = None
+ ndim = None
+ axis_names = ['index','column']
+
+ def __init__(self, parent, group):
+ self.parent = parent
+ self.group = group
+ self.index_axes = []
+ self.non_index_axes = []
+ self.values_axes = []
+ self.selection = None
+
+ @property
+ def pandas_type(self):
+ return getattr(self.group._v_attrs,'pandas_type',None)
+
+ def __repr__(self):
+        """ return a pretty representation of myself """
+ return "%s (typ->%s,nrows->%s)" % (self.pandas_type,self.table_type,self.nrows)
+
+ __str__ = __repr__
+
+ @property
+ def nrows(self):
+ return getattr(self.table,'nrows',None)
+
+ @property
+ def table(self):
+ """ return the table group """
+ return getattr(self.group, 'table', None)
+
+ @property
+ def handle(self):
+ return self.parent.handle
+
+ @property
+ def _quiet(self):
+ return self.parent._quiet
+
+ @property
+ def filters(self):
+ return self.parent.filters
+
+ @property
+ def complevel(self):
+ return self.parent.complevel
+
+ @property
+ def fletcher32(self):
+ return self.parent.fletcher32
+
+ @property
+ def complib(self):
+ return self.parent.complib
+
+ @property
+ def attrs(self):
+ return self.group._v_attrs
+
+ @property
+ def description(self):
+ return self.table.description
+
+ @property
+ def is_transpose(self):
+ """ does my data need transposition """
+ return False
+
+ @property
+ def axes(self):
+ return itertools.chain(self.index_axes, self.values_axes)
+
+ def kinds_map(self):
+        """ return a dict of columns -> kinds """
+ return dict([ (a.cname,a.kind) for a in self.axes ])
+
+ def index_cols(self):
+ """ return a list of my index cols """
+ return [ i.cname for i in self.index_axes ]
+
+ def values_cols(self):
+ """ return a list of my values cols """
+ return [ i.cname for i in self.values_axes ]
+
+ def set_attrs(self):
+ """ set our table type & indexables """
+ self.attrs.table_type = self.table_type
+ self.attrs.index_cols = self.index_cols()
+ self.attrs.values_cols = self.values_cols()
+ self.attrs.non_index_axes = self.non_index_axes
+
+ def validate(self):
+        """ raise if we have an incompatible table type with the current """
+ et = getattr(self.attrs,'table_type',None)
+ if et is not None and et != self.table_type:
+ raise TypeError("incompatible table_type with existing [%s - %s]" %
+ (et, self.table_type))
+ ic = getattr(self.attrs,'index_cols',None)
+ if ic is not None and ic != self.index_cols():
+ raise TypeError("incompatible index cols with existing [%s - %s]" %
+ (ic, self.index_cols()))
+
+ @property
+ def indexables(self):
+ """ create/cache the indexables if they don't exist """
+ if self._indexables is None:
+
+ d = self.description
+ self._indexables = []
+
+ # index columns
+ self._indexables.extend([ Col(name = i) for i in self.attrs.index_cols ])
+
+ # data columns
+ self._indexables.extend([ DataCol.create_for_block(i = i) for i, c in enumerate(self.attrs.values_cols) ])
+
+ return self._indexables
+
+ def create_index(self, columns = None, optlevel = None, kind = None):
+ """
+ Create a pytables index on the specified columns
+ note: cannot index Time64Col() currently; PyTables must be >= 2.3.1
+
+
+        Parameters
+ ----------
+ columns : None or list_like (the columns to index - currently supports index/column)
+ optlevel: optimization level (defaults to 6)
+ kind : kind of index (defaults to 'medium')
+
+ Exceptions
+ ----------
+ raises if the node is not a table
+
+ """
+
+ table = self.table
+ if table is None: return
+
+ if columns is None:
+ columns = ['index']
+ if not isinstance(columns, (tuple,list)):
+ columns = [ columns ]
+
+ kw = dict()
+ if optlevel is not None:
+ kw['optlevel'] = optlevel
+ if kind is not None:
+ kw['kind'] = kind
+
+ for c in columns:
+ v = getattr(table.cols,c,None)
+ if v is not None and not v.is_indexed:
+ v.createIndex(**kw)
+
+ def read_axes(self, where):
+        """ infer the axes from the table, run the selection and convert the data """
- def _read_panel_table(self, group, where=None):
- table = getattr(group, 'table')
- fields = table._v_attrs.fields
+ # infer the data kind
+ self.infer_axes()
# create the selection
- sel = Selection(table, where, table._v_attrs.index_kind)
- sel.select()
- fields = table._v_attrs.fields
+ self.selection = Selection(self, where)
+ self.selection.select()
- columns = _maybe_convert(sel.values['column'],
- table._v_attrs.columns_kind)
- index = _maybe_convert(sel.values['index'], table._v_attrs.index_kind)
- values = sel.values['values']
+ # convert the data
+ for a in self.axes:
+ a.convert(self.selection)
- major = Factor.from_array(index)
- minor = Factor.from_array(columns)
+ def infer_axes(self):
+ """ infer the axes from the indexables """
+        self.index_axes = [ a.infer(self.table) for a in self.indexables if a.is_indexable ]
+        self.values_axes = [ a.infer(self.table) for a in self.indexables if not a.is_indexable ]
+ self.non_index_axes = getattr(self.attrs,'non_index_axes',None) or []
+
+ def create_axes(self, axes_to_index, obj, validate = True, min_itemsize = None):
+ """ create and return the axes
+        legacy tables create an indexable column, indexable index, non-indexable fields
+
+ """
+
+ self.index_axes = []
+ self.non_index_axes = []
+
+ # create axes to index and non_index
+ j = 0
+ for i, a in enumerate(obj.axes):
+ if i in axes_to_index:
+ self.index_axes.append(_convert_index(a).set_name(self.axis_names[j]))
+ j += 1
+ else:
+ self.non_index_axes.append((i,list(a)))
+
+ # check for column conflicts
+ if validate:
+ for a in self.axes:
+ a.maybe_set_size(min_itemsize = min_itemsize)
+
+ # add my values
+ self.values_axes = []
+ for i, b in enumerate(obj._data.blocks):
+ values = b.values
+
+ # a string column
+ if b.dtype.name == 'object':
+ atom = _tables().StringCol(itemsize = values.dtype.itemsize, shape = b.shape[0])
+ utype = 'S8'
+ else:
+ atom = getattr(_tables(),"%sCol" % b.dtype.name.capitalize())(shape = b.shape[0])
+ utype = atom._deftype
+
+ # coerce data to this type
+ try:
+ values = values.astype(utype)
+ except (Exception), detail:
+ raise Exception("cannot coerce data type -> [dtype->%s]" % b.dtype.name)
+
+ dc = DataCol.create_for_block(i = i, values = list(b.items), kind = b.dtype.name, typ = atom, data = values)
+ self.values_axes.append(dc)
+
+ def create_description(self, compression = None, complevel = None):
+ """ create the description of the table from the axes & values """
+
+ d = { 'name' : 'table' }
+
+ # description from the axes & values
+ d['description'] = dict([ (a.cname,a.typ) for a in self.axes ])
+
+ if compression:
+ complevel = self.complevel
+ if complevel is None:
+ complevel = 9
+ filters = _tables().Filters(complevel=complevel,
+ complib=compression,
+ fletcher32=self.fletcher32)
+ d['filters'] = filters
+ elif self.filters is not None:
+ d['filters'] = self.filters
+ return d
+
+ def read(self, **kwargs):
+ raise NotImplementedError("cannot read on an abstract table: subclasses should implement")
+
+ def write(self, **kwargs):
+ raise NotImplementedError("cannot write on an abstract table")
+
+ def delete(self, where = None, **kwargs):
+ """ support fully deleting the node in its entirety (only) - where specification must be None """
+ if where is None:
+ self.handle.removeNode(self.group, recursive=True)
+ return None
+
+ raise NotImplementedError("cannot delete on an abstract table")
+
+class WORMTable(Table):
+ """ a write-once read-many table:
+ this format DOES NOT ALLOW appending to a table. writing is a one-time operation
+ the data are stored in a format that allows for searching the data on disk
+ """
+ table_type = 'worm'
+
+ def read(self, **kwargs):
+        """ read the indices and the indexing array, calculate offset rows and return """
+ raise NotImplementedError("WORMTable needs to implement read")
+
+ def write(self, **kwargs):
+ """ write in a format that we can search later on (but cannot append to):
+            write out the indices and the values using _write_array (e.g. a CArray)
+            create an indexing table so that we can search """
+        raise NotImplementedError("WORMTable needs to implement write")
+
+class LegacyTable(Table):
+ """ an appendable table:
+        allow append/query/delete operations to a (possibly) already existing appendable table
+ this table ALLOWS append (but doesn't require them), and stores the data in a format
+ that can be easily searched
+
+ """
+    _indexables = [Col(name = 'index'),Col(name = 'column', kind_attr = 'columns_kind'), DataCol(name = 'fields', cname = 'values', kind_attr = 'fields') ]
+ table_type = 'legacy'
+
+ def read(self, where=None):
+ """ we have 2 indexable columns, with an arbitrary number of data axes """
+
+ self.read_axes(where)
+
+ index = self.index_axes[0].values
+ column = self.index_axes[1].values
+
+ major = Factor.from_array(index)
+ minor = Factor.from_array(column)
+
J, K = len(major.levels), len(minor.levels)
key = major.labels * K + minor.labels
+ panels = []
if len(unique(key)) == len(key):
sorter, _ = lib.groupsort_indexer(com._ensure_int64(key), J * K)
sorter = com._ensure_platform_int(sorter)
- # the data need to be sorted
- sorted_values = values.take(sorter, axis=0)
- major_labels = major.labels.take(sorter)
- minor_labels = minor.labels.take(sorter)
+ # create the panels
+ for c in self.values_axes:
+
+ # the data need to be sorted
+ sorted_values = c.data.take(sorter, axis=0)
+ major_labels = major.labels.take(sorter)
+ minor_labels = minor.labels.take(sorter)
+ items = Index(c.values)
- block = block2d_to_block3d(sorted_values, fields, (J, K),
- major_labels, minor_labels)
+ block = block2d_to_block3d(sorted_values, items, (J, K),
+ major_labels, minor_labels)
+
+ mgr = BlockManager([block], [items, major.levels, minor.levels])
+ panels.append(Panel(mgr))
- mgr = BlockManager([block], [block.ref_items,
- major.levels, minor.levels])
- wp = Panel(mgr)
else:
if not self._quiet: # pragma: no cover
print ('Duplicate entries in table, taking most recently '
'appended')
# reconstruct
- long_index = MultiIndex.from_arrays([index, columns])
- lp = DataFrame(values, index=long_index, columns=fields)
+ long_index = MultiIndex.from_arrays([index, column])
+
+ panels = []
+ for c in self.values_axes:
+ lp = DataFrame(c.data, index=long_index, columns=c.values)
+
+ # need a better algorithm
+ tuple_index = long_index._tuple_index
- # need a better algorithm
- tuple_index = long_index._tuple_index
+ unique_tuples = lib.fast_unique(tuple_index)
+ unique_tuples = _asarray_tuplesafe(unique_tuples)
- unique_tuples = lib.fast_unique(tuple_index)
- unique_tuples = _asarray_tuplesafe(unique_tuples)
+ indexer = match(unique_tuples, tuple_index)
+ indexer = com._ensure_platform_int(indexer)
- indexer = match(unique_tuples, tuple_index)
- indexer = com._ensure_platform_int(indexer)
+ new_index = long_index.take(indexer)
+ new_values = lp.values.take(indexer, axis=0)
- new_index = long_index.take(indexer)
- new_values = lp.values.take(indexer, axis=0)
+ lp = DataFrame(new_values, index=new_index, columns=lp.columns)
+ panels.append(lp.to_panel())
- lp = DataFrame(new_values, index=new_index, columns=lp.columns)
- wp = lp.to_panel()
+ # append the panels
+ wp = concat(panels, axis = 0, verify_integrity = True)
+
+ # reorder by any non_index_axes
+ for axis,labels in self.non_index_axes:
+ wp = wp.reindex_axis(labels,axis=axis,copy=False)
+
+ if self.selection.filter:
+ new_minor = sorted(set(wp.minor_axis) & self.selection.filter)
+ wp = wp.reindex(minor=new_minor, copy = False)
- if sel.column_filter:
- new_minor = sorted(set(wp.minor_axis) & sel.column_filter)
- wp = wp.reindex(minor=new_minor)
return wp
+ def write(self, axes_to_index, obj, append=False, compression=None,
+ complevel=None, min_itemsize = None, **kwargs):
+
+ # create the table if it doesn't exist (or get it if it does)
+ if not append:
+ if 'table' in self.group:
+ self.handle.removeNode(self.group, 'table')
+
+ # create the axes
+ self.create_axes(axes_to_index = axes_to_index, obj = obj, validate = append, min_itemsize = min_itemsize)
+
+ if 'table' not in self.group:
+
+ # create the table
+ options = self.create_description(compression = compression, complevel = complevel)
+
+ # set the table attributes
+ self.set_attrs()
+
+ # create the table
+ table = self.handle.createTable(self.group, **options)
+
+ else:
+
+ # the table must already exist
+ table = self.table
+
+ # validate the table
+ self.validate()
+
+ # validate the axes and set the kinds
+ for a in self.axes:
+ a.validate_and_set(table, append)
+
+ # add the rows
+ self._write_data()
+ self.handle.flush()
+
+ def _write_data(self):
+        """ fast writing of data: requires specific cython routines for each axis shape """
+
+ masks = []
+
+ # create the masks
+ for a in self.values_axes:
+
+ # figure the mask: only do if we can successfully process this column, otherwise ignore the mask
+ try:
+ mask = np.isnan(a.data).all(axis=0)
+ masks.append(mask.astype('u1'))
+ except:
+
+ # need to check for Nan in a non-numeric type column!!!
+ masks.append(np.zeros((a.data.shape[1:]), dtype = 'u1'))
+
+ # consolidate masks
+ mask = masks[0]
+ for m in masks[1:]:
+            mask = mask & m
+
+ # the arguments & values
+ args = [ a.cvalues for a in self.index_axes ]
+ values = [ a.data for a in self.values_axes ]
+
+ # get our function
+ try:
+ func = getattr(pylib,"create_hdf_rows_%sd" % self.ndim)
+ args.append(mask)
+ args.append(values)
+ rows = func(*args)
+ if len(rows):
+ self.table.append(rows)
+ except (Exception), detail:
+ raise Exception("tables cannot write this data -> %s" % str(detail))
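The per-block NaN masks built in `_write_data` are ANDed together so a row is skipped only when every block flags it as all-NaN. A minimal pure-Python stand-in for that consolidation step (the real code operates on numpy `u1` arrays):

```python
# Sketch of the mask consolidation above; 1 flags a row that is entirely
# NaN within one block, and a row is dropped only if all blocks agree.
def consolidate_masks(masks):
    """AND the per-block masks element-wise into one combined mask."""
    combined = masks[0]
    for m in masks[1:]:
        combined = [a & b for a, b in zip(combined, m)]
    return combined

print(consolidate_masks([[1, 1, 0], [1, 0, 0]]))  # [1, 0, 0]
```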
+
+ def delete(self, where = None):
+ if where is None:
+ return super(LegacyTable, self).delete()
- def _delete_from_table(self, group, where = None):
- table = getattr(group, 'table')
+ # infer the data kind
+ table = self.table
+ self.infer_axes()
# create the selection
- s = Selection(table, where, table._v_attrs.index_kind)
- s.select_coords()
+ self.selection = Selection(self, where)
+ self.selection.select_coords()
# delete the rows in reverse order
- l = list(s.values)
- l.reverse()
- for c in l:
- table.removeRows(c)
- self.handle.flush()
- return len(s.values)
+ l = list(self.selection.values)
+ ln = len(l)
+
+ if ln:
+
+ # if we can do a consecutive removal - do it!
+ if l[0]+ln-1 == l[-1]:
+ table.removeRows(start = l[0], stop = l[-1]+1)
+
+ # one by one
+ else:
+ l.reverse()
+ for c in l:
+ table.removeRows(c)
+
+ self.handle.flush()
+
+ # return the number of rows removed
+ return ln
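The `delete` path above removes a consecutive run of rows with a single `removeRows(start, stop)` call, falling back to one-by-one removal otherwise. The run detection can be sketched as follows (assuming, as PyTables coordinate selections do, that the row numbers come back sorted):

```python
# Sketch of the consecutive-run check used in delete(); a sorted list of
# distinct row numbers forms one unbroken run iff first + count - 1 == last.
def is_consecutive(rows):
    """True when rows form an unbroken run, so one removeRows call suffices."""
    return len(rows) > 0 and rows[0] + len(rows) - 1 == rows[-1]

print(is_consecutive([3, 4, 5, 6]))  # True
print(is_consecutive([3, 5, 6]))     # False
```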
+
+
+class LegacyFrameTable(LegacyTable):
+ """ support the legacy frame table """
+ table_type = 'legacy_frame'
+ def read(self, *args, **kwargs):
+ return super(LegacyFrameTable, self).read(*args, **kwargs)['value']
+
+class LegacyPanelTable(LegacyTable):
+ """ support the legacy panel table """
+ table_type = 'legacy_panel'
+
+class AppendableTable(LegacyTable):
+    """ support the new appendable table formats """
+ _indexables = None
+ table_type = 'appendable'
+
+class AppendableFrameTable(AppendableTable):
+    """ support the new appendable table formats """
+ table_type = 'appendable_frame'
+ ndim = 2
+
+ def read(self, where=None):
+
+ self.read_axes(where)
+
+ index = Index(self.index_axes[0].values)
+ frames = []
+ for a in self.values_axes:
+ columns = Index(a.values)
+ block = make_block(a.cvalues.T, columns, columns)
+ mgr = BlockManager([ block ], [ columns, index ])
+ frames.append(DataFrame(mgr))
+ df = concat(frames, axis = 1, verify_integrity = True)
+
+        # sort the indices & reorder the columns
+ for axis,labels in self.non_index_axes:
+ df = df.reindex_axis(labels,axis=axis,copy=False)
+ columns_ordered = df.columns
+
+ # apply the column filter (but keep columns in the same order)
+ if self.selection.filter:
+ columns = Index(set(columns_ordered) & self.selection.filter)
+ columns = sorted(columns_ordered.get_indexer(columns))
+ df = df.reindex(columns = columns_ordered.take(columns), copy = False)
+
+ else:
+ df = df.reindex(columns = columns_ordered, copy = False)
+
+ return df
+
+class AppendablePanelTable(AppendableTable):
+    """ support the new appendable table formats """
+ table_type = 'appendable_panel'
+ ndim = 3
+
+# table maps
+_TABLE_MAP = {
+ 'appendable_frame' : AppendableFrameTable,
+ 'appendable_panel' : AppendablePanelTable,
+ 'worm' : WORMTable,
+ 'legacy_frame' : LegacyFrameTable,
+ 'legacy_panel' : LegacyPanelTable,
+ 'default' : AppendablePanelTable,
+}
+
+def create_table(parent, group, typ = None, **kwargs):
+ """ return a suitable Table class to operate """
+
+ pt = getattr(group._v_attrs,'pandas_type',None)
+ tt = getattr(group._v_attrs,'table_type',None)
+
+ # a new node
+ if pt is None:
+
+ return (_TABLE_MAP.get(typ) or _TABLE_MAP.get('default'))(parent, group, **kwargs)
+
+ # existing node (legacy)
+ if tt is None:
+
+    # distinguish between a frame/table
+ tt = 'legacy_panel'
+ try:
+ if group.table.description.values.shape[0] == 1:
+ tt = 'legacy_frame'
+ except:
+ pass
+
+ return _TABLE_MAP.get(tt)(parent, group, **kwargs)
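The `create_table` factory above resolves a `Table` subclass from the stored `table_type` attribute (or an explicit `typ`), falling back to a default. A minimal stand-in for that dict dispatch, with placeholder classes rather than the real `Table` subclasses:

```python
# Illustrative dispatch sketch; FrameTable/PanelTable are hypothetical
# stand-ins for the AppendableFrameTable/AppendablePanelTable classes.
class FrameTable(object):
    table_type = 'appendable_frame'

class PanelTable(object):
    table_type = 'appendable_panel'

TABLE_MAP = {
    'appendable_frame': FrameTable,
    'appendable_panel': PanelTable,
    'default': PanelTable,
}

def pick_table(typ=None):
    """Fall back to the default writer when no explicit type is given."""
    return TABLE_MAP.get(typ) or TABLE_MAP.get('default')

print(pick_table('appendable_frame').table_type)  # appendable_frame
print(pick_table().table_type)                    # appendable_panel
```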
def _convert_index(index):
if isinstance(index, DatetimeIndex):
converted = index.asi8
- return converted, 'datetime64', _tables().Int64Col()
+ return Col(converted, 'datetime64', _tables().Int64Col())
elif isinstance(index, (Int64Index, PeriodIndex)):
atom = _tables().Int64Col()
- return index.values, 'integer', atom
+ return Col(index.values, 'integer', atom)
if isinstance(index, MultiIndex):
raise Exception('MultiIndex not supported here!')
@@ -957,36 +1588,36 @@ def _convert_index(index):
if inferred_type == 'datetime64':
converted = values.view('i8')
- return converted, 'datetime64', _tables().Int64Col()
+ return Col(converted, 'datetime64', _tables().Int64Col())
elif inferred_type == 'datetime':
converted = np.array([(time.mktime(v.timetuple()) +
v.microsecond / 1E6) for v in values],
dtype=np.float64)
- return converted, 'datetime', _tables().Time64Col()
+ return Col(converted, 'datetime', _tables().Time64Col())
elif inferred_type == 'date':
converted = np.array([time.mktime(v.timetuple()) for v in values],
dtype=np.int32)
- return converted, 'date', _tables().Time32Col()
+ return Col(converted, 'date', _tables().Time32Col())
elif inferred_type == 'string':
# atom = _tables().ObjectAtom()
# return np.asarray(values, dtype='O'), 'object', atom
converted = np.array(list(values), dtype=np.str_)
itemsize = converted.dtype.itemsize
- return converted, 'string', _tables().StringCol(itemsize)
+ return Col(converted, 'string', _tables().StringCol(itemsize), itemsize = itemsize)
elif inferred_type == 'unicode':
atom = _tables().ObjectAtom()
- return np.asarray(values, dtype='O'), 'object', atom
+ return Col(np.asarray(values, dtype='O'), 'object', atom)
elif inferred_type == 'integer':
# take a guess for now, hope the values fit
atom = _tables().Int64Col()
- return np.asarray(values, dtype=np.int64), 'integer', atom
+ return Col(np.asarray(values, dtype=np.int64), 'integer', atom)
elif inferred_type == 'floating':
atom = _tables().Float64Col()
- return np.asarray(values, dtype=np.float64), 'float', atom
+ return Col(np.asarray(values, dtype=np.float64), 'float', atom)
else: # pragma: no cover
atom = _tables().ObjectAtom()
- return np.asarray(values, dtype='O'), 'object', atom
+ return Col(np.asarray(values, dtype='O'), 'object', atom)
def _read_array(group, key):
@@ -1093,87 +1724,225 @@ def _alias_to_class(alias):
return _reverse_index_map.get(alias, Index)
+class Term(object):
+ """ create a term object that holds a field, op, and value
+
+ Parameters
+ ----------
+    field : dict, string term expression, or the field to operate on (must be a valid index/column type of DataFrame/Panel)
+ op : a valid op (defaults to '=') (optional)
+ >, >=, <, <=, =, != (not equal) are allowed
+ value : a value or list of values (required)
+ kinds : the kinds map (dict of column name -> kind)
+
+ Returns
+ -------
+ a Term object
+
+ Examples
+ --------
+ Term(dict(field = 'index', op = '>', value = '20121114'))
+ Term('index', '20121114')
+ Term('index', '>', '20121114')
+ Term('index', ['20121114','20121114'])
+ Term('index', datetime(2012,11,14))
+ Term('major>20121114')
+ Term('minor', ['A','B'])
+
+ """
+
+ _ops = ['<=','<','>=','>','!=','=']
+ _search = re.compile("^(?P<field>\w+)(?P<op>%s)(?P<value>.+)$" % '|'.join(_ops))
+ _index = ['index','major_axis','major']
+ _column = ['column','minor_axis','minor']
+
+ def __init__(self, field, op = None, value = None, kinds = None):
+ self.field = None
+ self.op = None
+ self.value = None
+ self.kinds = kinds or dict()
+ self.filter = None
+ self.condition = None
+
+ # unpack lists/tuples in field
+ while(isinstance(field,(tuple,list))):
+ f = field
+ field = f[0]
+ if len(f) > 1:
+ op = f[1]
+ if len(f) > 2:
+ value = f[2]
+
+ # backwards compatible
+ if isinstance(field, dict):
+ self.field = field.get('field')
+ self.op = field.get('op') or '='
+ self.value = field.get('value')
+
+ # passed a term
+ elif isinstance(field,Term):
+ self.field = field.field
+ self.op = field.op
+ self.value = field.value
+
+ # a string expression (or just the field)
+ elif isinstance(field,basestring):
+
+            # is a term expression passed?
+ s = self._search.match(field)
+ if s is not None:
+ self.field = s.group('field')
+ self.op = s.group('op')
+ self.value = s.group('value')
+
+ else:
+ self.field = field
+
+ # is an op passed?
+ if isinstance(op, basestring) and op in self._ops:
+ self.op = op
+ self.value = value
+ else:
+ self.op = '='
+ self.value = op
+
+ else:
+ raise Exception("Term does not understand the supplied field [%s]" % field)
+
+ # we have valid fields
+ if self.field is None or self.op is None or self.value is None:
+ raise Exception("Could not create this term [%s]" % str(self))
+
+ # valid field name
+ if self.field in self._index:
+ self.field = 'index'
+ elif self.field in self._column:
+ self.field = 'column'
+ else:
+ raise Exception("field is not a valid index/column for this term [%s]" % str(self))
+
+ # we have valid conditions
+ if self.op in ['>','>=','<','<=']:
+ if hasattr(self.value,'__iter__') and len(self.value) > 1:
+ raise Exception("an inequality condition cannot have multiple values [%s]" % str(self))
+
+ if not hasattr(self.value,'__iter__'):
+ self.value = [ self.value ]
+
+ self.eval()
+
+ def __str__(self):
+ return "field->%s,op->%s,value->%s" % (self.field,self.op,self.value)
+
+ __repr__ = __str__
+
+ @property
+ def is_in_table(self):
+ """ return True if this is a valid column name for generation (e.g. an actual column in the table) """
+ return self.field in self.kinds
+
+ @property
+ def kind(self):
+ """ the kind of my field """
+ return self.kinds.get(self.field)
+
+ def eval(self):
+ """ set the numexpr expression for this term """
+
+ # convert values
+ values = [ self.convert_value(v) for v in self.value ]
+
+ # equality conditions
+ if self.op in ['=','!=']:
+
+ if self.is_in_table:
+
+ # too many values to create the expression?
+ if len(values) <= 61:
+ self.condition = "(%s)" % ' | '.join([ "(%s == %s)" % (self.field,v[0]) for v in values])
+
+ # use a filter after reading
+ else:
+ self.filter = set([ v[1] for v in values ])
+
+ else:
+
+ self.filter = set([ v[1] for v in values ])
+
+ else:
+
+ if self.is_in_table:
+
+ self.condition = '(%s %s %s)' % (self.field, self.op, values[0][0])
+
+ def convert_value(self, v):
+
+ if self.field == 'index':
+ if self.kind == 'datetime64' :
+ return [lib.Timestamp(v).value, None]
+ elif isinstance(v, datetime):
+ return [time.mktime(v.timetuple()), None]
+ elif not isinstance(v, basestring):
+ return [str(v), None]
+
+ # string quoting
+ return ["'" + v + "'", v]
+
class Selection(object):
"""
Carries out a selection operation on a tables.Table object.
Parameters
----------
- table : tables.Table
- where : list of dicts of the following form
-
- Comparison op
- {'field' : 'index',
- 'op' : '>=',
- 'value' : value}
-
- Match single value
- {'field' : 'index',
- 'value' : v1}
+ table : a Table object
+ where : list of Terms (or convertible to)
- Match a set of values
- {'field' : 'index',
- 'value' : [v1, v2, v3]}
"""
- def __init__(self, table, where=None, index_kind=None):
- self.table = table
- self.where = where
- self.index_kind = index_kind
- self.column_filter = None
- self.the_condition = None
- self.conditions = []
- self.values = None
- if where:
- self.generate(where)
+ def __init__(self, table, where=None):
+ self.table = table
+ self.where = where
+ self.values = None
+ self.condition = None
+ self.filter = None
+ self.terms = self.generate(where)
+
+ # create the numexpr & the filter
+ if self.terms:
+ conds = [ t.condition for t in self.terms if t.condition is not None ]
+ if len(conds):
+ self.condition = "(%s)" % ' & '.join(conds)
+ self.filter = set()
+ for t in self.terms:
+ if t.filter is not None:
+ self.filter |= t.filter
def generate(self, where):
- # and condictions
- for c in where:
- op = c.get('op', None)
- value = c['value']
- field = c['field']
-
- if field == 'index' and self.index_kind == 'datetime64':
- val = lib.Timestamp(value).value
- self.conditions.append('(%s %s %s)' % (field, op, val))
- elif field == 'index' and isinstance(value, datetime):
- value = time.mktime(value.timetuple())
- self.conditions.append('(%s %s %s)' % (field, op, value))
- else:
- self.generate_multiple_conditions(op, value, field)
-
- if len(self.conditions):
- self.the_condition = '(' + ' & '.join(self.conditions) + ')'
-
- def generate_multiple_conditions(self, op, value, field):
+ """ where can be a : dict,list,tuple,string """
+ if where is None: return None
- if op and op == 'in' or isinstance(value, (list, np.ndarray)):
- if len(value) <= 61:
- l = '(' + ' | '.join([ "(%s == '%s')" % (field, v)
- for v in value]) + ')'
- self.conditions.append(l)
- else:
- self.column_filter = set(value)
+ if not isinstance(where, (list,tuple)):
+ where = [ where ]
else:
- if op is None:
- op = '=='
- self.conditions.append('(%s %s "%s")' % (field, op, value))
+ # do we have all list/tuple
+ if not any([ isinstance(w, (list,tuple,Term)) for w in where ]):
+ where = [ where ]
+
+ return [ Term(c, kinds = self.table.kinds_map()) for c in where ]
def select(self):
"""
generate the selection
"""
- if self.the_condition:
- self.values = self.table.readWhere(self.the_condition)
-
+ if self.condition is not None:
+ self.values = self.table.table.readWhere(self.condition)
else:
- self.values = self.table.read()
+ self.values = self.table.table.read()
def select_coords(self):
"""
generate the selection
"""
- self.values = self.table.getWhereList(self.the_condition)
+ self.values = self.table.table.getWhereList(self.condition)
def _get_index_factory(klass):
diff --git a/pandas/io/tests/legacy_table.h5 b/pandas/io/tests/legacy_table.h5
new file mode 100644
index 0000000000000..1c90382d9125c
Binary files /dev/null and b/pandas/io/tests/legacy_table.h5 differ
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index afd05610e3427..0f7da8e827615 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -8,10 +8,11 @@
from pandas import (Series, DataFrame, Panel, MultiIndex, bdate_range,
date_range, Index)
-from pandas.io.pytables import HDFStore, get_store
+from pandas.io.pytables import HDFStore, get_store, Term
import pandas.util.testing as tm
from pandas.tests.test_series import assert_series_equal
from pandas.tests.test_frame import assert_frame_equal
+from pandas import concat
try:
import tables
@@ -64,7 +65,9 @@ def test_repr(self):
self.store['b'] = tm.makeStringSeries()
self.store['c'] = tm.makeDataFrame()
self.store['d'] = tm.makePanel()
+ self.store.append('e', tm.makePanel())
repr(self.store)
+ str(self.store)
def test_contains(self):
self.store['a'] = tm.makeTimeSeries()
@@ -139,10 +142,69 @@ def test_put_integer(self):
self._check_roundtrip(df, tm.assert_frame_equal)
def test_append(self):
+ pth = '__test_append__.h5'
+
+ try:
+ store = HDFStore(pth)
+
+ df = tm.makeTimeDataFrame()
+ store.append('df1', df[:10])
+ store.append('df1', df[10:])
+ tm.assert_frame_equal(store['df1'], df)
+
+ store.put('df2', df[:10], table=True)
+ store.append('df2', df[10:])
+ tm.assert_frame_equal(store['df2'], df)
+
+ wp = tm.makePanel()
+ store.append('wp1', wp.ix[:,:10,:])
+ store.append('wp1', wp.ix[:,10:,:])
+ tm.assert_panel_equal(store['wp1'], wp)
+
+ except:
+ raise
+ finally:
+ store.close()
+ os.remove(pth)
+
+ def test_append_with_strings(self):
+ wp = tm.makePanel()
+ wp2 = wp.rename_axis(dict([ (x,"%s_extra" % x) for x in wp.minor_axis ]), axis = 2)
+
+ self.store.append('s1', wp, min_itemsize = 20)
+ self.store.append('s1', wp2)
+ expected = concat([ wp, wp2], axis = 2)
+ expected = expected.reindex(minor_axis = sorted(expected.minor_axis))
+ tm.assert_panel_equal(self.store['s1'], expected)
+
+ # test truncation of bigger strings
+ self.store.append('s2', wp)
+ self.assertRaises(Exception, self.store.append, 's2', wp2)
+
+ def test_create_table_index(self):
+ wp = tm.makePanel()
+ self.store.append('p5', wp)
+ self.store.create_table_index('p5')
+
+ assert(self.store.handle.root.p5.table.cols.index.is_indexed == True)
+ assert(self.store.handle.root.p5.table.cols.column.is_indexed == False)
+
df = tm.makeTimeDataFrame()
- self.store.put('c', df[:10], table=True)
- self.store.append('c', df[10:])
- tm.assert_frame_equal(self.store['c'], df)
+ self.store.append('f', df[:10])
+ self.store.append('f', df[10:])
+ self.store.create_table_index('f')
+
+ # create twice
+ self.store.create_table_index('f')
+
+ # try to index a non-table
+ self.store.put('f2', df)
+ self.assertRaises(Exception, self.store.create_table_index, 'f2')
+
+ # try to change the version supports flag
+ from pandas.io import pytables
+ pytables._table_supports_index = False
+ self.assertRaises(Exception, self.store.create_table_index, 'f')
def test_append_diff_item_order(self):
wp = tm.makePanel()
@@ -153,7 +215,7 @@ def test_append_diff_item_order(self):
self.assertRaises(Exception, self.store.put, 'panel', wp2,
append=True)
- def test_append_incompatible_dtypes(self):
+ def test_table_index_incompatible_dtypes(self):
df1 = DataFrame({'a': [1, 2, 3]})
df2 = DataFrame({'a': [4, 5, 6]},
index=date_range('1/1/2000', periods=3))
@@ -162,6 +224,51 @@ def test_append_incompatible_dtypes(self):
self.assertRaises(Exception, self.store.put, 'frame', df2,
table=True, append=True)
+ def test_table_values_dtypes_roundtrip(self):
+ df1 = DataFrame({'a': [1, 2, 3]}, dtype = 'f8')
+ self.store.append('df1', df1)
+ assert df1.dtypes == self.store['df1'].dtypes
+
+ df2 = DataFrame({'a': [1, 2, 3]}, dtype = 'i8')
+ self.store.append('df2', df2)
+ assert df2.dtypes == self.store['df2'].dtypes
+
+ # incompatible dtype
+ self.assertRaises(Exception, self.store.append, 'df2', df1)
+
+ def test_table_mixed_dtypes(self):
+
+ # frame
+ def _make_one_df():
+ df = tm.makeDataFrame()
+ df['obj1'] = 'foo'
+ df['obj2'] = 'bar'
+ df['bool1'] = df['A'] > 0
+ df['bool2'] = df['B'] > 0
+ df['int1'] = 1
+ df['int2'] = 2
+ return df.consolidate()
+
+ df1 = _make_one_df()
+
+ self.store.append('df1_mixed', df1)
+ tm.assert_frame_equal(self.store.select('df1_mixed'), df1)
+
+ # panel
+ def _make_one_panel():
+ wp = tm.makePanel()
+ wp['obj1'] = 'foo'
+ wp['obj2'] = 'bar'
+ wp['bool1'] = wp['ItemA'] > 0
+ wp['bool2'] = wp['ItemB'] > 0
+ wp['int1'] = 1
+ wp['int2'] = 2
+ return wp.consolidate()
+ p1 = _make_one_panel()
+
+ self.store.append('p1_mixed', p1)
+ tm.assert_panel_equal(self.store.select('p1_mixed'), p1)
+
def test_remove(self):
ts = tm.makeTimeSeries()
df = tm.makeDataFrame()
@@ -174,34 +281,116 @@ def test_remove(self):
self.store.remove('b')
self.assertEquals(len(self.store), 0)
- def test_remove_where_not_exist(self):
- crit1 = {
- 'field' : 'index',
- 'op' : '>',
- 'value' : 'foo'
- }
+ # __delitem__
+ self.store['a'] = ts
+ self.store['b'] = df
+ del self.store['a']
+ del self.store['b']
+ self.assertEquals(len(self.store), 0)
+
+ def test_remove_where(self):
+
+ # non-existence
+ crit1 = Term('index','>','foo')
self.store.remove('a', where=[crit1])
+ # remove a subset of columns from a stored table
+ wp = tm.makePanel()
+ self.store.put('wp', wp, table=True)
+ self.store.remove('wp', [('column', ['A', 'D'])])
+ rs = self.store.select('wp')
+ expected = wp.reindex(minor_axis = ['B','C'])
+ tm.assert_panel_equal(rs,expected)
+
+ # removing from a non-table with a where
+ self.store.put('wp2', wp, table=False)
+ self.assertRaises(Exception, self.store.remove,
+ 'wp2', [('column', ['A', 'D'])])
+
+
def test_remove_crit(self):
wp = tm.makePanel()
self.store.put('wp', wp, table=True)
date = wp.major_axis[len(wp.major_axis) // 2]
- crit1 = {
- 'field' : 'index',
- 'op' : '>',
- 'value' : date
- }
- crit2 = {
- 'field' : 'column',
- 'value' : ['A', 'D']
- }
+ crit1 = Term('index','>',date)
+ crit2 = Term('column',['A', 'D'])
self.store.remove('wp', where=[crit1])
self.store.remove('wp', where=[crit2])
result = self.store['wp']
expected = wp.truncate(after=date).reindex(minor=['B', 'C'])
tm.assert_panel_equal(result, expected)
+ # test non-consecutive row removal
+ wp = tm.makePanel()
+ self.store.put('wp2', wp, table=True)
+
+ date1 = wp.major_axis[1:3]
+ date2 = wp.major_axis[5]
+ date3 = [wp.major_axis[7],wp.major_axis[9]]
+
+ crit1 = Term('index',date1)
+ crit2 = Term('index',date2)
+ crit3 = Term('index',date3)
+
+ self.store.remove('wp2', where=[crit1])
+ self.store.remove('wp2', where=[crit2])
+ self.store.remove('wp2', where=[crit3])
+ result = self.store['wp2']
+
+ ma = list(wp.major_axis)
+ for d in date1:
+ ma.remove(d)
+ ma.remove(date2)
+ for d in date3:
+ ma.remove(d)
+ expected = wp.reindex(major = ma)
+ tm.assert_panel_equal(result, expected)
+
+ def test_terms(self):
+
+ wp = tm.makePanel()
+ self.store.put('wp', wp, table=True)
+
+ # some invalid terms
+ terms = [
+ [ 'minor', ['A','B'] ],
+ [ 'index', ['20121114'] ],
+ [ 'index', ['20121114', '20121114'] ],
+ ]
+ for t in terms:
+ self.assertRaises(Exception, self.store.select, 'wp', t)
+
+ self.assertRaises(Exception, Term.__init__)
+ self.assertRaises(Exception, Term.__init__, 'blah')
+ self.assertRaises(Exception, Term.__init__, 'index')
+ self.assertRaises(Exception, Term.__init__, 'index', '==')
+ self.assertRaises(Exception, Term.__init__, 'index', '>', 5)
+
+ result = self.store.select('wp',[ Term('major_axis<20000108'), Term('minor_axis', '=', ['A','B']) ])
+ expected = wp.truncate(after='20000108').reindex(minor=['A', 'B'])
+ tm.assert_panel_equal(result, expected)
+
+ # valid terms
+ terms = [
+ dict(field = 'index', op = '>', value = '20121114'),
+ ('index', '20121114'),
+ ('index', '>', '20121114'),
+ (('index', ['20121114','20121114']),),
+ ('index', datetime(2012,11,14)),
+ 'index>20121114',
+ 'major>20121114',
+ 'major_axis>20121114',
+ (('minor', ['A','B']),),
+ (('minor_axis', ['A','B']),),
+ ((('minor_axis', ['A','B']),),),
+ (('column', ['A','B']),),
+ ]
+
+ for t in terms:
+ self.store.select('wp', t)
+
def test_series(self):
s = tm.makeStringSeries()
self._check_roundtrip(s, tm.assert_series_equal)
@@ -461,10 +650,6 @@ def _make_one():
self.store['obj'] = df2
tm.assert_frame_equal(self.store['obj'], df2)
- # storing in Table not yet supported
- self.assertRaises(Exception, self.store.put, 'foo',
- df1, table=True)
-
# check that can store Series of all of these types
self._check_roundtrip(df1['obj1'], tm.assert_series_equal)
self._check_roundtrip(df1['bool1'], tm.assert_series_equal)
@@ -521,43 +706,45 @@ def test_overwrite_node(self):
tm.assert_series_equal(self.store['a'], ts)
+ def test_select(self):
+ wp = tm.makePanel()
+
+ # put/select ok
+ self.store.put('wp', wp, table=True)
+ self.store.select('wp')
+
+ # non-table ok (where = None)
+ self.store.put('wp2', wp, table=False)
+ self.store.select('wp2')
+
+ # selecting from a non-table with a where
+ self.assertRaises(Exception, self.store.select,
+ 'wp2', ('column', ['A', 'D']))
+
def test_panel_select(self):
wp = tm.makePanel()
self.store.put('wp', wp, table=True)
date = wp.major_axis[len(wp.major_axis) // 2]
- crit1 = {
- 'field' : 'index',
- 'op' : '>=',
- 'value' : date
- }
- crit2 = {
- 'field' : 'column',
- 'value' : ['A', 'D']
- }
+ crit1 = ('index','>=',date)
+ crit2 = ('column', '=', ['A', 'D'])
result = self.store.select('wp', [crit1, crit2])
expected = wp.truncate(before=date).reindex(minor=['A', 'D'])
tm.assert_panel_equal(result, expected)
+ result = self.store.select('wp', [ 'major_axis>=20000124', ('minor_axis', '=', ['A','B']) ])
+ expected = wp.truncate(before='20000124').reindex(minor=['A', 'B'])
+ tm.assert_panel_equal(result, expected)
+
def test_frame_select(self):
df = tm.makeTimeDataFrame()
self.store.put('frame', df, table=True)
date = df.index[len(df) // 2]
- crit1 = {
- 'field' : 'index',
- 'op' : '>=',
- 'value' : date
- }
- crit2 = {
- 'field' : 'column',
- 'value' : ['A', 'D']
- }
- crit3 = {
- 'field' : 'column',
- 'value' : 'A'
- }
+ crit1 = ('index','>=',date)
+ crit2 = ('column',['A', 'D'])
+ crit3 = ('column','A')
result = self.store.select('frame', [crit1, crit2])
expected = df.ix[date:, ['A', 'D']]
@@ -578,10 +765,7 @@ def test_select_filter_corner(self):
df.columns = ['%.3d' % c for c in df.columns]
self.store.put('frame', df, table=True)
- crit = {
- 'field' : 'column',
- 'value' : df.columns[:75]
- }
+ crit = Term('column', df.columns[:75])
result = self.store.select('frame', [crit])
tm.assert_frame_equal(result, df.ix[:, df.columns[:75]])
@@ -641,6 +825,15 @@ def test_legacy_read(self):
store['d']
store.close()
+ def test_legacy_table_read(self):
+ # legacy table types
+ pth = curpath()
+ store = HDFStore(os.path.join(pth, 'legacy_table.h5'), 'r')
+ store.select('df1')
+ store.select('df2')
+ store.select('wp1')
+ store.close()
+
def test_store_datetime_fractional_secs(self):
dt = datetime(2012, 1, 2, 3, 4, 5, 123456)
series = Series([0], [dt])
diff --git a/pandas/src/pytables.pyx b/pandas/src/pytables.pyx
new file mode 100644
index 0000000000000..b4dc4f5995f71
--- /dev/null
+++ b/pandas/src/pytables.pyx
@@ -0,0 +1,97 @@
+### pytables extensions ###
+
+from numpy cimport ndarray, int32_t, float64_t, int64_t
+cimport numpy as np
+
+cimport cython
+
+import numpy as np
+import operator
+import sys
+
+np.import_array()
+np.import_ufunc()
+
+
+from cpython cimport (PyDict_New, PyDict_GetItem, PyDict_SetItem,
+ PyDict_Contains, PyDict_Keys,
+ Py_INCREF, PyTuple_SET_ITEM,
+ PyTuple_SetItem,
+ PyTuple_New,
+ PyObject_SetAttrString)
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def create_hdf_rows_2d(ndarray index, ndarray[np.uint8_t, ndim=1] mask, list values):
+ """ return a list of objects ready to be converted to rec-array format """
+
+ cdef:
+ unsigned int i, b, n_index, n_blocks, tup_size
+ ndarray v
+ list l
+ object tup, val
+
+ n_index = index.shape[0]
+ n_blocks = len(values)
+ tup_size = n_blocks+1
+ l = []
+ for i from 0 <= i < n_index:
+
+ if not mask[i]:
+
+ tup = PyTuple_New(tup_size)
+ val = index[i]
+ PyTuple_SET_ITEM(tup, 0, val)
+ Py_INCREF(val)
+
+ for b from 0 <= b < n_blocks:
+
+ v = values[b][:, i]
+ PyTuple_SET_ITEM(tup, b+1, v)
+ Py_INCREF(v)
+
+ l.append(tup)
+
+ return l
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def create_hdf_rows_3d(ndarray index, ndarray columns, ndarray[np.uint8_t, ndim=2] mask, list values):
+ """ return a list of objects ready to be converted to rec-array format """
+
+ cdef:
+ unsigned int i, c, n_columns, n_index, n_blocks, tup_size
+ ndarray v
+ list l
+ object tup, val
+
+ n_index = index.shape[0]
+ n_columns = columns.shape[0]
+ n_blocks = len(values)
+ tup_size = n_blocks+2
+ l = []
+ for i from 0 <= i < n_index:
+
+ for c from 0 <= c < n_columns:
+
+ if not mask[i, c]:
+
+ tup = PyTuple_New(tup_size)
+
+ val = columns[c]
+ PyTuple_SET_ITEM(tup, 0, val)
+ Py_INCREF(val)
+
+ val = index[i]
+ PyTuple_SET_ITEM(tup, 1, val)
+ Py_INCREF(val)
+
+ for b from 0 <= b < n_blocks:
+
+ v = values[b][:, i, c]
+ PyTuple_SET_ITEM(tup, b+2, v)
+ Py_INCREF(v)
+
+ l.append(tup)
+
+ return l
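The Cython routines above can be hard to read in diff form. A hypothetical pure-Python equivalent of `create_hdf_rows_2d` (simplified: each block is a flat per-row sequence rather than a 2-D array slice, and the mask is any falsy/truthy sequence) looks like this:

```python
# Simplified, pure-Python sketch of the row-building loop above: for each
# unmasked index position, emit a tuple of (index value, one value per block),
# ready to be handed off for rec-array conversion.
def create_hdf_rows_2d(index, mask, values):
    rows = []
    for i, idx in enumerate(index):
        if not mask[i]:
            rows.append((idx,) + tuple(v[i] for v in values))
    return rows
```

The Cython version exists purely for speed: it preallocates each tuple with `PyTuple_New` and sets items directly, avoiding per-element Python overhead in the hot loop.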
diff --git a/setup.py b/setup.py
index e31659b3ee15f..ca152588b9554 100755
--- a/setup.py
+++ b/setup.py
@@ -620,6 +620,11 @@ def srcpath(name=None, suffix='.pyx', subdir='src'):
sources=[srcpath('sandbox', suffix=suffix)],
include_dirs=common_include)
+pytables_ext = Extension('pandas._pytables',
+ sources=[srcpath('pytables', suffix=suffix)],
+ include_dirs=[np.get_include()],
+ libraries=libraries)
+
cppsandbox_ext = Extension('pandas._cppsandbox',
language='c++',
sources=[srcpath('cppsandbox', suffix=suffix)],
@@ -629,6 +634,7 @@ def srcpath(name=None, suffix='.pyx', subdir='src'):
lib_ext,
period_ext,
sparse_ext,
+ pytables_ext,
parser_ext]
# if not ISRELEASED:
| Refactor of PyTables support to allow multiple table types.
This commit allows for support of multiple table types in a pytables hdf file,
supporting the existing infrastructure in a backwards compatible manner (LegacyTable)
while extending to a slightly modified format to support AppendableTables and future support of WORMTables
AppendableTables are implementations of the current table format with two enhancements:
- mixed dtype support
- writing routines in cython for enhanced performance
WORMTables (not implemented - but pretty straightforward)
these tables can support a fixed 'table' (meaning not-appendable), that is searchable via queries
this would have greatly enhanced write performance compared with AppendableTables, and a similar read performance profile
In addition, the tables allow for arbitrary axes to be indexed (e.g. you could save a panel that allows indexing on major_axis,minor_axis AND items),
so all dimensions are queryable (currently only major/minor axes allow this query)
all tests pass (with 1 exception):
a frame table round-trip fails on a comparison of the sorted index of the frame vs the index of the table (which is stored as written); not sure why this should be the case?
| https://api.github.com/repos/pandas-dev/pandas/pulls/2371 | 2012-11-27T21:30:00Z | 2012-11-29T00:40:18Z | 2012-11-29T00:40:18Z | 2012-11-29T16:47:29Z |
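The string-expression form of the `Term` API described in this PR (e.g. `'index>20121114'`) rests on a small regex match into `field`/`op`/`value` groups. A hypothetical, simplified model of just that parsing step (the real class also accepts dict and tuple forms, validates ops, and quotes string values):

```python
import re

# Simplified sketch of Term's string parsing; ops are ordered longest-first
# so that '>=' is not consumed as '>' followed by a stray '='.
_SEARCH = re.compile(r"^(?P<field>\w+)(?P<op>==|!=|>=|<=|=|>|<)(?P<value>.+)$")

def parse_term(expr):
    """Parse a string like 'index>20121114' into (field, op, value)."""
    m = _SEARCH.match(expr)
    if m is None:
        raise ValueError("could not parse term: %r" % expr)
    return m.group("field"), m.group("op"), m.group("value")
```

For example, `parse_term('major_axis>=20000124')` splits into the field `major_axis`, the op `>=`, and the value `20000124`, mirroring the terms exercised in `test_terms` above.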
Excelfancy | diff --git a/pandas/core/format.py b/pandas/core/format.py
index d13cee0b24da2..db50955c13c3e 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -17,6 +17,9 @@
import numpy as np
+import itertools
+
+
docstring_to_string = """
Parameters
----------
@@ -400,6 +403,7 @@ def _get_column_name_list(self):
names.append('' if columns.name is None else columns.name)
return names
+
class HTMLFormatter(object):
indent_delta = 2
@@ -674,6 +678,217 @@ def grouper(x):
return result
+
+#from collections import namedtuple
+# ExcelCell = namedtuple("ExcelCell",
+# 'row, col, val, style, mergestart, mergeend')
+
+class ExcelCell:
+ __fields__ = ('row', 'col', 'val', 'style', 'mergestart', 'mergeend')
+ __slots__ = __fields__
+
+ def __init__(self, row, col, val,
+ style=None, mergestart=None, mergeend=None):
+ self.row = row
+ self.col = col
+ self.val = val
+ self.style = style
+ self.mergestart = mergestart
+ self.mergeend = mergeend
+
+
+header_style = {"font": {"bold": True},
+ "borders": {"top": "thin",
+ "right": "thin",
+ "bottom": "thin",
+ "left": "thin"},
+ "alignment": {"horizontal": "center"}}
+
+
+class ExcelFormatter(object):
+ """
+ Class for formatting a DataFrame to a list of ExcelCells,
+
+ Parameters
+ ----------
+ df : dataframe
+ na_rep: na representation
+ float_format : string, default None
+ Format string for floating point numbers
+ cols : sequence, optional
+ Columns to write
+ header : boolean or list of string, default True
+ Write out column names. If a list of string is given it is
+ assumed to be aliases for the column names
+ index : boolean, default True
+ output row names (index)
+ index_label : string or sequence, default None
+ Column label for index column(s) if desired. If None is given, and
+ `header` and `index` are True, then the index names are used. A
+ sequence should be given if the DataFrame uses MultiIndex.
+ """
+
+ def __init__(self,
+ df,
+ na_rep='',
+ float_format=None,
+ cols=None,
+ header=True,
+ index=True,
+ index_label=None
+ ):
+ self.df = df
+ self.rowcounter = 0
+ self.na_rep = na_rep
+ self.columns = cols
+ if cols is None:
+ self.columns = df.columns
+ self.float_format = float_format
+ self.index = index
+ self.index_label = index_label
+ self.header = header
+
+ def _format_value(self, val):
+ if lib.checknull(val):
+ val = self.na_rep
+ if self.float_format is not None and com.is_float(val):
+ val = float(self.float_format % val)
+ return val
+
+ def _format_header_mi(self):
+ levels = self.columns.format(sparsify=True, adjoin=False,
+ names=False)
+ level_lengths = _get_level_lengths(levels)
+ coloffset = 0
+ if isinstance(self.df.index, MultiIndex):
+ coloffset = len(self.df.index[0]) - 1
+
+ for lnum, (records, values) in enumerate(zip(level_lengths,
+ levels)):
+ name = self.columns.names[lnum]
+ yield ExcelCell(lnum, coloffset, name, header_style)
+ for i in records:
+ if records[i] > 1:
+ yield ExcelCell(lnum,coloffset + i + 1, values[i],
+ header_style, lnum, coloffset + i + records[i])
+ else:
+ yield ExcelCell(lnum, coloffset + i + 1, values[i], header_style)
+
+ self.rowcounter = lnum
+
+ def _format_header_regular(self):
+ has_aliases = isinstance(self.header, (tuple, list, np.ndarray))
+ if has_aliases or self.header:
+ coloffset = 0
+ if self.index:
+ coloffset = 1
+ if isinstance(self.df.index, MultiIndex):
+ coloffset = len(self.df.index[0])
+
+ colnames = self.columns
+ if has_aliases:
+ if len(self.header) != len(self.columns):
+ raise ValueError(('Writing %d cols but got %d aliases'
+ % (len(self.columns), len(self.header))))
+ else:
+ colnames = self.header
+
+ for colindex, colname in enumerate(colnames):
+ yield ExcelCell(self.rowcounter, colindex + coloffset, colname,
+ header_style)
+
+ def _format_header(self):
+ if isinstance(self.columns, MultiIndex):
+ gen = self._format_header_mi()
+ else:
+ gen = self._format_header_regular()
+
+ gen2 = ()
+ if self.df.index.names:
+ row = [x if x is not None else ''
+ for x in self.df.index.names] + [''] * len(self.columns)
+ if reduce(lambda x, y: x and y, map(lambda x: x != '', row)):
+ gen2 = (ExcelCell(self.rowcounter, colindex, val, header_style)
+ for colindex, val in enumerate(row))
+ self.rowcounter += 1
+ return itertools.chain(gen, gen2)
+
+ def _format_body(self):
+
+ if isinstance(self.df.index, MultiIndex):
+ return self._format_hierarchical_rows()
+ else:
+ return self._format_regular_rows()
+
+ def _format_regular_rows(self):
+ self.rowcounter += 1
+
+ coloffset = 0
+ #output index and index_label?
+ if self.index:
+ #check aliases
+ #if list only take first as this is not a MultiIndex
+ if self.index_label and isinstance(self.index_label,
+ (list, tuple, np.ndarray)):
+ index_label = self.index_label[0]
+ #if string good to go
+ elif self.index_label and isinstance(self.index_label, str):
+ index_label = self.index_label
+ else:
+ index_label = self.df.index.names[0]
+
+ if index_label:
+ yield ExcelCell(self.rowcounter, 0,
+ index_label, header_style)
+ self.rowcounter += 1
+
+ #write index_values
+ index_values = self.df.index
+ coloffset = 1
+ for idx, idxval in enumerate(index_values):
+ yield ExcelCell(self.rowcounter + idx, 0, idxval, header_style)
+
+ for colidx, colname in enumerate(self.columns):
+ series = self.df[colname]
+ for i, val in enumerate(series):
+ yield ExcelCell(self.rowcounter + i, colidx + coloffset, val)
+
+ def _format_hierarchical_rows(self):
+ self.rowcounter += 1
+
+ gcolidx = 0
+ #output index and index_label?
+ if self.index:
+ index_labels = self.df.index.names
+ #check for aliases
+ if self.index_label and isinstance(self.index_label,
+ (list, tuple, np.ndarray)):
+ index_labels = self.index_label
+
+ #if index labels are not empty go ahead and dump
+ if filter(lambda x: x is not None, index_labels):
+ for cidx, name in enumerate(index_labels):
+ yield ExcelCell(self.rowcounter, cidx,
+ name, header_style)
+ self.rowcounter += 1
+
+ for indexcolvals in zip(*self.df.index):
+ for idx, indexcolval in enumerate(indexcolvals):
+ yield ExcelCell(self.rowcounter + idx, gcolidx,
+ indexcolval, header_style)
+ gcolidx += 1
+
+ for colidx, colname in enumerate(self.columns):
+ series = self.df[colname]
+ for i, val in enumerate(series):
+ yield ExcelCell(self.rowcounter + i, gcolidx + colidx, val)
+
+ def get_formatted_cells(self):
+ for cell in itertools.chain(self._format_header(),
+ self._format_body()):
+ cell.val = self._format_value(cell.val)
+ yield cell
+
#----------------------------------------------------------------------
# Array formatters
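The `ExcelFormatter`/`ExcelCell` split above decouples layout from output: the formatter yields positioned cells, and the writer places them at an offset, which is what makes `to_excel`'s new `startrow`/`startcol` parameters cheap to support. A minimal, hypothetical standalone sketch of that streaming pattern (names here are illustrative, not the PR's API):

```python
# Standalone sketch of the cell-streaming design: a formatter yields
# (row, col, val) cells; the writer applies a start offset when placing them.
class Cell(object):
    def __init__(self, row, col, val):
        self.row, self.col, self.val = row, col, val

def format_cells(header, rows):
    # header occupies row 0; data rows follow
    for c, name in enumerate(header):
        yield Cell(0, c, name)
    for r, row in enumerate(rows):
        for c, val in enumerate(row):
            yield Cell(r + 1, c, val)

def write_cells(cells, startrow=0, startcol=0):
    # returns {(row, col): val} as a stand-in for a worksheet
    return dict(((c.row + startrow, c.col + startcol), c.val)
                for c in cells)
```

Because cells carry absolute positions, the same generator output can be written anywhere on a sheet by shifting the offsets, with no changes to the formatter.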
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 35d895bed43f1..ebe361a33b28c 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1221,7 +1221,7 @@ def to_panel(self):
to_wide = deprecate('to_wide', to_panel)
- def _helper_csvexcel(self, writer, na_rep=None, cols=None,
+ def _helper_csv(self, writer, na_rep=None, cols=None,
header=True, index=True,
index_label=None, float_format=None):
if cols is None:
@@ -1356,7 +1356,7 @@ def to_csv(self, path_or_buf, sep=",", na_rep='', float_format=None,
else:
csvout = csv.writer(f, lineterminator='\n', delimiter=sep,
quoting=quoting)
- self._helper_csvexcel(csvout, na_rep=na_rep,
+ self._helper_csv(csvout, na_rep=na_rep,
float_format=float_format, cols=cols,
header=header, index=index,
index_label=index_label)
@@ -1367,7 +1367,7 @@ def to_csv(self, path_or_buf, sep=",", na_rep='', float_format=None,
def to_excel(self, excel_writer, sheet_name='sheet1', na_rep='',
float_format=None, cols=None, header=True, index=True,
- index_label=None):
+ index_label=None, startrow=0, startcol=0):
"""
Write DataFrame to a excel sheet
@@ -1392,6 +1392,9 @@ def to_excel(self, excel_writer, sheet_name='sheet1', na_rep='',
Column label for index column(s) if desired. If None is given, and
`header` and `index` are True, then the index names are used. A
sequence should be given if the DataFrame uses MultiIndex.
+ startrow : upper left cell row to dump data frame
+ startcol : upper left cell column to dump data frame
+
Notes
-----
@@ -1408,11 +1411,17 @@ def to_excel(self, excel_writer, sheet_name='sheet1', na_rep='',
if isinstance(excel_writer, basestring):
excel_writer = ExcelWriter(excel_writer)
need_save = True
- excel_writer.cur_sheet = sheet_name
- self._helper_csvexcel(excel_writer, na_rep=na_rep,
- float_format=float_format, cols=cols,
- header=header, index=index,
- index_label=index_label)
+
+ formatter = fmt.ExcelFormatter(self,
+ na_rep=na_rep,
+ cols=cols,
+ header=header,
+ float_format=float_format,
+ index=index,
+ index_label=index_label)
+ formatted_cells = formatter.get_formatted_cells()
+ excel_writer.write_cells(formatted_cells, sheet_name,
+ startrow=startrow, startcol=startcol)
if need_save:
excel_writer.save()
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index a5fc7ebeed101..14a01b38ae88e 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -6,6 +6,7 @@
from itertools import izip
from urlparse import urlparse
import csv
+import xlwt
import numpy as np
@@ -20,6 +21,7 @@
import pandas.lib as lib
import pandas._parser as _parser
+from pandas.tseries.period import Period
class DateConversionError(Exception):
pass
@@ -456,6 +458,8 @@ def __init__(self, f, engine='python', **kwds):
# might mutate self.engine
self.options, self.engine = self._clean_options(options, engine)
+ if 'has_index_labels' in kwds:
+ self.options['has_index_labels'] = kwds['has_index_labels']
self._make_engine(self.engine)
@@ -931,6 +935,9 @@ def TextParser(*args, **kwds):
rows will be discarded
index_col : int or list, default None
Column or columns to use as the (possibly hierarchical) index
+ has_index_labels: boolean, default False
+ True if the cols defined in index_col have an index name and are
+ not in the header
na_values : iterable, default None
Custom NA values
keep_default_na : bool, default True
@@ -969,6 +976,10 @@ def TextParser(*args, **kwds):
# verbose=False, encoding=None, squeeze=False):
+def count_empty_vals(vals):
+ return sum([1 for v in vals if v == '' or v is None])
+
+
class PythonParser(ParserBase):
def __init__(self, f, **kwds):
@@ -995,6 +1006,9 @@ def __init__(self, f, **kwds):
self.doublequote = kwds['doublequote']
self.skipinitialspace = kwds['skipinitialspace']
self.quoting = kwds['quoting']
+ self.has_index_labels = False
+ if 'has_index_labels' in kwds:
+ self.has_index_labels = kwds['has_index_labels']
self.verbose = kwds['verbose']
self.converters = kwds['converters']
@@ -1099,6 +1113,13 @@ def read(self, rows=None):
self.index_col,
self.index_names)
+ #handle new style for names in index
+ count_empty_content_vals = count_empty_vals(content[0])
+ indexnamerow = None
+ if self.has_index_labels and count_empty_content_vals == len(columns):
+ indexnamerow = content[0]
+ content = content[1:]
+
alldata = self._rows_to_cols(content)
data = self._exclude_implicit_index(alldata)
@@ -1106,6 +1127,9 @@ def read(self, rows=None):
data = self._convert_data(data)
index = self._make_index(data, alldata, columns)
+ if indexnamerow:
+ coffset = len(indexnamerow) - len(columns)
+ index.names = indexnamerow[:coffset]
return index, columns, data
@@ -1699,7 +1723,7 @@ def __repr__(self):
return object.__repr__(self)
def parse(self, sheetname, header=0, skiprows=None, skip_footer=0,
- index_col=None, parse_cols=None, parse_dates=False,
+ index_col=None, has_index_labels=False, parse_cols=None, parse_dates=False,
date_parser=None, na_values=None, thousands=None, chunksize=None,
**kwds):
"""
@@ -1718,6 +1742,9 @@ def parse(self, sheetname, header=0, skiprows=None, skip_footer=0,
index_col : int, default None
Column to use as the row labels of the DataFrame. Pass None if
there is no such column
+ has_index_labels: boolean, default False
+ True if the cols defined in index_col have an index name and are
+ not in the header
parse_cols : int or list, default None
If None then parse all columns,
If int then indicates last column to be parsed
@@ -1739,6 +1766,7 @@ def parse(self, sheetname, header=0, skiprows=None, skip_footer=0,
False: self._parse_xls}
return choose[self.use_xlsx](sheetname, header=header,
skiprows=skiprows, index_col=index_col,
+ has_index_labels=has_index_labels,
parse_cols=parse_cols,
parse_dates=parse_dates,
date_parser=date_parser,
@@ -1780,7 +1808,7 @@ def _excel2num(x):
return i in parse_cols
def _parse_xlsx(self, sheetname, header=0, skiprows=None,
- skip_footer=0, index_col=None,
+ skip_footer=0, index_col=None, has_index_labels=False,
parse_cols=None, parse_dates=False, date_parser=None,
na_values=None, thousands=None, chunksize=None):
sheet = self.book.get_sheet_by_name(name=sheetname)
@@ -1804,6 +1832,7 @@ def _parse_xlsx(self, sheetname, header=0, skiprows=None,
data[header] = _trim_excel_header(data[header])
parser = TextParser(data, header=header, index_col=index_col,
+ has_index_labels=has_index_labels,
na_values=na_values,
thousands=thousands,
parse_dates=parse_dates,
@@ -1815,7 +1844,7 @@ def _parse_xlsx(self, sheetname, header=0, skiprows=None,
return parser.read()
def _parse_xls(self, sheetname, header=0, skiprows=None,
- skip_footer=0, index_col=None,
+ skip_footer=0, index_col=None, has_index_labels=None,
parse_cols=None, parse_dates=False, date_parser=None,
na_values=None, thousands=None, chunksize=None):
from xlrd import xldate_as_tuple, XL_CELL_DATE, XL_CELL_ERROR
@@ -1849,6 +1878,7 @@ def _parse_xls(self, sheetname, header=0, skiprows=None,
data[header] = _trim_excel_header(data[header])
parser = TextParser(data, header=header, index_col=index_col,
+ has_index_labels=has_index_labels,
na_values=na_values,
thousands=thousands,
parse_dates=parse_dates,
@@ -1869,11 +1899,97 @@ def sheet_names(self):
def _trim_excel_header(row):
# trim header row so auto-index inference works
- while len(row) > 0 and row[0] == '':
+ # xlrd uses '', openpyxl uses None
+ while len(row) > 0 and (row[0] == '' or row[0] is None):
row = row[1:]
return row
+class CellStyleConverter(object):
+ """
+ Utility class which converts a style dict to an xlwt or openpyxl style
+ """
+
+ @staticmethod
+ def to_xls(style_dict):
+ """
+ converts a style_dict to an xlwt style object
+ Parameters
+ ----------
+ style_dict: style dictionary to convert
+ """
+ def style_to_xlwt(item, firstlevel=True, field_sep=',', line_sep=';'):
+ """helper wich recursively generate an xlwt easy style string
+ for example:
+
+ hstyle = {"font": {"bold": True},
+ "border": {"top": "thin",
+ "right": "thin",
+ "bottom": "thin",
+ "left": "thin"},
+ "align": {"horiz": "center"}}
+ will be converted to
+ font: bold on; \
+ border: top thin, right thin, bottom thin, left thin; \
+ align: horiz center;
+ """
+ if hasattr(item, 'items'):
+ if firstlevel:
+ it = ["%s: %s" % (key, style_to_xlwt(value, False))
+ for key, value in item.items()]
+ out = "%s " % (line_sep).join(it)
+ return out
+ else:
+ it = ["%s %s" % (key, style_to_xlwt(value, False))
+ for key, value in item.items()]
+ out = "%s " % (field_sep).join(it)
+ return out
+ else:
+ item = "%s" % item
+ item = item.replace("True", "on")
+ item = item.replace("False", "off")
+ return item
+
+ if style_dict:
+ xlwt_stylestr = style_to_xlwt(style_dict)
+ return xlwt.easyxf(xlwt_stylestr, field_sep=',', line_sep=';')
+ else:
+ return xlwt.XFStyle()
+
+ @staticmethod
+ def to_xlsx(style_dict):
+ """
+ converts a style_dict to an openpyxl style object
+ Parameters
+ ----------
+ style_dict: style dictionary to convert
+ """
+
+ from openpyxl.style import Style
+ xls_style = Style()
+ for key, value in style_dict.items():
+ for nk, nv in value.items():
+ if key == "borders":
+ (xls_style.borders.__getattribute__(nk)
+ .__setattr__('border_style', nv))
+ else:
+ xls_style.__getattribute__(key).__setattr__(nk, nv)
+
+ return xls_style
+
+
+def _conv_value(val):
+ #convert value for excel dump
+ if isinstance(val, np.int64):
+ val = int(val)
+ elif isinstance(val, np.bool8):
+ val = bool(val)
+ elif isinstance(val, Period):
+ val = "%s" % val
+
+ return val
+
+
class ExcelWriter(object):
"""
Class for writing DataFrame objects into excel sheets, uses xlwt for xls,
@@ -1890,11 +2006,15 @@ def __init__(self, path):
self.use_xlsx = False
import xlwt
self.book = xlwt.Workbook()
- self.fm_datetime = xlwt.easyxf(num_format_str='YYYY-MM-DD HH:MM:SS')
+ self.fm_datetime = xlwt.easyxf(
+ num_format_str='YYYY-MM-DD HH:MM:SS')
self.fm_date = xlwt.easyxf(num_format_str='YYYY-MM-DD')
else:
from openpyxl.workbook import Workbook
- self.book = Workbook(optimized_write=True)
+ self.book = Workbook()#optimized_write=True)
+ # openpyxl 1.6.1 adds a dummy sheet; remove it
+ if self.book.worksheets:
+ self.book.remove_sheet(self.book.worksheets[0])
self.path = path
self.sheets = {}
self.cur_sheet = None
@@ -1905,16 +2025,18 @@ def save(self):
"""
self.book.save(self.path)
- def writerow(self, row, sheet_name=None):
+ def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0):
"""
- Write the given row into Excel an excel sheet
+ Write the given formatted cells into an Excel sheet
Parameters
----------
- row : list
- Row of data to save to Excel sheet
+ cells : generator
+ cells of formatted data to save to an Excel sheet
sheet_name : string, default None
Name of Excel sheet, if None, then use self.cur_sheet
+ startrow: upper left cell row to dump data frame
+ startcol: upper left cell column to dump data frame
"""
if sheet_name is None:
sheet_name = self.cur_sheet
@@ -1922,49 +2044,69 @@ def writerow(self, row, sheet_name=None):
raise Exception('Must pass explicit sheet_name or set '
'cur_sheet property')
if self.use_xlsx:
- self._writerow_xlsx(row, sheet_name)
+ self._writecells_xlsx(cells, sheet_name, startrow, startcol)
else:
- self._writerow_xls(row, sheet_name)
+ self._writecells_xls(cells, sheet_name, startrow, startcol)
+
+ def _writecells_xlsx(self, cells, sheet_name, startrow, startcol):
+
+ from openpyxl.cell import get_column_letter
- def _writerow_xls(self, row, sheet_name):
if sheet_name in self.sheets:
- sheet, row_idx = self.sheets[sheet_name]
+ wks = self.sheets[sheet_name]
else:
- sheet = self.book.add_sheet(sheet_name)
- row_idx = 0
- sheetrow = sheet.row(row_idx)
- for i, val in enumerate(row):
- if isinstance(val, (datetime.datetime, datetime.date)):
- if isinstance(val, datetime.datetime):
- sheetrow.write(i, val, self.fm_datetime)
- else:
- sheetrow.write(i, val, self.fm_date)
- elif isinstance(val, np.int64):
- sheetrow.write(i, int(val))
- elif isinstance(val, np.bool8):
- sheetrow.write(i, bool(val))
- else:
- sheetrow.write(i, val)
- row_idx += 1
- if row_idx == 1000:
- sheet.flush_row_data()
- self.sheets[sheet_name] = (sheet, row_idx)
-
- def _writerow_xlsx(self, row, sheet_name):
+ wks = self.book.create_sheet()
+ wks.title = sheet_name
+ self.sheets[sheet_name] = wks
+
+ for cell in cells:
+ colletter = get_column_letter(startcol + cell.col + 1)
+ xcell = wks.cell("%s%s" % (colletter, startrow + cell.row + 1))
+ xcell.value = _conv_value(cell.val)
+ if cell.style:
+ style = CellStyleConverter.to_xlsx(cell.style)
+ for field in style.__fields__:
+ xcell.style.__setattr__(field,
+ style.__getattribute__(field))
+
+ if isinstance(cell.val, datetime.datetime):
+ xcell.style.number_format.format_code = "YYYY-MM-DD HH:MM:SS"
+ elif isinstance(cell.val, datetime.date):
+ xcell.style.number_format.format_code = "YYYY-MM-DD"
+
+ #merging requires openpyxl latest (works on 1.6.1)
+ #todo add version check
+ if cell.mergestart is not None and cell.mergeend is not None:
+ cletterstart = get_column_letter(startcol + cell.col + 1)
+ cletterend = get_column_letter(startcol + cell.mergeend + 1)
+
+ wks.merge_cells('%s%s:%s%s' % (cletterstart,
+ startrow + cell.row + 1,
+ cletterend,
+ startrow + cell.mergestart + 1))
+
+ def _writecells_xls(self, cells, sheet_name, startrow, startcol):
if sheet_name in self.sheets:
- sheet, row_idx = self.sheets[sheet_name]
+ wks = self.sheets[sheet_name]
else:
- sheet = self.book.create_sheet()
- sheet.title = sheet_name
- row_idx = 0
-
- conv_row = []
- for val in row:
- if isinstance(val, np.int64):
- val = int(val)
- elif isinstance(val, np.bool8):
- val = bool(val)
- conv_row.append(val)
- sheet.append(conv_row)
- row_idx += 1
- self.sheets[sheet_name] = (sheet, row_idx)
+ wks = self.book.add_sheet(sheet_name)
+ self.sheets[sheet_name] = wks
+
+ for cell in cells:
+ val = _conv_value(cell.val)
+ style = CellStyleConverter.to_xls(cell.style)
+ if isinstance(val, datetime.datetime):
+ style.num_format_str = "YYYY-MM-DD HH:MM:SS"
+ elif isinstance(val, datetime.date):
+ style.num_format_str = "YYYY-MM-DD"
+
+ if cell.mergestart is not None and cell.mergeend is not None:
+ wks.write_merge(startrow + cell.row,
+ startrow + cell.mergestart,
+ startcol + cell.col,
+ startcol + cell.mergeend,
+ val, style)
+ else:
+ wks.write(startrow + cell.row,
+ startcol + cell.col,
+ val, style)
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index b76d9ea1e6052..61456d6dbfe2e 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -3842,7 +3842,7 @@ def test_to_excel_from_excel(self):
# test roundtrip
self.frame.to_excel(path,'test1')
reader = ExcelFile(path)
- recons = reader.parse('test1', index_col=0)
+ recons = reader.parse('test1', index_col=0, has_index_labels=True)
assert_frame_equal(self.frame, recons)
self.frame.to_excel(path,'test1', index=False)
@@ -3851,19 +3851,19 @@ def test_to_excel_from_excel(self):
recons.index = self.frame.index
assert_frame_equal(self.frame, recons)
- self.frame.to_excel(path,'test1')
- reader = ExcelFile(path)
- recons = reader.parse('test1', index_col=0, skiprows=[1])
- assert_frame_equal(self.frame.ix[1:], recons)
+ # self.frame.to_excel(path,'test1')
+ # reader = ExcelFile(path)
+ # recons = reader.parse('test1', index_col=0, skiprows=[2], has_index_labels=True)
+ # assert_frame_equal(self.frame.ix[1:], recons)
self.frame.to_excel(path,'test1',na_rep='NA')
reader = ExcelFile(path)
- recons = reader.parse('test1', index_col=0, na_values=['NA'])
+ recons = reader.parse('test1', index_col=0, na_values=['NA'], has_index_labels=True)
assert_frame_equal(self.frame, recons)
self.mixed_frame.to_excel(path,'test1')
reader = ExcelFile(path)
- recons = reader.parse('test1', index_col=0)
+ recons = reader.parse('test1', index_col=0, has_index_labels=True)
assert_frame_equal(self.mixed_frame, recons)
self.tsframe.to_excel(path, 'test1')
@@ -3891,7 +3891,7 @@ def test_to_excel_from_excel(self):
self.tsframe.to_excel(writer,'test2')
writer.save()
reader = ExcelFile(path)
- recons = reader.parse('test1',index_col=0)
+ recons = reader.parse('test1',index_col=0, has_index_labels=True)
assert_frame_equal(self.frame, recons)
recons = reader.parse('test2',index_col=0)
assert_frame_equal(self.tsframe, recons)
@@ -3903,11 +3903,46 @@ def test_to_excel_from_excel(self):
col_aliases = Index(['AA', 'X', 'Y', 'Z'])
self.frame2.to_excel(path, 'test1', header=col_aliases)
reader = ExcelFile(path)
- rs = reader.parse('test1', index_col=0)
+ rs = reader.parse('test1', index_col=0, has_index_labels=True)
xp = self.frame2.copy()
xp.columns = col_aliases
assert_frame_equal(xp, rs)
+ # test index_label
+ frame = (DataFrame(np.random.randn(10,2)) >= 0)
+ frame.to_excel(path, 'test1', index_label=['test'])
+ reader = ExcelFile(path)
+ recons = reader.parse('test1', index_col=0, has_index_labels=True).astype(np.int64)
+ frame.index.names = ['test']
+ self.assertEqual(frame.index.names, recons.index.names)
+
+ frame = (DataFrame(np.random.randn(10,2)) >= 0)
+ frame.to_excel(path, 'test1', index_label=['test', 'dummy', 'dummy2'])
+ reader = ExcelFile(path)
+ recons = reader.parse('test1', index_col=0, has_index_labels=True).astype(np.int64)
+ frame.index.names = ['test']
+ self.assertEqual(frame.index.names, recons.index.names)
+
+ frame = (DataFrame(np.random.randn(10,2)) >= 0)
+ frame.to_excel(path, 'test1', index_label='test')
+ reader = ExcelFile(path)
+ recons = reader.parse('test1', index_col=0, has_index_labels=True).astype(np.int64)
+ frame.index.names = ['test']
+ self.assertEqual(frame.index.names, recons.index.names)
+
+ #test index_labels in same row as column names
+ self.frame.to_excel('/tmp/tests.xls', 'test1', cols=['A', 'B', 'C', 'D'], index=False)
+ #take 'A' and 'B' as indexes (they are in same row as cols 'C', 'D')
+ df = self.frame.copy()
+ df = df.set_index(['A', 'B'])
+
+
+ reader = ExcelFile('/tmp/tests.xls')
+ recons = reader.parse('test1', index_col=[0, 1])
+ assert_frame_equal(df, recons)
+
+
+
os.remove(path)
# datetime.date, not sure what to test here exactly
@@ -3971,7 +4006,7 @@ def test_to_excel_multiindex(self):
# round trip
frame.to_excel(path, 'test1')
reader = ExcelFile(path)
- df = reader.parse('test1', index_col=[0,1], parse_dates=False)
+ df = reader.parse('test1', index_col=[0,1], parse_dates=False, has_index_labels=True)
assert_frame_equal(frame, df)
self.assertEqual(frame.index.names, df.index.names)
self.frame.index = old_index # needed if setUP becomes a classmethod
@@ -3984,7 +4019,7 @@ def test_to_excel_multiindex(self):
tsframe.to_excel(path, 'test1', index_label = ['time','foo'])
reader = ExcelFile(path)
- recons = reader.parse('test1', index_col=[0,1])
+ recons = reader.parse('test1', index_col=[0,1], has_index_labels=True)
assert_frame_equal(tsframe, recons)
# infer index
@@ -3993,22 +4028,28 @@ def test_to_excel_multiindex(self):
recons = reader.parse('test1')
assert_frame_equal(tsframe, recons)
- # no index
- tsframe.index.names = ['first', 'second']
- tsframe.to_excel(path, 'test1')
- reader = ExcelFile(path)
- recons = reader.parse('test1')
- assert_almost_equal(tsframe.values,
- recons.ix[:, tsframe.columns].values)
- self.assertEqual(len(tsframe.columns) + 2, len(recons.columns))
-
- tsframe.index.names = [None, None]
# no index
- tsframe.to_excel(path, 'test1', index=False)
- reader = ExcelFile(path)
- recons = reader.parse('test1', index_col=None)
- assert_almost_equal(recons.values, self.tsframe.values)
+ # TODO: mention this does not make sense anymore
+ # with the new formatting, as we are not aligning colnames and index labels
+ # on the same row
+
+ # tsframe.index.names = ['first', 'second']
+ # tsframe.to_excel(path, 'test1')
+ # reader = ExcelFile(path)
+ # recons = reader.parse('test1')
+ # assert_almost_equal(tsframe.values,
+ # recons.ix[:, tsframe.columns].values)
+ # self.assertEqual(len(tsframe.columns) + 2, len(recons.columns))
+
+ # tsframe.index.names = [None, None]
+
+ # # no index
+ # tsframe.to_excel(path, 'test1', index=False)
+ # reader = ExcelFile(path)
+ # recons = reader.parse('test1', index_col=None)
+ # assert_almost_equal(recons.values, self.tsframe.values)
+
self.tsframe.index = old_index # needed if setUP becomes classmethod
# write a big DataFrame
@@ -4071,6 +4112,125 @@ def test_to_excel_unicode_filename(self):
assert_frame_equal(rs, xp)
os.remove(filename)
+ def test_to_excel_styleconverter(self):
+ from pandas.io.parsers import CellStyleConverter
+ try:
+ import xlwt
+ import openpyxl
+ except ImportError:
+ raise nose.SkipTest
+
+ hstyle = {"font": {"bold": True},
+ "borders": {"top": "thin",
+ "right": "thin",
+ "bottom": "thin",
+ "left": "thin"},
+ "alignment": {"horizontal": "center"}}
+ xls_style = CellStyleConverter.to_xls(hstyle)
+ self.assertTrue(xls_style.font.bold)
+ self.assertEquals(xlwt.Borders.THIN, xls_style.borders.top)
+ self.assertEquals(xlwt.Borders.THIN, xls_style.borders.right)
+ self.assertEquals(xlwt.Borders.THIN, xls_style.borders.bottom)
+ self.assertEquals(xlwt.Borders.THIN, xls_style.borders.left)
+ self.assertEquals(xlwt.Alignment.HORZ_CENTER, xls_style.alignment.horz)
+
+ xlsx_style = CellStyleConverter.to_xlsx(hstyle)
+ self.assertTrue(xlsx_style.font.bold)
+ self.assertEquals(openpyxl.style.Border.BORDER_THIN,
+ xlsx_style.borders.top.border_style)
+ self.assertEquals(openpyxl.style.Border.BORDER_THIN,
+ xlsx_style.borders.right.border_style)
+ self.assertEquals(openpyxl.style.Border.BORDER_THIN,
+ xlsx_style.borders.bottom.border_style)
+ self.assertEquals(openpyxl.style.Border.BORDER_THIN,
+ xlsx_style.borders.left.border_style)
+ self.assertEquals(openpyxl.style.Alignment.HORIZONTAL_CENTER,
+ xlsx_style.alignment.horizontal)
+
+ def test_to_excel_header_styling(self):
+
+ import StringIO
+ s = StringIO.StringIO(
+ """Date,ticker,type,value
+ 2001-01-01,x,close,12.2
+ 2001-01-01,x,open ,12.1
+ 2001-01-01,y,close,12.2
+ 2001-01-01,y,open ,12.1
+ 2001-02-01,x,close,12.2
+ 2001-02-01,x,open ,12.1
+ 2001-02-01,y,close,12.2
+ 2001-02-01,y,open ,12.1
+ 2001-03-01,x,close,12.2
+ 2001-03-01,x,open ,12.1
+ 2001-03-01,y,close,12.2
+ 2001-03-01,y,open ,12.1""")
+ df = read_csv(s, parse_dates=["Date"])
+ pdf = df.pivot_table(values="value", rows=["ticker"],
+ cols=["Date", "type"])
+
+ try:
+ import xlrd
+ import openpyxl
+ from openpyxl.cell import get_column_letter
+ except ImportError:
+ raise nose.SkipTest
+
+ filename = '__tmp__.xls'
+ pdf.to_excel(filename, 'test1')
+
+
+ wbk = xlrd.open_workbook(filename,
+ formatting_info=True)
+ self.assertEquals(["test1"], wbk.sheet_names())
+ ws = wbk.sheet_by_name('test1')
+ self.assertEquals([(0, 1, 5, 7), (0, 1, 3, 5), (0, 1, 1, 3)],
+ ws.merged_cells)
+ for i in range(0, 2):
+ for j in range(0, 7):
+ xfx = ws.cell_xf_index(0, 0)
+ cell_xf = wbk.xf_list[xfx]
+ font = wbk.font_list
+ self.assertEquals(1, font[cell_xf.font_index].bold)
+ self.assertEquals(1, cell_xf.border.top_line_style)
+ self.assertEquals(1, cell_xf.border.right_line_style)
+ self.assertEquals(1, cell_xf.border.bottom_line_style)
+ self.assertEquals(1, cell_xf.border.left_line_style)
+ self.assertEquals(2, cell_xf.alignment.hor_align)
+
+ os.remove(filename)
+ # test xlsx_styling
+ filename = '__tmp__.xlsx'
+ pdf.to_excel(filename, 'test1')
+
+ wbk = openpyxl.load_workbook(filename)
+ self.assertEquals(["test1"], wbk.get_sheet_names())
+ ws = wbk.get_sheet_by_name('test1')
+
+ xlsaddrs = ["%s2" % chr(i) for i in range(ord('A'), ord('H'))]
+ xlsaddrs += ["A%s" % i for i in range(1, 6)]
+ xlsaddrs += ["B1", "D1", "F1"]
+ for xlsaddr in xlsaddrs:
+ cell = ws.cell(xlsaddr)
+ self.assertTrue(cell.style.font.bold)
+ self.assertEquals(openpyxl.style.Border.BORDER_THIN,
+ cell.style.borders.top.border_style)
+ self.assertEquals(openpyxl.style.Border.BORDER_THIN,
+ cell.style.borders.right.border_style)
+ self.assertEquals(openpyxl.style.Border.BORDER_THIN,
+ cell.style.borders.bottom.border_style)
+ self.assertEquals(openpyxl.style.Border.BORDER_THIN,
+ cell.style.borders.left.border_style)
+ self.assertEquals(openpyxl.style.Alignment.HORIZONTAL_CENTER,
+ cell.style.alignment.horizontal)
+
+ mergedcells_addrs = ["C1", "E1", "G1"]
+ for maddr in mergedcells_addrs:
+ self.assertTrue(ws.cell(maddr).merged)
+
+ os.remove(filename)
+
+
+
def test_info(self):
io = StringIO()
self.frame.info(buf=io)
| Adds to DataFrame Excel export:
- MultiIndex support (merged cells, similar to the HTML formatter)
- borders
- bold headers
- ability to add a DataFrame to the same sheet (startrow, startcol)
http://cl.ly/image/2r102L0E1l23
Solves issue #2294
| https://api.github.com/repos/pandas-dev/pandas/pulls/2370 | 2012-11-27T20:23:26Z | 2012-11-29T19:32:13Z | 2012-11-29T19:32:13Z | 2014-06-16T03:26:35Z |
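The `style_to_xlwt` helper in the diff above flattens a nested style dict into an xlwt `easyxf` style string, as its docstring illustrates. A minimal standalone sketch of that flattening (function name and exact separators are illustrative, independent of xlwt itself):

```python
def style_to_xlwt(style_dict):
    """Flatten a nested style dict into an xlwt ``easyxf``-style string."""
    def fmt(value):
        # xlwt's easyxf grammar expects on/off rather than True/False
        if value is True:
            return "on"
        if value is False:
            return "off"
        return str(value)

    sections = []
    for section, attrs in style_dict.items():
        # each section becomes "name: field value, field value, ..."
        fields = ", ".join("%s %s" % (k, fmt(v)) for k, v in attrs.items())
        sections.append("%s: %s" % (section, fields))
    return "; ".join(sections)

hstyle = {"font": {"bold": True},
          "border": {"top": "thin", "right": "thin",
                     "bottom": "thin", "left": "thin"},
          "align": {"horiz": "center"}}
print(style_to_xlwt(hstyle))
# font: bold on; border: top thin, right thin, bottom thin, left thin; align: horiz center
```

The resulting string is what would be handed to `xlwt.easyxf` in the PR's `to_xls` path.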
ENH: partial date slicing for day, hour, and minute resolutions #2306 | diff --git a/pandas/src/datetime.pyx b/pandas/src/datetime.pyx
index 44660cd3bb682..bb5ed79cc4b5d 100644
--- a/pandas/src/datetime.pyx
+++ b/pandas/src/datetime.pyx
@@ -1603,3 +1603,100 @@ cpdef normalize_date(object dt):
return datetime(dt.year, dt.month, dt.day)
else:
raise TypeError('Unrecognized type: %s' % type(dt))
+
+cpdef resolution(ndarray[int64_t] stamps, tz=None):
+ cdef:
+ Py_ssize_t i, n = len(stamps)
+ pandas_datetimestruct dts
+ int reso = D_RESO, curr_reso
+
+ if tz is not None:
+ if isinstance(tz, basestring):
+ tz = pytz.timezone(tz)
+ return _reso_local(stamps, tz)
+ else:
+ for i in range(n):
+ if stamps[i] == NPY_NAT:
+ continue
+ pandas_datetime_to_datetimestruct(stamps[i], PANDAS_FR_ns, &dts)
+ curr_reso = _reso_stamp(&dts)
+ if curr_reso < reso:
+ reso = curr_reso
+ return reso
+
+US_RESO = 0
+S_RESO = 1
+T_RESO = 2
+H_RESO = 3
+D_RESO = 4
+
+cdef inline int _reso_stamp(pandas_datetimestruct *dts):
+ if dts.us != 0:
+ return US_RESO
+ elif dts.sec != 0:
+ return S_RESO
+ elif dts.min != 0:
+ return T_RESO
+ elif dts.hour != 0:
+ return H_RESO
+ return D_RESO
+
+cdef _reso_local(ndarray[int64_t] stamps, object tz):
+ cdef:
+ Py_ssize_t n = len(stamps)
+ int reso = D_RESO, curr_reso
+ ndarray[int64_t] trans, deltas, pos
+ pandas_datetimestruct dts
+
+ if _is_utc(tz):
+ for i in range(n):
+ if stamps[i] == NPY_NAT:
+ continue
+ pandas_datetime_to_datetimestruct(stamps[i], PANDAS_FR_ns, &dts)
+ curr_reso = _reso_stamp(&dts)
+ if curr_reso < reso:
+ reso = curr_reso
+ elif _is_tzlocal(tz):
+ for i in range(n):
+ if stamps[i] == NPY_NAT:
+ continue
+ pandas_datetime_to_datetimestruct(stamps[i], PANDAS_FR_ns,
+ &dts)
+ dt = datetime(dts.year, dts.month, dts.day, dts.hour,
+ dts.min, dts.sec, dts.us, tz)
+ delta = int(total_seconds(_get_utcoffset(tz, dt))) * 1000000000
+ pandas_datetime_to_datetimestruct(stamps[i] + delta,
+ PANDAS_FR_ns, &dts)
+ curr_reso = _reso_stamp(&dts)
+ if curr_reso < reso:
+ reso = curr_reso
+ else:
+ # Adjust datetime64 timestamp, recompute datetimestruct
+ trans = _get_transitions(tz)
+ deltas = _get_deltas(tz)
+ _pos = trans.searchsorted(stamps, side='right') - 1
+ if _pos.dtype != np.int64:
+ _pos = _pos.astype(np.int64)
+ pos = _pos
+
+ # statictzinfo
+ if not hasattr(tz, '_transition_info'):
+ for i in range(n):
+ if stamps[i] == NPY_NAT:
+ continue
+ pandas_datetime_to_datetimestruct(stamps[i] + deltas[0],
+ PANDAS_FR_ns, &dts)
+ curr_reso = _reso_stamp(&dts)
+ if curr_reso < reso:
+ reso = curr_reso
+ else:
+ for i in range(n):
+ if stamps[i] == NPY_NAT:
+ continue
+ pandas_datetime_to_datetimestruct(stamps[i] + deltas[pos[i]],
+ PANDAS_FR_ns, &dts)
+ curr_reso = _reso_stamp(&dts)
+ if curr_reso < reso:
+ reso = curr_reso
+
+ return reso
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index bc1770d58b0bc..a169992485ff6 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -22,6 +22,24 @@ class FreqGroup(object):
FR_MIN = 8000
FR_SEC = 9000
+class Resolution(object):
+
+ RESO_US = 0
+ RESO_SEC = 1
+ RESO_MIN = 2
+ RESO_HR = 3
+ RESO_DAY = 4
+
+ @classmethod
+ def get_str(cls, reso):
+ return {RESO_US : 'microsecond',
+ RESO_SEC : 'second',
+ RESO_MIN : 'minute',
+ RESO_HR : 'hour',
+ RESO_DAY : 'day'}.get(reso, 'day')
+
+def get_reso_string(reso):
+ return Resolution.get_str(reso)
def get_to_timestamp_base(base):
if base <= FreqGroup.FR_WK:
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index 6cbfbfa459308..c6eec268ce52b 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -8,7 +8,8 @@
from pandas.core.common import isnull
from pandas.core.index import Index, Int64Index
-from pandas.tseries.frequencies import infer_freq, to_offset, get_period_alias
+from pandas.tseries.frequencies import (infer_freq, to_offset, get_period_alias,
+ Resolution, get_reso_string)
from pandas.tseries.offsets import DateOffset, generate_range, Tick
from pandas.tseries.tools import parse_time_string, normalize_date
from pandas.util.decorators import cache_readonly
@@ -1006,6 +1007,23 @@ def _partial_date_slice(self, reso, parsed):
d = lib.monthrange(parsed.year, qe)[1] # at end of month
t1 = Timestamp(datetime(parsed.year, parsed.month, 1))
t2 = Timestamp(datetime(parsed.year, qe, d))
+ elif reso == 'day' and self._resolution < Resolution.RESO_DAY:
+ st = datetime(parsed.year, parsed.month, parsed.day)
+ t1 = Timestamp(st)
+ t2 = st + offsets.Day()
+ t2 = Timestamp(Timestamp(t2).value - 1)
+ elif (reso == 'hour' and
+ self._resolution < Resolution.RESO_HR):
+ st = datetime(parsed.year, parsed.month, parsed.day,
+ hour=parsed.hour)
+ t1 = Timestamp(st)
+ t2 = Timestamp(Timestamp(st + offsets.Hour()).value - 1)
+ elif (reso == 'minute' and
+ self._resolution < Resolution.RESO_MIN):
+ st = datetime(parsed.year, parsed.month, parsed.day,
+ hour=parsed.hour, minute=parsed.minute)
+ t1 = Timestamp(st)
+ t2 = Timestamp(Timestamp(st + offsets.Minute()).value - 1)
else:
raise KeyError
@@ -1221,6 +1239,18 @@ def is_normalized(self):
"""
return lib.dates_normalized(self.asi8, self.tz)
+ @cache_readonly
+ def resolution(self):
+ """
+ Returns day, hour, minute, second, or microsecond
+ """
+ reso = self._resolution
+ return get_reso_string(reso)
+
+ @cache_readonly
+ def _resolution(self):
+ return lib.resolution(self.asi8, self.tz)
+
def equals(self, other):
"""
Determines if two Index objects contain the same elements.
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index 85b3654bac70a..b1fa5d53895a0 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -1219,4 +1219,3 @@ def period_range(start=None, end=None, periods=None, freq='D', name=None):
"""
return PeriodIndex(start=start, end=end, periods=periods,
freq=freq, name=name)
-
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index ef35c44b53772..cce5093e2f46c 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -1833,6 +1833,48 @@ def test_partial_slice(self):
expected = s[:'20060228']
assert_series_equal(result, expected)
+ result = s['2005-1-1']
+ self.assert_(result == s.irow(0))
+
+ self.assertRaises(Exception, s.__getitem__, '2004-12-31')
+
+ def test_partial_slice_daily(self):
+ rng = DatetimeIndex(freq='H', start=datetime(2005,1,31), periods=500)
+ s = Series(np.arange(len(rng)), index=rng)
+
+ result = s['2005-1-31']
+ assert_series_equal(result, s.ix[:24])
+
+ self.assertRaises(Exception, s.__getitem__, '2004-12-31 00')
+
+ def test_partial_slice_hourly(self):
+ rng = DatetimeIndex(freq='T', start=datetime(2005,1,1,20,0,0),
+ periods=500)
+ s = Series(np.arange(len(rng)), index=rng)
+
+ result = s['2005-1-1']
+ assert_series_equal(result, s.ix[:60*4])
+
+ result = s['2005-1-1 20']
+ assert_series_equal(result, s.ix[:60])
+
+ self.assert_(s['2005-1-1 20:00'] == s.ix[0])
+ self.assertRaises(Exception, s.__getitem__, '2004-12-31 00:15')
+
+ def test_partial_slice_minutely(self):
+ rng = DatetimeIndex(freq='S', start=datetime(2005,1,1,23,59,0),
+ periods=500)
+ s = Series(np.arange(len(rng)), index=rng)
+
+ result = s['2005-1-1 23:59']
+ assert_series_equal(result, s.ix[:60])
+
+ result = s['2005-1-1']
+ assert_series_equal(result, s.ix[:60])
+
+ self.assert_(s['2005-1-1 23:59:00'] == s.ix[0])
+ self.assertRaises(Exception, s.__getitem__, '2004-12-31 00:00:00')
+
def test_partial_not_monotonic(self):
rng = date_range(datetime(2005,1,1), periods=20, freq='M')
ts = Series(np.arange(len(rng)), index=rng)
diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py
index befe3444d98bd..7c7ec3845aa54 100644
--- a/pandas/tseries/tools.py
+++ b/pandas/tseries/tools.py
@@ -9,7 +9,7 @@
try:
import dateutil
- from dateutil.parser import parse
+ from dateutil.parser import parse, DEFAULTPARSER
from dateutil.relativedelta import relativedelta
# raise exception if dateutil 2.0 install on 2.x platform
@@ -131,6 +131,7 @@ class DateParseError(ValueError):
qpat1 = re.compile(r'(\d)Q(\d\d)')
qpat2 = re.compile(r'(\d\d)Q(\d)')
ypat = re.compile(r'(\d\d\d\d)$')
+has_time = re.compile('(.+)([\s]|T)+(.+)')
def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None):
@@ -226,25 +227,61 @@ def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None):
yearfirst = print_config.date_yearfirst
try:
- parsed = parse(arg, dayfirst=dayfirst, yearfirst=yearfirst)
+ parsed, reso = dateutil_parse(arg, default, dayfirst=dayfirst,
+ yearfirst=yearfirst)
except Exception, e:
raise DateParseError(e)
if parsed is None:
raise DateParseError("Could not parse %s" % arg)
- repl = {}
- reso = 'year'
+ return parsed, parsed, reso # datetime, resolution
+
+def dateutil_parse(timestr, default,
+ ignoretz=False, tzinfos=None,
+ **kwargs):
+ """ lifted from dateutil to get resolution"""
+ res = DEFAULTPARSER._parse(timestr, **kwargs)
+
+ if res is None:
+ raise ValueError, "unknown string format"
+ repl = {}
for attr in ["year", "month", "day", "hour",
"minute", "second", "microsecond"]:
- value = getattr(parsed, attr)
- if value is not None and value != 0: # or attr in can_be_zero):
+ value = getattr(res, attr)
+ if value is not None:
repl[attr] = value
reso = attr
- ret = default.replace(**repl)
- return ret, parsed, reso # datetime, resolution
+ if reso == 'microsecond' and repl['microsecond'] == 0:
+ reso = 'second'
+ ret = default.replace(**repl)
+ if res.weekday is not None and not res.day:
+ ret = ret+relativedelta.relativedelta(weekday=res.weekday)
+ if not ignoretz:
+ if callable(tzinfos) or tzinfos and res.tzname in tzinfos:
+ if callable(tzinfos):
+ tzdata = tzinfos(res.tzname, res.tzoffset)
+ else:
+ tzdata = tzinfos.get(res.tzname)
+ if isinstance(tzdata, datetime.tzinfo):
+ tzinfo = tzdata
+ elif isinstance(tzdata, basestring):
+ tzinfo = tz.tzstr(tzdata)
+ elif isinstance(tzdata, int):
+ tzinfo = tz.tzoffset(res.tzname, tzdata)
+ else:
+ raise ValueError, "offset must be tzinfo subclass, " \
+ "tz string, or int offset"
+ ret = ret.replace(tzinfo=tzinfo)
+ elif res.tzname and res.tzname in time.tzname:
+ ret = ret.replace(tzinfo=tz.tzlocal())
+ elif res.tzoffset == 0:
+ ret = ret.replace(tzinfo=tz.tzutc())
+ elif res.tzoffset:
+ ret = ret.replace(tzinfo=tz.tzoffset(res.tzname, res.tzoffset))
+ return ret, reso
def _attempt_monthly(val):
pats = ['%Y-%m', '%m-%Y', '%b %Y', '%b-%Y']
| https://api.github.com/repos/pandas-dev/pandas/pulls/2369 | 2012-11-27T19:03:14Z | 2012-12-02T17:10:49Z | 2012-12-02T17:10:49Z | 2012-12-02T17:10:49Z | |
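The `_reso_stamp` logic in the diff above classifies a timestamp by its finest nonzero component, and the index resolution is the minimum over all stamps. A pure-Python sketch of the same rule (constants mirror the diff's `US_RESO`..`D_RESO`; function names are illustrative):

```python
from datetime import datetime

# resolution codes, finest (0) to coarsest (4), mirroring the diff
US_RESO, S_RESO, T_RESO, H_RESO, D_RESO = range(5)

def reso_stamp(ts):
    """Return the finest nonzero time component of a timestamp."""
    if ts.microsecond:
        return US_RESO
    if ts.second:
        return S_RESO
    if ts.minute:
        return T_RESO
    if ts.hour:
        return H_RESO
    return D_RESO

def index_resolution(stamps):
    """The index resolution is the finest resolution of any stamp."""
    return min(reso_stamp(t) for t in stamps)

stamps = [datetime(2005, 1, 1), datetime(2005, 1, 1, 20)]
print(index_resolution(stamps))  # 3 (H_RESO): hourly data present
```

This is what lets `_partial_date_slice` decide that, e.g., a `'2005-1-1'` key against an hourly index should slice a whole day rather than look up a single label.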
BLD: temporary workaround for travis numpy/py3 woes | diff --git a/.travis.yml b/.travis.yml
index e90a83257e210..3cfb4af167038 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -7,7 +7,17 @@ python:
- 3.2
install:
- - pip install --use-mirrors cython numpy nose pytz python-dateutil
+ - export PYTHONIOENCODING=utf8 # activate venv 1.8.4 "detach" fix
+ - virtualenv --version
+ - whoami
+ - pwd
+ # install 1.7.0b2 for 3.3, and pull a version of numpy git master
+ # with a alternate fix for detach bug as a temporary workaround
+ # for the others.
+ - "if [ $TRAVIS_PYTHON_VERSION == '3.3' ]; then pip uninstall numpy; pip install http://downloads.sourceforge.net/project/numpy/NumPy/1.7.0b2/numpy-1.7.0b2.tar.gz; fi"
+ - "if [ $TRAVIS_PYTHON_VERSION == '3.2' ] || [ $TRAVIS_PYTHON_VERSION == '3.1' ]; then pip install --use-mirrors git+git://github.com/numpy/numpy.git@089bfa5865cd39e2b40099755e8563d8f0d04f5f#egg=numpy; fi"
+ - "if [ ${TRAVIS_PYTHON_VERSION:0:1} == '2' ]; then pip install numpy; fi" # should be nop if pre-installed
+ - pip install --use-mirrors cython nose pytz python-dateutil
script:
- python setup.py build_ext install
| Note that py3 is now tested against recent numpy git, not PyPI, just until
the Travis people sort things out.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2365 | 2012-11-27T06:36:31Z | 2012-11-27T06:51:37Z | 2012-11-27T06:51:37Z | 2013-03-26T12:52:12Z |
doc updates for whatsnew v0.10.0 | diff --git a/doc/source/groupby.rst b/doc/source/groupby.rst
index 831fafe5e531d..4a37222b6dd4c 100644
--- a/doc/source/groupby.rst
+++ b/doc/source/groupby.rst
@@ -560,6 +560,15 @@ The dimension of the returned result can also change:
In [11]: grouped.apply(f)
+``apply`` on a Series can operate on a returned value from the applied function, that is itself a series, and possibly upcast the result to a DataFrame
+
+.. ipython:: python
+
+ def f(x):
return Series([ x, x**2 ], index = ['x', 'x^2'])
+ s = Series(np.random.rand(5))
+ s
+ s.apply(f)
Other useful features
---------------------
diff --git a/doc/source/v0.10.0.txt b/doc/source/v0.10.0.txt
new file mode 100644
index 0000000000000..8a5652523dfda
--- /dev/null
+++ b/doc/source/v0.10.0.txt
@@ -0,0 +1,37 @@
+.. _whatsnew_0100:
+
+v0.10.0 (December ??, 2012)
+---------------------------
+
+This is a major release from 0.9.1 and includes several new features and
+enhancements along with a large number of bug fixes.
+
+New features
+~~~~~~~~~~~~
+
+
+API changes
+~~~~~~~~~~~
+
+ - ``Series.apply`` will now operate on a returned value from the applied function, that is itself a series, and possibly upcast the result to a DataFrame
+
+ .. ipython:: python
+
+ def f(x):
return Series([ x, x**2 ], index = ['x', 'x^2'])
+ s = Series(np.random.rand(5))
+ s
+ s.apply(f)
+
+ This is conceptually similar to the following.
+
+ .. ipython:: python
+
+ concat([ f(y) for x, y in s.iteritems() ], axis=1).T
+
+
+See the `full release notes
+<https://github.com/pydata/pandas/blob/master/RELEASE.rst>`__ or issue tracker
+on GitHub for a complete list.
+
+.. _GH2316: https://github.com/pydata/pandas/issues/2316
diff --git a/doc/source/whatsnew.rst b/doc/source/whatsnew.rst
index 163983e7d933d..82ed64680f1eb 100644
--- a/doc/source/whatsnew.rst
+++ b/doc/source/whatsnew.rst
@@ -16,6 +16,8 @@ What's New
These are new features and improvements of note in each release.
+.. include:: v0.10.0.txt
+
.. include:: v0.9.1.txt
.. include:: v0.9.0.txt
| added whatsnew for 0.10.0
added new apply functionality to whatsnew and flexible apply in groupby
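A minimal runnable sketch of the behavior being documented (the column label `'x^2'` and the sample values are illustrative, not from the PR):

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0])
# each applied call returns a Series, so the overall result is
# upcast from a Series of Series into a DataFrame
result = s.apply(lambda x: pd.Series([x, x ** 2], index=['x', 'x^2']))
print(type(result).__name__)   # DataFrame
print(result['x^2'].tolist())  # [1.0, 4.0, 9.0]
```

This is conceptually the same as concatenating `f(y)` for each element and transposing, as the whatsnew entry notes.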
| https://api.github.com/repos/pandas-dev/pandas/pulls/2364 | 2012-11-27T03:11:11Z | 2012-11-27T22:25:41Z | null | 2014-07-08T19:39:42Z |
added an example of boolean selection using OR | diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index c2ef0d74ced53..c5dedcca3568f 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -189,6 +189,7 @@ Using a boolean vector to index a Series works exactly as in a numpy ndarray:
s[s > 0]
s[(s < 0) & (s > -0.5)]
+ s[(s < -1) | (s > 1 )]
You may select rows from a DataFrame using a boolean vector the same length as
the DataFrame's index (for example, something derived from one of the columns
| As a follow-up to this [SO post](http://stackoverflow.com/questions/13572576/using-or-to-select-data-in-pandas/13572798#13572798) I thought it might be nice to have an example in the docs
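A self-contained version of the example being added (values chosen so both sides of the OR fire):

```python
import pandas as pd

s = pd.Series([-2.0, -0.3, 0.0, 0.5, 1.5])
# parentheses are required: | binds tighter than the comparisons
print(s[(s < -1) | (s > 1)].tolist())  # [-2.0, 1.5]
```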
| https://api.github.com/repos/pandas-dev/pandas/pulls/2362 | 2012-11-26T21:26:08Z | 2012-11-27T00:02:13Z | null | 2014-06-27T22:56:15Z |
endianess fixes for #2318 | diff --git a/pandas/stats/tests/test_moments.py b/pandas/stats/tests/test_moments.py
index abb160fc0d00b..b421d1083d6cc 100644
--- a/pandas/stats/tests/test_moments.py
+++ b/pandas/stats/tests/test_moments.py
@@ -171,6 +171,8 @@ def test_fperr_robustness(self):
data = '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x1a@\xaa\xaa\xaa\xaa\xaa\xaa\x02@8\x8e\xe38\x8e\xe3\xe8?z\t\xed%\xb4\x97\xd0?\xa2\x0c<\xdd\x9a\x1f\xb6?\x82\xbb\xfa&y\x7f\x9d?\xac\'\xa7\xc4P\xaa\x83?\x90\xdf\xde\xb0k8j?`\xea\xe9u\xf2zQ?*\xe37\x9d\x98N7?\xe2.\xf5&v\x13\x1f?\xec\xc9\xf8\x19\xa4\xb7\x04?\x90b\xf6w\x85\x9f\xeb>\xb5A\xa4\xfaXj\xd2>F\x02\xdb\xf8\xcb\x8d\xb8>.\xac<\xfb\x87^\xa0>\xe8:\xa6\xf9_\xd3\x85>\xfb?\xe2cUU\xfd?\xfc\x7fA\xed8\x8e\xe3?\xa5\xaa\xac\x91\xf6\x12\xca?n\x1cs\xb6\xf9a\xb1?\xe8%D\xf3L-\x97?5\xddZD\x11\xe7~?#>\xe7\x82\x0b\x9ad?\xd9R4Y\x0fxK?;7x;\nP2?N\xf4JO\xb8j\x18?4\xf81\x8a%G\x00?\x9a\xf5\x97\r2\xb4\xe5>\xcd\x9c\xca\xbcB\xf0\xcc>3\x13\x87(\xd7J\xb3>\x99\x19\xb4\xe0\x1e\xb9\x99>ff\xcd\x95\x14&\x81>\x88\x88\xbc\xc7p\xddf>`\x0b\xa6_\x96|N>@\xb2n\xea\x0eS4>U\x98\x938i\x19\x1b>\x8eeb\xd0\xf0\x10\x02>\xbd\xdc-k\x96\x16\xe8=(\x93\x1e\xf2\x0e\x0f\xd0=\xe0n\xd3Bii\xb5=*\xe9\x19Y\x8c\x8c\x9c=\xc6\xf0\xbb\x90]\x08\x83=]\x96\xfa\xc0|`i=>d\xfc\xd5\xfd\xeaP=R0\xfb\xc7\xa7\x8e6=\xc2\x95\xf9_\x8a\x13\x1e=\xd6c\xa6\xea\x06\r\x04=r\xda\xdd8\t\xbc\xea<\xf6\xe6\x93\xd0\xb0\xd2\xd1<\x9d\xdeok\x96\xc3\xb7<&~\xea9s\xaf\x9f<UUUUUU\x13@q\x1c\xc7q\x1c\xc7\xf9?\xf6\x12\xdaKh/\xe1?\xf2\xc3"e\xe0\xe9\xc6?\xed\xaf\x831+\x8d\xae?\xf3\x1f\xad\xcb\x1c^\x94?\x15\x1e\xdd\xbd>\xb8\x02@\xc6\xd2&\xfd\xa8\xf5\xe8?\xd9\xe1\x19\xfe\xc5\xa3\xd0?v\x82"\xa8\xb2/\xb6?\x9dX\x835\xee\x94\x9d?h\x90W\xce\x9e\xb8\x83?\x8a\xc0th~Kj?\\\x80\xf8\x9a\xa9\x87Q?%\xab\xa0\xce\x8c_7?1\xe4\x80\x13\x11*\x1f? \x98\x00\r\xb6\xc6\x04?\x80u\xabf\x9d\xb3\xeb>UNrD\xbew\xd2>\x1c\x13C[\xa8\x9f\xb8>\x12b\xd7<pj\xa0>m-\x1fQ@\xe3\x85>\xe6\x91)l\x00/m>Da\xc6\xf2\xaatS>\x05\xd7]\xee\xe3\xf09>'
arr = np.frombuffer(data, dtype='<f8')
+ if sys.byteorder != "little":
+ arr = arr.byteswap().newbyteorder()
result = mom.rolling_sum(arr, 2)
self.assertTrue((result[1:] >= 0).all())
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index c88360b7f46f1..d1b559c4f63a5 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -1938,7 +1938,7 @@ def test_from_M8_structured(self):
dates = [ (datetime(2012, 9, 9, 0, 0),
datetime(2012, 9, 8, 15, 10))]
arr = np.array(dates,
- dtype=[('Date', '<M8[us]'), ('Forecasting', '<M8[us]')])
+ dtype=[('Date', 'M8[us]'), ('Forecasting', 'M8[us]')])
df = DataFrame(arr)
self.assertEqual(df['Date'][0], dates[0][0])
| pending fix verification from @juliantaylor
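The fix guards an explicitly little-endian test buffer on big-endian hosts; roughly (note `ndarray.newbyteorder` is from this NumPy era; newer NumPy spells it `arr.view(arr.dtype.newbyteorder())`):

```python
import sys
import numpy as np

# bytes laid out explicitly as little-endian float64
data = np.arange(4, dtype='<f8').tobytes()
arr = np.frombuffer(data, dtype='<f8')
# on big-endian machines, swap to native order so arithmetic sees real values
if sys.byteorder != 'little':
    arr = arr.byteswap().newbyteorder()
print(arr.tolist())  # [0.0, 1.0, 2.0, 3.0]
```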
| https://api.github.com/repos/pandas-dev/pandas/pulls/2359 | 2012-11-25T22:55:40Z | 2012-11-26T22:58:59Z | null | 2014-06-28T22:42:58Z |
BUG: hash random. in 3.3 exposed reliance on dict traversal order. #2331 | diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index 04c3f80df669a..91e0bf8a7aa69 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -10,6 +10,7 @@
from pandas.core.series import Series
from pandas.core.panel import Panel
from pandas.util.decorators import cache_readonly, Appender
+from pandas.util.compat import OrderedDict
import pandas.core.algorithms as algos
import pandas.core.common as com
import pandas.lib as lib
@@ -1525,7 +1526,7 @@ def aggregate(self, arg, *args, **kwargs):
if isinstance(arg, basestring):
return getattr(self, arg)(*args, **kwargs)
- result = {}
+ result = OrderedDict()
if isinstance(arg, dict):
if self.axis != 0: # pragma: no cover
raise ValueError('Can only pass dict with axis=0')
@@ -1533,7 +1534,7 @@ def aggregate(self, arg, *args, **kwargs):
obj = self._obj_with_exclusions
if any(isinstance(x, (list, tuple, dict)) for x in arg.values()):
- new_arg = {}
+ new_arg = OrderedDict()
for k, v in arg.iteritems():
if not isinstance(v, (tuple, list, dict)):
new_arg[k] = [v]
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 542e5ee964362..eade7d357dfad 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -793,7 +793,8 @@ def test_dict_entries(self):
df = DataFrame({'A': [{'a':1, 'b':2}]})
val = df.to_string()
- self.assertTrue("{'a': 1, 'b': 2}" in val)
+ self.assertTrue("'a': 1" in val)
+ self.assertTrue("'b': 2" in val)
def test_to_latex(self):
# it works!
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index 8361bae1f890b..c948fc28ca333 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -1859,6 +1859,7 @@ def test_agg_multiple_functions_too_many_lambdas(self):
def test_more_flexible_frame_multi_function(self):
from pandas import concat
+ from pandas.util.compat import OrderedDict
grouped = self.df.groupby('A')
@@ -1868,8 +1869,8 @@ def test_more_flexible_frame_multi_function(self):
expected = concat([exmean, exstd], keys=['mean', 'std'], axis=1)
expected = expected.swaplevel(0, 1, axis=1).sortlevel(0, axis=1)
- result = grouped.aggregate({'C' : [np.mean, np.std],
- 'D' : [np.mean, np.std]})
+ d=OrderedDict([['C',[np.mean, np.std]],['D',[np.mean, np.std]]])
+ result = grouped.aggregate(d)
assert_frame_equal(result, expected)
@@ -1884,10 +1885,12 @@ def test_more_flexible_frame_multi_function(self):
def foo(x): return np.mean(x)
def bar(x): return np.std(x, ddof=1)
result = grouped.aggregate({'C' : np.mean,
- 'D' : {'foo': np.mean,
- 'bar': np.std}})
+ 'D' : OrderedDict([['foo', np.mean],
+ ['bar', np.std]])})
+
expected = grouped.aggregate({'C' : [np.mean],
'D' : [foo, bar]})
+
assert_frame_equal(result, expected)
def test_multi_function_flexible_mix(self):
@@ -1896,7 +1899,7 @@ def test_multi_function_flexible_mix(self):
grouped = self.df.groupby('A')
result = grouped.aggregate({'C' : {'foo' : 'mean',
- 'bar' : 'std'},
+ 'bar' : 'std'},
'D' : 'sum'})
result2 = grouped.aggregate({'C' : {'foo' : 'mean',
'bar' : 'std'},
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index 82b3a0fa7115e..bc1770d58b0bc 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -339,7 +339,10 @@ def get_period_alias(offset_str):
_offset_map[_name] = offsets.WeekOfMonth(week=_iweek, weekday=_i)
_rule_aliases[_name.replace('-', '@')] = _name
-_legacy_reverse_map = dict((v, k) for k, v in _rule_aliases.iteritems())
+# Note that _rule_aliases is not 1:1 (d[BA]==d[A@DEC]), and so traversal
+# order matters when constructing an inverse. we pick one. #2331
+_legacy_reverse_map = dict((v, k) for k, v in
+ reversed(sorted(_rule_aliases.iteritems())))
# for helping out with pretty-printing and name-lookups
diff --git a/pandas/util/compat.py b/pandas/util/compat.py
index 894f94d11a8b8..613f6a06d3f30 100644
--- a/pandas/util/compat.py
+++ b/pandas/util/compat.py
@@ -12,3 +12,269 @@ def product(*args, **kwds):
result = [x + [y] for x in result for y in pool]
for prod in result:
yield tuple(prod)
+
+
+# OrderedDict Shim from Raymond Hettinger, python core dev
+# http://code.activestate.com/recipes/576693-ordered-dictionary-for-py24/
+# here to support versions before 2.6
+import sys
+try:
+ from thread import get_ident as _get_ident
+except ImportError:
+ from dummy_thread import get_ident as _get_ident
+
+try:
+ from _abcoll import KeysView, ValuesView, ItemsView
+except ImportError:
+ pass
+
+
+class _OrderedDict(dict):
+ 'Dictionary that remembers insertion order'
+ # An inherited dict maps keys to values.
+ # The inherited dict provides __getitem__, __len__, __contains__, and get.
+ # The remaining methods are order-aware.
+ # Big-O running times for all methods are the same as for regular dictionaries.
+
+ # The internal self.__map dictionary maps keys to links in a doubly linked list.
+ # The circular doubly linked list starts and ends with a sentinel element.
+ # The sentinel element never gets deleted (this simplifies the algorithm).
+ # Each link is stored as a list of length three: [PREV, NEXT, KEY].
+
+ def __init__(self, *args, **kwds):
+ '''Initialize an ordered dictionary. Signature is the same as for
+ regular dictionaries, but keyword arguments are not recommended
+ because their insertion order is arbitrary.
+
+ '''
+ if len(args) > 1:
+ raise TypeError('expected at most 1 arguments, got %d' % len(args))
+ try:
+ self.__root
+ except AttributeError:
+ self.__root = root = [] # sentinel node
+ root[:] = [root, root, None]
+ self.__map = {}
+ self.__update(*args, **kwds)
+
+ def __setitem__(self, key, value, dict_setitem=dict.__setitem__):
+ 'od.__setitem__(i, y) <==> od[i]=y'
+ # Setting a new item creates a new link which goes at the end of the linked
+ # list, and the inherited dictionary is updated with the new key/value pair.
+ if key not in self:
+ root = self.__root
+ last = root[0]
+ last[1] = root[0] = self.__map[key] = [last, root, key]
+ dict_setitem(self, key, value)
+
+ def __delitem__(self, key, dict_delitem=dict.__delitem__):
+ 'od.__delitem__(y) <==> del od[y]'
+ # Deleting an existing item uses self.__map to find the link which is
+ # then removed by updating the links in the predecessor and successor nodes.
+ dict_delitem(self, key)
+ link_prev, link_next, key = self.__map.pop(key)
+ link_prev[1] = link_next
+ link_next[0] = link_prev
+
+ def __iter__(self):
+ 'od.__iter__() <==> iter(od)'
+ root = self.__root
+ curr = root[1]
+ while curr is not root:
+ yield curr[2]
+ curr = curr[1]
+
+ def __reversed__(self):
+ 'od.__reversed__() <==> reversed(od)'
+ root = self.__root
+ curr = root[0]
+ while curr is not root:
+ yield curr[2]
+ curr = curr[0]
+
+ def clear(self):
+ 'od.clear() -> None. Remove all items from od.'
+ try:
+ for node in self.__map.itervalues():
+ del node[:]
+ root = self.__root
+ root[:] = [root, root, None]
+ self.__map.clear()
+ except AttributeError:
+ pass
+ dict.clear(self)
+
+ def popitem(self, last=True):
+ '''od.popitem() -> (k, v), return and remove a (key, value) pair.
+ Pairs are returned in LIFO order if last is true or FIFO order if false.
+
+ '''
+ if not self:
+ raise KeyError('dictionary is empty')
+ root = self.__root
+ if last:
+ link = root[0]
+ link_prev = link[0]
+ link_prev[1] = root
+ root[0] = link_prev
+ else:
+ link = root[1]
+ link_next = link[1]
+ root[1] = link_next
+ link_next[0] = root
+ key = link[2]
+ del self.__map[key]
+ value = dict.pop(self, key)
+ return key, value
+
+ # -- the following methods do not depend on the internal structure --
+
+ def keys(self):
+ 'od.keys() -> list of keys in od'
+ return list(self)
+
+ def values(self):
+ 'od.values() -> list of values in od'
+ return [self[key] for key in self]
+
+ def items(self):
+ 'od.items() -> list of (key, value) pairs in od'
+ return [(key, self[key]) for key in self]
+
+ def iterkeys(self):
+ 'od.iterkeys() -> an iterator over the keys in od'
+ return iter(self)
+
+ def itervalues(self):
+ 'od.itervalues -> an iterator over the values in od'
+ for k in self:
+ yield self[k]
+
+ def iteritems(self):
+ 'od.iteritems -> an iterator over the (key, value) items in od'
+ for k in self:
+ yield (k, self[k])
+
+ def update(*args, **kwds):
+ '''od.update(E, **F) -> None. Update od from dict/iterable E and F.
+
+ If E is a dict instance, does: for k in E: od[k] = E[k]
+ If E has a .keys() method, does: for k in E.keys(): od[k] = E[k]
+ Or if E is an iterable of items, does: for k, v in E: od[k] = v
+ In either case, this is followed by: for k, v in F.items(): od[k] = v
+
+ '''
+ if len(args) > 2:
+ raise TypeError('update() takes at most 2 positional '
+ 'arguments (%d given)' % (len(args),))
+ elif not args:
+ raise TypeError('update() takes at least 1 argument (0 given)')
+ self = args[0]
+ # Make progressively weaker assumptions about "other"
+ other = ()
+ if len(args) == 2:
+ other = args[1]
+ if isinstance(other, dict):
+ for key in other:
+ self[key] = other[key]
+ elif hasattr(other, 'keys'):
+ for key in other.keys():
+ self[key] = other[key]
+ else:
+ for key, value in other:
+ self[key] = value
+ for key, value in kwds.items():
+ self[key] = value
+
+ __update = update # let subclasses override update without breaking __init__
+
+ __marker = object()
+
+ def pop(self, key, default=__marker):
+ '''od.pop(k[,d]) -> v, remove specified key and return the corresponding value.
+ If key is not found, d is returned if given, otherwise KeyError is raised.
+
+ '''
+ if key in self:
+ result = self[key]
+ del self[key]
+ return result
+ if default is self.__marker:
+ raise KeyError(key)
+ return default
+
+ def setdefault(self, key, default=None):
+ 'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od'
+ if key in self:
+ return self[key]
+ self[key] = default
+ return default
+
+ def __repr__(self, _repr_running={}):
+ 'od.__repr__() <==> repr(od)'
+ call_key = id(self), _get_ident()
+ if call_key in _repr_running:
+ return '...'
+ _repr_running[call_key] = 1
+ try:
+ if not self:
+ return '%s()' % (self.__class__.__name__,)
+ return '%s(%r)' % (self.__class__.__name__, self.items())
+ finally:
+ del _repr_running[call_key]
+
+ def __reduce__(self):
+ 'Return state information for pickling'
+ items = [[k, self[k]] for k in self]
+ inst_dict = vars(self).copy()
+ for k in vars(OrderedDict()):
+ inst_dict.pop(k, None)
+ if inst_dict:
+ return (self.__class__, (items,), inst_dict)
+ return self.__class__, (items,)
+
+ def copy(self):
+ 'od.copy() -> a shallow copy of od'
+ return self.__class__(self)
+
+ @classmethod
+ def fromkeys(cls, iterable, value=None):
+ '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S
+ and values equal to v (which defaults to None).
+
+ '''
+ d = cls()
+ for key in iterable:
+ d[key] = value
+ return d
+
+ def __eq__(self, other):
+ '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive
+ while comparison to a regular mapping is order-insensitive.
+
+ '''
+ if isinstance(other, OrderedDict):
+ return len(self)==len(other) and self.items() == other.items()
+ return dict.__eq__(self, other)
+
+ def __ne__(self, other):
+ return not self == other
+
+ # -- the following methods are only used in Python 2.7 --
+
+ def viewkeys(self):
+ "od.viewkeys() -> a set-like object providing a view on od's keys"
+ return KeysView(self)
+
+ def viewvalues(self):
+ "od.viewvalues() -> an object providing a view on od's values"
+ return ValuesView(self)
+
+ def viewitems(self):
+ "od.viewitems() -> a set-like object providing a view on od's items"
+ return ItemsView(self)
+
+if sys.version_info[:2] < (2,7):
+ OrderedDict=_OrderedDict
+else:
+ from collections import OrderedDict
| fixes all failing tests mentioned in #2331, perhaps others lurking.
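The frequencies fix can be sketched in isolation: when the alias map is not 1:1, the inverse depends on traversal order, so sorting before inverting makes the surviving key deterministic (the alias names below are illustrative):

```python
# two aliases map to the same value, so the inverse mapping is ambiguous
rule_aliases = {'BA': 'A@DEC', 'A': 'A@DEC'}

# iterating in reversed-sorted order means the alphabetically-first key
# is processed last and deterministically wins
legacy_reverse = dict((v, k) for k, v in reversed(sorted(rule_aliases.items())))
print(legacy_reverse)  # {'A@DEC': 'A'}
```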
| https://api.github.com/repos/pandas-dev/pandas/pulls/2358 | 2012-11-25T22:26:35Z | 2012-11-27T02:34:22Z | null | 2014-06-19T00:06:44Z |
CLN: more test_perf cleanups | diff --git a/vb_suite/test_perf.py b/vb_suite/test_perf.py
index 0d11f403b7651..85a21382f1f44 100755
--- a/vb_suite/test_perf.py
+++ b/vb_suite/test_perf.py
@@ -18,14 +18,11 @@
---
These are the steps taken:
1) create a temp directory into which vbench will clone the temporary repo.
-2) parse the Git tree to obtain metadata, and determine the HEAD.
-3) instantiate a vbench runner, using the local repo as the source repo.
-4) If results for the BASELINE_COMMIT aren't already in the db, have vbench
-do a run for it and store the results.
-5) perform a vbench run for HEAD and store the results.
-6) pull the results for both commits from the db. use pandas to align
+2) instantiate a vbench runner, using the local repo as the source repo.
+3) perform a vbench run for the baseline commit, then the target commit.
+4) pull the results for both commits from the db. use pandas to align
everything and calculate a ratio for the timing information.
-7) print the results to the log file and to stdout.
+5) print the results to the log file and to stdout.
"""
@@ -33,11 +30,10 @@
import os
import argparse
import tempfile
-
-from pandas import DataFrame
+import time
DEFAULT_MIN_DURATION = 0.01
-BASELINE_COMMIT = 'bdbca8e3dc' # 9,1 + regression fix # TODO: detect upstream/master
+BASELINE_COMMIT = '2149c50' # 0.9.1 + regression fix + vb fixes # TODO: detect upstream/master
parser = argparse.ArgumentParser(description='Use vbench to generate a report comparing performance between two commits.')
parser.add_argument('-a', '--auto',
@@ -61,6 +57,7 @@
args = parser.parse_args()
def get_results_df(db,rev):
+ from pandas import DataFrame
"""Takes a git commit hash and returns a Dataframe of benchmark results
"""
bench = DataFrame(db.get_benchmarks())
@@ -76,6 +73,7 @@ def prprint(s):
print("*** %s"%s)
def main():
+ from pandas import DataFrame
from vbench.api import BenchmarkRunner
from vbench.db import BenchmarkDB
from suite import REPO_PATH, BUILD, DB_PATH, PREPARE, dependencies, benchmarks
@@ -98,14 +96,9 @@ def main():
try:
logfile = open(args.log_file, 'w')
- prprint( "Processing Repo at '%s'..." % REPO_PATH)
-
- # get hashes of baseline and current head
-
prprint( "Opening DB at '%s'...\n" % DB_PATH)
db = BenchmarkDB(DB_PATH)
-
prprint("Initializing Runner...")
runner = BenchmarkRunner(benchmarks, REPO_PATH, REPO_PATH, BUILD, DB_PATH,
TMP_DIR, PREPARE, always_clean=True,
@@ -114,8 +107,8 @@ def main():
repo = runner.repo #(steal the parsed git repo used by runner)
- # ARGH. reparse the repo, not discarding any commits,
- # and overwrite the previous parse results
+ # ARGH. reparse the repo, without discarding any commits,
+ # then overwrite the previous parse results
#prprint ("Slaughtering kittens..." )
(repo.shas, repo.messages,
repo.timestamps, repo.authors) = _parse_commit_log(REPO_PATH)
@@ -126,7 +119,6 @@ def main():
prprint('Target [%s] : %s\n' % (h_head, repo.messages.get(h_head,"")))
prprint('Baseline [%s] : %s\n' % (h_baseline,repo.messages.get(h_baseline,"")))
-
prprint ("removing any previous measurements for the commits." )
db.delete_rev_results(h_baseline)
db.delete_rev_results(h_head)
@@ -152,7 +144,9 @@ def main():
totals = totals.ix[totals.t_head > args.min_duration] # ignore below threshold
totals = totals.dropna().sort("ratio").set_index('name') # sort in ascending order
- s = "\n\nResults:\n" + totals.to_string(float_format=lambda x: "%0.4f" %x) + "\n\n"
+ s = "\n\nResults:\n"
+ s += totals.to_string(float_format=lambda x: "{:4.4f}".format(x).rjust(10))
+ s += "\n\n"
s += "Columns: test_name | target_duration [ms] | baseline_duration [ms] | ratio\n\n"
+ s += "- a Ratio of 1.30 means the target commit is 30% slower than the baseline.\n\n"
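The new formatter pads each ratio to a fixed width; a quick check of what it produces:

```python
# same format as the test_perf change: 4 decimal places, right-justified to 10 chars
fmt = lambda x: "{:4.4f}".format(x).rjust(10)
print(repr(fmt(1.4868)))   # '    1.4868'
print(repr(fmt(1153.19)))  # ' 1153.1900'
```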
| https://api.github.com/repos/pandas-dev/pandas/pulls/2353 | 2012-11-25T12:31:20Z | 2012-11-26T23:57:52Z | null | 2014-07-20T16:06:46Z | |
Support for datetime columns with rpy2 | diff --git a/pandas/rpy/common.py b/pandas/rpy/common.py
index 481714b94386c..d2a9eaefffd1b 100644
--- a/pandas/rpy/common.py
+++ b/pandas/rpy/common.py
@@ -197,6 +197,40 @@ def convert_robj(obj, use_pandas=True):
raise Exception('Do not know what to do with %s object' % type(obj))
+
+def convert_to_r_posixct(obj):
+ """
+ Convert DatetimeIndex or np.datetime array to R POSIXct using
+ m8[s] format.
+
+ Parameters
+ ----------
+ obj : source pandas object (one of [DatetimeIndex, np.datetime])
+
+ Returns
+ -------
+ An R POSIXct vector (rpy2.robjects.vectors.POSIXct)
+
+ """
+ import time
+ from rpy2.rinterface import StrSexpVector
+
+ # convert m8[ns] to m8[s]
+ vals = robj.vectors.FloatSexpVector(obj.values.view('i8') / 1E9)
+ as_posixct = robj.baseenv.get('as.POSIXct')
+ origin = StrSexpVector([time.strftime("%Y-%m-%d",
+ time.gmtime(0)),])
+
+ # We will be sending ints as UTC
+ tz = obj.tz.zone if hasattr(obj, 'tz') and hasattr(obj.tz, 'zone') else 'UTC'
+ tz = StrSexpVector([tz])
+ utc_tz = StrSexpVector(['UTC'])
+
+ posixct = as_posixct(vals, origin=origin, tz=utc_tz)
+ posixct.do_slot_assign('tzone', tz)
+ return posixct
+
+
VECTOR_TYPES = {np.float64: robj.FloatVector,
np.float32: robj.FloatVector,
np.float: robj.FloatVector,
@@ -242,14 +276,18 @@ def convert_to_r_dataframe(df, strings_as_factors=False):
for column in df:
value = df[column]
value_type = value.dtype.type
- value = [item if pd.notnull(item) else NA_TYPES[value_type]
- for item in value]
- value = VECTOR_TYPES[value_type](value)
+ if value_type == np.datetime64:
+ value = convert_to_r_posixct(value)
+ else:
+ value = [item if pd.notnull(item) else NA_TYPES[value_type]
+ for item in value]
+
+ value = VECTOR_TYPES[value_type](value)
- if not strings_as_factors:
- I = robj.baseenv.get("I")
- value = I(value)
+ if not strings_as_factors:
+ I = robj.baseenv.get("I")
+ value = I(value)
columns[column] = value
| Check to see if the values we're looking at are numpy datetime64
objects. If they are, we can assume we're looking at a Timestamp object,
which subclasses Python's datetime class, and we return a struct_time
which is understood by rpy2's POSIXct class.
Attempt at a fix for #2351
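The core of the conversion, independent of rpy2 (which handles the R side), is viewing the ns-resolution datetimes as integers and rescaling to seconds since the epoch:

```python
import numpy as np

ts = np.array(['1970-01-01T00:00:01', '2000-01-01T00:00:00'],
              dtype='datetime64[ns]')
# m8[ns] integer view / 1e9 -> float seconds since the Unix epoch,
# which is what R's as.POSIXct(origin="1970-01-01") expects
secs = ts.view('i8') / 1e9
print(secs.tolist())  # [1.0, 946684800.0]
```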
| https://api.github.com/repos/pandas-dev/pandas/pulls/2352 | 2012-11-25T06:59:06Z | 2012-12-01T20:49:06Z | null | 2014-08-11T23:16:20Z |
BUG: del df[k] with non-unique key | diff --git a/pandas/core/common.py b/pandas/core/common.py
index aa7ed9cd6b76f..c6e58e478ec53 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -699,6 +699,23 @@ def iterpairs(seq):
return itertools.izip(seq_it, seq_it_next)
+def split_ranges(mask):
+ """ Generates tuples of ranges which cover all True value in mask
+
+ >>> list(split_ranges([1,0,0,1,0]))
+ [(0, 1), (3, 4)]
+ """
+ ranges = [(0,len(mask))]
+
+ for pos,val in enumerate(mask):
+ if not val: # this pos should be omitted, split off the prefix range
+ r = ranges.pop()
+ if pos > r[0]: # yield non-zero range
+ yield (r[0],pos)
+ if pos+1 < len(mask): # save the rest for processing
+ ranges.append((pos+1,len(mask)))
+ if ranges:
+ yield ranges[-1]
def indent(string, spaces=4):
dent = ' ' * spaces
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 7dd8e4100ef10..035d2531f382f 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -673,7 +673,7 @@ def get_loc(self, key):
Returns
-------
- loc : int
+ loc : int if unique index, possibly slice or mask if not
"""
return self._engine.get_loc(key)
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index d54154d0e033e..a2329450a5648 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -181,38 +181,26 @@ def delete(self, item):
def split_block_at(self, item):
"""
- Split block around given column, for "deleting" a column without
- having to copy data by returning views on the original array
+ Split block into zero or more blocks around columns with given label,
+ for "deleting" a column without having to copy data by returning views
+ on the original array.
Returns
-------
- leftb, rightb : (Block or None, Block or None)
+ generator of Block
"""
loc = self.items.get_loc(item)
- if len(self.items) == 1:
- # no blocks left
- return None, None
-
- if loc == 0:
- # at front
- left_block = None
- right_block = make_block(self.values[1:], self.items[1:].copy(),
- self.ref_items)
- elif loc == len(self.values) - 1:
- # at back
- left_block = make_block(self.values[:-1], self.items[:-1].copy(),
- self.ref_items)
- right_block = None
- else:
- # in the middle
- left_block = make_block(self.values[:loc],
- self.items[:loc].copy(), self.ref_items)
- right_block = make_block(self.values[loc + 1:],
- self.items[loc + 1:].copy(),
- self.ref_items)
+ if type(loc) == slice or type(loc) == int:
+ mask = [True]*len(self)
+ mask[loc] = False
+ else: # already a mask, inverted
+ mask = -loc
- return left_block, right_block
+ for s,e in com.split_ranges(mask):
+ yield make_block(self.values[s:e],
+ self.items[s:e].copy(),
+ self.ref_items)
def fillna(self, value, inplace=False):
new_values = self.values if inplace else self.values.copy()
@@ -906,9 +894,12 @@ def delete(self, item):
i, _ = self._find_block(item)
loc = self.items.get_loc(item)
+ self._delete_from_block(i, item)
+ if com._is_bool_indexer(loc): # dupe keys may return mask
+ loc = [i for i,v in enumerate(loc) if v]
+
new_items = self.items.delete(loc)
- self._delete_from_block(i, item)
self.set_items_norename(new_items)
def set(self, item, value):
@@ -970,13 +961,8 @@ def _delete_from_block(self, i, item):
Delete and maybe remove the whole block
"""
block = self.blocks.pop(i)
- new_left, new_right = block.split_block_at(item)
-
- if new_left is not None:
- self.blocks.append(new_left)
-
- if new_right is not None:
- self.blocks.append(new_right)
+ for b in block.split_block_at(item):
+ self.blocks.append(b)
def _add_new_block(self, item, value, loc=None):
# Do we care about dtype at the moment?
diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py
index 661c3a2a3edd8..dd93666cba0af 100644
--- a/pandas/tests/test_common.py
+++ b/pandas/tests/test_common.py
@@ -117,6 +117,35 @@ def test_iterpairs():
assert(result == expected)
+def test_split_ranges():
+ def _bin(x, width):
+ "return int(x) as a base2 string of given width"
+ return ''.join(str((x>>i)&1) for i in xrange(width-1,-1,-1))
+
+ def test_locs(mask):
+ nfalse = sum(np.array(mask) == 0)
+
+ remaining=0
+ for s, e in com.split_ranges(mask):
+ remaining += e-s
+
+ assert 0 not in mask[s:e]
+
+ # make sure the total items covered by the ranges are a complete cover
+ assert remaining + nfalse == len(mask)
+
+ # exhaustively test all possible mask sequences of length 8
+ ncols=8
+ for i in range(2**ncols):
+ cols=map(int,list(_bin(i,ncols))) # count up in base2
+ mask=[cols[i] == 1 for i in range(len(cols))]
+ test_locs(mask)
+
+ # base cases
+ test_locs([])
+ test_locs([0])
+ test_locs([1])
+
def test_indent():
s = 'a b c\nd e f'
result = com.indent(s, spaces=6)
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 955cfedd70466..5e77bfa6c5d8c 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -2978,6 +2978,18 @@ def test_pop(self):
foo = self.frame.pop('foo')
self.assert_('foo' not in self.frame)
+ def test_pop_non_unique_cols(self):
+ df=DataFrame({0:[0,1],1:[0,1],2:[4,5]})
+ df.columns=["a","b","a"]
+
+ res=df.pop("a")
+ self.assertEqual(type(res),DataFrame)
+ self.assertEqual(len(res),2)
+ self.assertEqual(len(df.columns),1)
+ self.assertTrue("b" in df.columns)
+ self.assertFalse("a" in df.columns)
+ self.assertEqual(len(df.index),2)
+
def test_iter(self):
self.assert_(tm.equalContents(list(self.frame), self.frame.columns))
diff --git a/pandas/tests/test_internals.py b/pandas/tests/test_internals.py
index 0610dc92e2379..31ffcc5832758 100644
--- a/pandas/tests/test_internals.py
+++ b/pandas/tests/test_internals.py
@@ -155,22 +155,22 @@ def test_delete(self):
self.assertRaises(Exception, self.fblock.delete, 'b')
def test_split_block_at(self):
- left, right = self.fblock.split_block_at('a')
- self.assert_(left is None)
- self.assert_(np.array_equal(right.items, ['c', 'e']))
+ bs = list(self.fblock.split_block_at('a'))
+ self.assertEqual(len(bs),1)
+ self.assertTrue(np.array_equal(bs[0].items, ['c', 'e']))
- left, right = self.fblock.split_block_at('c')
- self.assert_(np.array_equal(left.items, ['a']))
- self.assert_(np.array_equal(right.items, ['e']))
+ bs = list(self.fblock.split_block_at('c'))
+ self.assertEqual(len(bs),2)
+ self.assertTrue(np.array_equal(bs[0].items, ['a']))
+ self.assertTrue(np.array_equal(bs[1].items, ['e']))
- left, right = self.fblock.split_block_at('e')
- self.assert_(np.array_equal(left.items, ['a', 'c']))
- self.assert_(right is None)
+ bs = list(self.fblock.split_block_at('e'))
+ self.assertEqual(len(bs),1)
+ self.assertTrue(np.array_equal(bs[0].items, ['a', 'c']))
bblock = get_bool_ex(['f'])
- left, right = bblock.split_block_at('f')
- self.assert_(left is None)
- self.assert_(right is None)
+ bs = list(bblock.split_block_at('f'))
+ self.assertEqual(len(bs),0)
def test_unicode_repr(self):
mat = np.empty((N, 2), dtype=object)
| This touches some delicate functions to be messing with, so I've split off the
underlying logic into a function in common and added an exhaustive test.
Would welcome review nonetheless.
Also, I tried to test for negative perf. There must be something
wrong with the test, since comparing the commit after just adding a test to the baseline
shows 50% degradation in a bunch of things. As I said, I don't trust this; ideas?
```
timeseries_asof_nan 23.4131 15.7471 1.4868
datetimeindex_normalize 1153.1918 774.8010 1.4884
timeseries_asof 22.3342 14.9964 1.4893
reshape_unstack_simple 4.8106 3.2289 1.4899
read_table_multiple_date_baseline 980.0920 657.3641 1.4909
timeseries_1min_5min_mean 0.8087 0.5422 1.4915
timeseries_timestamp_tzinfo_cons 0.0210 0.0141 1.4922
match_strings 0.5718 0.3817 1.4982
timeseries_large_lookup_value 0.0301 0.0201 1.4998
reindex_fillna_pad 0.1828 0.1217 1.5018
timeseries_to_datetime_iso8601 5.6048 3.7270 1.5038
read_table_multiple_date 2169.5440 1441.5460 1.5050
reindex_daterange_pad 0.2608 0.1731 1.5066
timeseries_1min_5min_ohlc 0.8046 0.5331 1.5092
reindex_daterange_backfill 0.2466 0.1634 1.5094
period_setitem 1166.9910 772.4509 1.5108
reindex_fillna_backfill 0.1790 0.1183 1.5130
timeseries_asof_single 0.0656 0.0434 1.5136
append_frame_single_mixed 2.0747 1.3697 1.5147
timeseries_slice_minutely 0.0762 0.0501 1.5208
Columns: test_name | target_duration [ms] | baseline_duration [ms] | ratio
(01bc3e0 against 81169f9)
- a Ratio of 1.30 means the target commit is 30% slower than the baseline.
```
closes #2347
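For reference, the new helper from `pandas/core/common.py` runs standalone:

```python
def split_ranges(mask):
    """Yield (start, end) tuples covering every run of True values in mask."""
    ranges = [(0, len(mask))]
    for pos, val in enumerate(mask):
        if not val:  # this pos is omitted; split off the prefix range
            r = ranges.pop()
            if pos > r[0]:  # yield the non-empty prefix
                yield (r[0], pos)
            if pos + 1 < len(mask):  # keep the suffix for further splitting
                ranges.append((pos + 1, len(mask)))
    if ranges:
        yield ranges[-1]

print(list(split_ranges([1, 0, 0, 1, 0])))  # [(0, 1), (3, 4)]
```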
| https://api.github.com/repos/pandas-dev/pandas/pulls/2349 | 2012-11-24T21:51:27Z | 2012-11-25T19:17:47Z | 2012-11-25T19:17:47Z | 2014-06-12T16:15:38Z |
Pytables: bug fixes, code cleanup, and much updated docs | diff --git a/doc/source/io.rst b/doc/source/io.rst
index f74120ad7ef57..5dfc80d12d22b 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1,3 +1,4 @@
+
.. _io:
.. currentmodule:: pandas
@@ -793,17 +794,75 @@ Objects can be written to the file just like adding key-value pairs to a dict:
major_axis=date_range('1/1/2000', periods=5),
minor_axis=['A', 'B', 'C', 'D'])
+ # store.put('s', s) is an equivalent method
store['s'] = s
+
store['df'] = df
+
store['wp'] = wp
+
+ # the type of stored data
+ store.handle.root.wp._v_attrs.pandas_type
+
store
In a current or later Python session, you can retrieve stored objects:
.. ipython:: python
+ # store.get('df') is an equivalent method
store['df']
+Deletion of the object specified by the key:
+
+.. ipython:: python
+
+ # store.remove('wp') is an equivalent method
+ del store['wp']
+
+ store
+
+.. ipython:: python
+ :suppress:
+
+ store.close()
+ import os
+ os.remove('store.h5')
+
+
+These stores are **not** appendable once written (though you can simply remove them and rewrite). Nor are they **queryable**; they must be retrieved in their entirety.
+
+
+Storing in Table format
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``HDFStore`` supports another ``PyTables`` format on disk, the ``table`` format. Conceptually a ``table`` is shaped
+very much like a DataFrame, with rows and columns. A ``table`` may be appended to in the same or other sessions.
+In addition, delete & query type operations are supported. You can create an index with ``create_table_index``
+after data is already in the table (this may become automatic in the future or an option on appending/putting a ``table``).
+
+.. ipython:: python
+ :suppress:
+ :okexcept:
+
+ os.remove('store.h5')
+
+.. ipython:: python
+
+ store = HDFStore('store.h5')
+ df1 = df[0:4]
+ df2 = df[4:]
+ store.append('df', df1)
+ store.append('df', df2)
+
+ store.select('df')
+
+ # the type of stored data
+ store.handle.root.df._v_attrs.pandas_type
+
+ store.create_table_index('df')
+ store.handle.root.df.table
+
.. ipython:: python
:suppress:
@@ -812,8 +871,68 @@ In a current or later Python session, you can retrieve stored objects:
os.remove('store.h5')
-.. Storing in Table format
-.. ~~~~~~~~~~~~~~~~~~~~~~~
+Querying objects stored in Table format
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``select`` and ``delete`` operations take optional criteria that can be specified to select/delete only
+a subset of the data. This allows one to have a very large on-disk table and retrieve only a portion of the data.
+
+A query is specified using the ``Term`` class under the hood.
+
+ - 'index' refers to the index of a DataFrame
+ - 'major_axis' and 'minor_axis' are supported indexers of the Panel
+
+Valid terms can be created from ``dict, list, tuple, or string``. Objects can be embedded as values. Allowed operations are: ``<, <=, >, >=, =``. ``=`` will be inferred as an implicit set operation (e.g. if 2 or more values are provided). The following are all valid terms.
+
+ - ``dict(field = 'index', op = '>', value = '20121114')``
+ - ``('index', '>', '20121114')``
+ - ``'index>20121114'``
+ - ``('index', '>', datetime(2012,11,14))``
+ - ``('index', ['20121114','20121115'])``
+ - ``('major', '=', Timestamp('2012/11/14'))``
+ - ``('minor_axis', ['A','B'])``
+
+Queries are built up using a list of ``Terms`` (currently only **anding** of terms is supported). An example query for a panel might be specified as follows.
+``['major_axis>20000102', ('minor_axis', '=', ['A','B']) ]``. This is roughly translated to: `major_axis must be greater than the date 20000102 and the minor_axis must be A or B`
+
+.. ipython:: python
+
+ store = HDFStore('store.h5')
+ store.append('wp',wp)
+ store.select('wp',[ 'major_axis>20000102', ('minor_axis', '=', ['A','B']) ])
+
+Delete from objects stored in Table format
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. ipython:: python
+
+ store.remove('wp', 'index>20000102' )
+ store.select('wp')
+
+.. ipython:: python
+ :suppress:
+
+ store.close()
+ import os
+ os.remove('store.h5')
-.. Querying objects stored in Table format
-.. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Notes & Caveats
+~~~~~~~~~~~~~~~
+
+ - Selection by items (the top level panel dimension) is not possible; you always get all of the items in the returned Panel
+ - ``PyTables`` only supports fixed-width string columns in ``tables``. The sizes of a string based indexing column (e.g. *index* or *minor_axis*) are determined as the maximum size of the elements in that axis or by passing the ``min_itemsize`` on the first table creation. If subsequent appends introduce elements in the indexing axis that are larger than the supported indexer, an Exception will be raised (otherwise you could have a silent truncation of these indexers, leading to loss of information).
+ - Mixed-Type Panels/DataFrames are not currently supported (but coming soon)!
+ - Once a ``table`` is created its items (Panel) / columns (DataFrame) are fixed; only exactly the same columns can be appended
+ - You cannot append to, select from, or delete from a non-table (table creation is determined on the first append, or by passing ``table=True`` in a put operation)
+
+Performance
+~~~~~~~~~~~
+
+ - ``Tables`` come with a performance penalty as compared to regular stores. The benefit is the ability to append/delete and query (potentially very large amounts of data).
+ Write times are generally longer as compared with regular stores. Query times can be quite fast, especially on an indexed axis.
+ - To delete a lot of data, it is sometimes better to erase the table and rewrite it. ``PyTables`` tends to increase the file size with deletions
+ - In general it is best to store Panels with the most frequently selected dimension in the minor axis and a time/date like dimension in the major axis, but this is not required. Panels can have any major_axis and minor_axis type that is a valid Panel indexer.
+ - No dimensions are currently indexed automagically (in the ``PyTables`` sense); these require an explicit call to ``create_table_index``
+ - ``Tables`` offer better performance when compressed after writing them (as opposed to turning on compression at the very beginning);
+ use the ``PyTables`` utility ``ptrepack`` to rewrite the file (it can also change compression methods)
+ - Duplicate rows can be written, but are filtered out in selection (with the last items being selected; thus a table is unique on major, minor pairs)
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index af480b5a6457f..a4897878f7d6e 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -7,6 +7,7 @@
from datetime import datetime, date
import time
+import re
import numpy as np
from pandas import (
@@ -67,12 +68,20 @@
# oh the troubles to reduce import time
_table_mod = None
+_table_supports_index = False
def _tables():
global _table_mod
+ global _table_supports_index
if _table_mod is None:
import tables
_table_mod = tables
+
+ # version requirements
+ major, minor, subv = tables.__version__.split('.')
+ if (int(major), int(minor)) >= (2, 3):
+ _table_supports_index = True
+
return _table_mod
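Gating a feature on a library version, as `_tables()` does here for index support, is most robust with a tuple comparison; checking major and minor components independently misclassifies versions such as 3.0. A minimal stdlib-only sketch (the function name is illustrative, not part of the PR):

```python
def supports_table_index(version_string):
    """True if the given version string is at least 2.3.

    Tuples compare lexicographically, so (3, 0) >= (2, 3) holds,
    whereas checking major >= 2 and minor >= 3 separately would not.
    """
    parts = tuple(int(p) for p in version_string.split('.')[:2])
    return parts >= (2, 3)
```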
@@ -188,6 +197,9 @@ def __getitem__(self, key):
def __setitem__(self, key, value):
self.put(key, value)
+ def __delitem__(self, key):
+ return self.remove(key)
+
def __contains__(self, key):
return hasattr(self.handle.root, key)
@@ -295,33 +307,17 @@ def select(self, key, where=None):
Parameters
----------
key : object
- where : list, optional
-
- Must be a list of dict objects of the following forms. Selection can
- be performed on the 'index' or 'column' fields.
-
- Comparison op
- {'field' : 'index',
- 'op' : '>=',
- 'value' : value}
-
- Match single value
- {'field' : 'index',
- 'value' : v1}
-
- Match a set of values
- {'field' : 'index',
- 'value' : [v1, v2, v3]}
+ where : list of Term (or convertible) objects, optional
"""
group = getattr(self.handle.root, key, None)
- if 'table' not in group._v_attrs.pandas_type:
- raise Exception('can only select on objects written as tables')
+ if where is not None and not _is_table_type(group):
+ raise Exception('can only select with where on objects written as tables')
if group is not None:
return self._read_group(group, where)
def put(self, key, value, table=False, append=False,
- compression=None):
+ compression=None, **kwargs):
"""
Store object in HDFStore
@@ -342,7 +338,7 @@ def put(self, key, value, table=False, append=False,
be used.
"""
self._write_to_group(key, value, table=table, append=append,
- comp=compression)
+ comp=compression, **kwargs)
def _get_handler(self, op, kind):
return getattr(self, '_%s_%s' % (op, kind))
@@ -359,18 +355,22 @@ def remove(self, key, where=None):
For Table node, delete specified rows. See HDFStore.select for more
information
- Parameters
- ----------
- key : object
+ Returns
+ -------
+ number of rows removed (or None if not a Table)
+
"""
if where is None:
self.handle.removeNode(self.handle.root, key, recursive=True)
else:
group = getattr(self.handle.root, key, None)
if group is not None:
- self._delete_from_table(group, where)
+ if not _is_table_type(group):
+ raise Exception('can only remove with where on objects written as tables')
+ return self._delete_from_table(group, where)
+ return None
- def append(self, key, value):
+ def append(self, key, value, **kwargs):
"""
Append to Table in file. Node must already exist and be Table
format.
@@ -385,10 +385,58 @@ def append(self, key, value):
Does *not* check if data being appended overlaps with existing
data in the table, so be careful
"""
- self._write_to_group(key, value, table=True, append=True)
+ self._write_to_group(key, value, table=True, append=True, **kwargs)
+
+ def create_table_index(self, key, columns = None, optlevel = None, kind = None):
+ """
+ Create a pytables index on the specified columns
+ note: cannot index Time64Col() currently; PyTables must be >= 2.3.1
+
+
+ Parameters
+ ----------
+ key : object (the node to index)
+ columns : None or list_like (the columns to index - currently supports index/column)
+ optlevel: optimization level (defaults to 6)
+ kind : kind of index (defaults to 'medium')
+
+ Exceptions
+ ----------
+ raises if the node is not a table
+
+ """
+
+ # version requirements
+ if not _table_supports_index:
+ raise Exception("PyTables >= 2.3 is required for table indexing")
+
+ group = getattr(self.handle.root, key, None)
+ if group is None: return
+
+ if not _is_table_type(group):
+ raise Exception("cannot create table index on a non-table")
+
+ table = getattr(group, 'table', None)
+ if table is None: return
+
+ if columns is None:
+ columns = ['index']
+ if not isinstance(columns, (tuple,list)):
+ columns = [ columns ]
+
+ kw = dict()
+ if optlevel is not None:
+ kw['optlevel'] = optlevel
+ if kind is not None:
+ kw['kind'] = kind
+
+ for c in columns:
+ v = getattr(table.cols,c,None)
+ if v is not None and not v.is_indexed:
+ v.createIndex(**kw)
def _write_to_group(self, key, value, table=False, append=False,
- comp=None):
+ comp=None, **kwargs):
root = self.handle.root
if key not in root._v_children:
group = self.handle.createGroup(root, key)
@@ -400,7 +448,7 @@ def _write_to_group(self, key, value, table=False, append=False,
kind = '%s_table' % kind
handler = self._get_handler(op='write', kind=kind)
wrapper = lambda value: handler(group, value, append=append,
- comp=comp)
+ comp=comp, **kwargs)
else:
if append:
raise ValueError('Can only append to Tables')
@@ -530,7 +578,7 @@ def _read_block_manager(self, group):
return BlockManager(blocks, axes)
- def _write_frame_table(self, group, df, append=False, comp=None):
+ def _write_frame_table(self, group, df, append=False, comp=None, **kwargs):
mat = df.values
values = mat.reshape((1,) + mat.shape)
@@ -540,7 +588,7 @@ def _write_frame_table(self, group, df, append=False, comp=None):
self._write_table(group, items=['value'],
index=df.index, columns=df.columns,
- values=values, append=append, compression=comp)
+ values=values, append=append, compression=comp, **kwargs)
def _write_wide(self, group, panel):
panel._consolidate_inplace()
@@ -549,10 +597,10 @@ def _write_wide(self, group, panel):
def _read_wide(self, group, where=None):
return Panel(self._read_block_manager(group))
- def _write_wide_table(self, group, panel, append=False, comp=None):
+ def _write_wide_table(self, group, panel, append=False, comp=None, **kwargs):
self._write_table(group, items=panel.items, index=panel.major_axis,
columns=panel.minor_axis, values=panel.values,
- append=append, compression=comp)
+ append=append, compression=comp, **kwargs)
def _read_wide_table(self, group, where=None):
return self._read_panel_table(group, where)
@@ -569,10 +617,10 @@ def _write_index(self, group, key, index):
self._write_sparse_intindex(group, key, index)
else:
setattr(group._v_attrs, '%s_variety' % key, 'regular')
- converted, kind, _ = _convert_index(index)
- self._write_array(group, key, converted)
+ converted = _convert_index(index).set_name('index')
+ self._write_array(group, key, converted.values)
node = getattr(group, key)
- node._v_attrs.kind = kind
+ node._v_attrs.kind = converted.kind
node._v_attrs.name = index.name
if isinstance(index, (DatetimeIndex, PeriodIndex)):
@@ -629,11 +677,11 @@ def _write_multi_index(self, group, key, index):
index.labels,
index.names)):
# write the level
- conv_level, kind, _ = _convert_index(lev)
level_key = '%s_level%d' % (key, i)
- self._write_array(group, level_key, conv_level)
+ conv_level = _convert_index(lev).set_name(level_key)
+ self._write_array(group, level_key, conv_level.values)
node = getattr(group, level_key)
- node._v_attrs.kind = kind
+ node._v_attrs.kind = conv_level.kind
node._v_attrs.name = name
# write the name
@@ -738,22 +786,28 @@ def _write_array(self, group, key, value):
getattr(group, key)._v_attrs.transposed = transposed
def _write_table(self, group, items=None, index=None, columns=None,
- values=None, append=False, compression=None):
+ values=None, append=False, compression=None,
+ min_itemsize = None, **kwargs):
""" need to check for conform to the existing table:
e.g. columns should match """
- # create dict of types
- index_converted, index_kind, index_t = _convert_index(index)
- columns_converted, cols_kind, col_t = _convert_index(columns)
+
+ # create Col types
+ index_converted = _convert_index(index).set_name('index')
+ columns_converted = _convert_index(columns).set_name('column')
# create the table if it doesn't exist (or get it if it does)
if not append:
if 'table' in group:
self.handle.removeNode(group, 'table')
+ else:
+ # check that we are not truncating on our indices
+ index_converted.maybe_set(min_itemsize = min_itemsize)
+ columns_converted.maybe_set(min_itemsize = min_itemsize)
if 'table' not in group:
# create the table
- desc = {'index': index_t,
- 'column': col_t,
+ desc = {'index' : index_converted.typ,
+ 'column': columns_converted.typ,
'values': _tables().FloatCol(shape=(len(values)))}
options = {'name': 'table',
@@ -775,16 +829,20 @@ def _write_table(self, group, items=None, index=None, columns=None,
# the table must already exist
table = getattr(group, 'table', None)
+ # check that we are not truncating on our indices
+ index_converted.validate(table)
+ columns_converted.validate(table)
+
# check for backwards incompatibility
if append:
- existing_kind = table._v_attrs.index_kind
- if existing_kind != index_kind:
+ existing_kind = getattr(table._v_attrs,'index_kind',None)
+ if existing_kind is not None and existing_kind != index_converted.kind:
raise TypeError("incompatible kind in index [%s - %s]" %
- (existing_kind, index_kind))
+ (existing_kind, index_converted.kind))
# add kinds
- table._v_attrs.index_kind = index_kind
- table._v_attrs.columns_kind = cols_kind
+ table._v_attrs.index_kind = index_converted.kind
+ table._v_attrs.columns_kind = columns_converted.kind
if append:
existing_fields = getattr(table._v_attrs, 'fields', None)
if (existing_fields is not None and
@@ -916,35 +974,90 @@ def _read_panel_table(self, group, where=None):
lp = DataFrame(new_values, index=new_index, columns=lp.columns)
wp = lp.to_panel()
- if sel.column_filter:
- new_minor = sorted(set(wp.minor_axis) & sel.column_filter)
+ if sel.filter:
+ new_minor = sorted(set(wp.minor_axis) & sel.filter)
wp = wp.reindex(minor=new_minor)
return wp
- def _delete_from_table(self, group, where = None):
+ def _delete_from_table(self, group, where):
+ """ delete rows from a group where condition is True """
table = getattr(group, 'table')
# create the selection
- s = Selection(table, where, table._v_attrs.index_kind)
+ s = Selection(table,where,table._v_attrs.index_kind)
s.select_coords()
# delete the rows in reverse order
- l = list(s.values)
- l.reverse()
- for c in l:
- table.removeRows(c)
- self.handle.flush()
- return len(s.values)
+ l = list(s.values)
+ ln = len(l)
+
+ if ln:
+
+ # if we can do a consecutive removal - do it!
+ if l[0]+ln-1 == l[-1]:
+ table.removeRows(start = l[0], stop = l[-1]+1)
+
+ # one by one
+ else:
+ l.reverse()
+ for c in l:
+ table.removeRows(c)
+
+ self.handle.flush()
+
+ # return the number of rows removed
+ return ln
+
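The consecutive-removal shortcut above hinges on one check: for sorted, unique row coordinates, the block is contiguous exactly when first + count - 1 == last. A stdlib-only sketch of that decision, separated from the PyTables calls (names are illustrative):

```python
def removal_plan(coords):
    """Decide how to delete rows given sorted, unique coordinates.

    Returns a ('range', start, stop) plan when the coordinates form a
    contiguous block, else a ('rows', ...) plan listing them in reverse
    so deleting one row does not shift the coordinates of the rest.
    """
    n = len(coords)
    if n == 0:
        return None
    if coords[0] + n - 1 == coords[-1]:  # contiguous block
        return ('range', coords[0], coords[-1] + 1)
    return ('rows', list(reversed(coords)))
```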
+class Col(object):
+ """ a column description class
+
+ Parameters
+ ----------
+
+ values : the ndarray like converted values
+ kind : a string description of this type
+ typ : the pytables type
+
+ """
+ def __init__(self, values, kind, typ, itemsize = None, **kwargs):
+ self.values = values
+ self.kind = kind
+ self.typ = typ
+ self.itemsize = itemsize
+ self.name = None
+
+ def set_name(self, n):
+ self.name = n
+ return self
+
+ def __iter__(self):
+ return iter(self.values)
+
+ def maybe_set(self, min_itemsize = None, **kwargs):
+ """ maybe set a string col itemsize """
+ if self.kind == 'string' and min_itemsize is not None:
+ if self.typ.itemsize < min_itemsize:
+ self.typ = _tables().StringCol(itemsize = min_itemsize, pos = getattr(self.typ,'pos',None))
+
+ def validate(self, table, **kwargs):
+ """ validate this column for string truncation (or reset to the max size) """
+ if self.kind == 'string':
+
+ # the current column name
+ t = getattr(table.description,self.name,None)
+ if t is not None:
+ if t.itemsize < self.itemsize:
+ raise Exception("[%s] column has itemsize [%s] but [%s] is required!" % (self.name,t.itemsize,self.itemsize))
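Fixed-width string columns are what make the validation above necessary: PyTables silently truncates any value longer than the width the column was created with. The core of the guard, sketched without the surrounding class (names are illustrative):

```python
def validate_itemsize(column_itemsize, values):
    """Raise if any incoming string would be truncated by a
    fixed-width column of the given itemsize; otherwise return the
    itemsize the incoming values actually need."""
    required = max(len(v) for v in values)
    if required > column_itemsize:
        raise ValueError(
            "column itemsize is %d but %d is required"
            % (column_itemsize, required))
    return required
```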
def _convert_index(index):
if isinstance(index, DatetimeIndex):
converted = index.asi8
- return converted, 'datetime64', _tables().Int64Col()
+ return Col(converted, 'datetime64', _tables().Int64Col())
elif isinstance(index, (Int64Index, PeriodIndex)):
atom = _tables().Int64Col()
- return index.values, 'integer', atom
+ return Col(index.values, 'integer', atom)
if isinstance(index, MultiIndex):
raise Exception('MultiIndex not supported here!')
@@ -955,33 +1068,33 @@ def _convert_index(index):
if inferred_type == 'datetime64':
converted = values.view('i8')
- return converted, 'datetime64', _tables().Int64Col()
+ return Col(converted, 'datetime64', _tables().Int64Col())
elif inferred_type == 'datetime':
converted = np.array([(time.mktime(v.timetuple()) +
v.microsecond / 1E6) for v in values],
dtype=np.float64)
- return converted, 'datetime', _tables().Time64Col()
+ return Col(converted, 'datetime', _tables().Time64Col())
elif inferred_type == 'date':
converted = np.array([time.mktime(v.timetuple()) for v in values],
dtype=np.int32)
- return converted, 'date', _tables().Time32Col()
+ return Col(converted, 'date', _tables().Time32Col())
elif inferred_type == 'string':
converted = np.array(list(values), dtype=np.str_)
itemsize = converted.dtype.itemsize
- return converted, 'string', _tables().StringCol(itemsize)
+ return Col(converted, 'string', _tables().StringCol(itemsize), itemsize = itemsize)
elif inferred_type == 'unicode':
atom = _tables().ObjectAtom()
- return np.asarray(values, dtype='O'), 'object', atom
+ return Col(np.asarray(values, dtype='O'), 'object', atom)
elif inferred_type == 'integer':
# take a guess for now, hope the values fit
atom = _tables().Int64Col()
- return np.asarray(values, dtype=np.int64), 'integer', atom
+ return Col(np.asarray(values, dtype=np.int64), 'integer', atom)
elif inferred_type == 'floating':
atom = _tables().Float64Col()
- return np.asarray(values, dtype=np.float64), 'float', atom
+ return Col(np.asarray(values, dtype=np.float64), 'float', atom)
else: # pragma: no cover
atom = _tables().ObjectAtom()
- return np.asarray(values, dtype='O'), 'object', atom
+ return Col(np.asarray(values, dtype='O'), 'object', atom)
def _read_array(group, key):
@@ -1088,6 +1201,151 @@ def _alias_to_class(alias):
return _reverse_index_map.get(alias, Index)
+class Term(object):
+ """ create a term object that holds a field, op, and value
+
+ Parameters
+ ----------
+ field : dict, string term expression, or the field to operate (must be a valid index/column type of DataFrame/Panel)
+ op : a valid op (defaults to '=') (optional)
+ >, >=, <, <=, =, != (not equal) are allowed
+ value : a value or list of values (required)
+
+ Returns
+ -------
+ a Term object
+
+ Examples
+ --------
+ Term(dict(field = 'index', op = '>', value = '20121114'))
+ Term('index', '20121114')
+ Term('index', '>', '20121114')
+ Term('index', ['20121114','20121115'])
+ Term('index', datetime(2012,11,14))
+ Term('major>20121114')
+ Term('minor', ['A','B'])
+
+ """
+
+ _ops = ['<','<=','>','>=','=','!=']
+ _search = re.compile("^(?P<field>\w+)(?P<op>%s)(?P<value>.+)$" % '|'.join(_ops))
+ _index = ['index','major_axis','major']
+ _column = ['column','minor_axis','minor']
+
+ def __init__(self, field, op = None, value = None, index_kind = None):
+ self.field = None
+ self.op = None
+ self.value = None
+ self.index_kind = index_kind
+ self.filter = None
+ self.condition = None
+
+ # unpack lists/tuples in field
+ if isinstance(field,(tuple,list)):
+ f = field
+ field = f[0]
+ if len(f) > 1:
+ op = f[1]
+ if len(f) > 2:
+ value = f[2]
+
+ # backwards compatible
+ if isinstance(field, dict):
+ self.field = field.get('field')
+ self.op = field.get('op') or '='
+ self.value = field.get('value')
+
+ # passed a term
+ elif isinstance(field,Term):
+ self.field = field.field
+ self.op = field.op
+ self.value = field.value
+
+ # a string expression (or just the field)
+ elif isinstance(field,basestring):
+
+ # is a full term expression passed?
+ s = self._search.match(field)
+ if s is not None:
+ self.field = s.group('field')
+ self.op = s.group('op')
+ self.value = s.group('value')
+
+ else:
+ self.field = field
+
+ # is an op passed?
+ if isinstance(op, basestring) and op in self._ops:
+ self.op = op
+ self.value = value
+ else:
+ self.op = '='
+ self.value = op
+
+ else:
+ raise Exception("Term does not understand the supplied field [%s]" % field)
+
+ # we have valid fields
+ if self.field is None or self.op is None or self.value is None:
+ raise Exception("Could not create this term [%s]" % str(self))
+
+ # valid field name
+ if self.field in self._index:
+ self.field = 'index'
+ elif self.field in self._column:
+ self.field = 'column'
+ else:
+ raise Exception("field is not a valid index/column for this term [%s]" % str(self))
+
+ # we have valid conditions
+ if self.op in ['>','>=','<','<=']:
+ if hasattr(self.value,'__iter__') and len(self.value) > 1:
+ raise Exception("an inequality condition cannot have multiple values [%s]" % str(self))
+
+ if not hasattr(self.value,'__iter__'):
+ self.value = [ self.value ]
+
+ self.eval()
+
+ def __str__(self):
+ return "field->%s,op->%s,value->%s" % (self.field,self.op,self.value)
+
+ __repr__ = __str__
+
+ def eval(self):
+ """ set the numexpr expression for this term """
+
+ # convert values
+ values = [ self.convert_value(v) for v in self.value ]
+
+ # equality conditions
+ if self.op in ['=','!=']:
+
+ # too many values to create the expression?
+ if len(values) <= 61:
+ self.condition = "(%s)" % ' | '.join([ "(%s == %s)" % (self.field,v[0]) for v in values])
+
+ # use a filter after reading
+ else:
+ self.filter = set([ v[1] for v in values ])
+
+ else:
+
+ self.condition = '(%s %s %s)' % (self.field, self.op, values[0][0])
+
+ def convert_value(self, v):
+
+ if self.field == 'index':
+ if self.index_kind == 'datetime64' :
+ return [lib.Timestamp(v).value, None]
+ elif isinstance(v, datetime):
+ return [time.mktime(v.timetuple()), None]
+ elif not isinstance(v, basestring):
+ return [str(v), None]
+
+ # string quoting
+ return ["'" + v + "'", v]
+
class Selection(object):
"""
Carries out a selection operation on a tables.Table object.
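The string form accepted by ``Term`` rests on a single regex; the one subtlety is alternation order, since ``<`` placed before ``<=`` in the pattern would swallow the first character of the longer operator. A self-contained sketch of the parse (assuming fields match ``\w+`` and the listed operators):

```python
import re

_ops = ['<', '<=', '>', '>=', '=', '!=']
# try longer operators first so 'index<=5' parses as ('index', '<=', '5')
_term_re = re.compile(
    r"^(?P<field>\w+)(?P<op>%s)(?P<value>.+)$"
    % '|'.join(sorted(_ops, key=len, reverse=True)))

def parse_term(expr):
    """Split 'index>20121114' into ('index', '>', '20121114'),
    or return None when the string is just a bare field name."""
    m = _term_re.match(expr)
    if m is None:
        return None
    return m.group('field'), m.group('op'), m.group('value')
```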
@@ -1095,72 +1353,47 @@ class Selection(object):
Parameters
----------
table : tables.Table
- where : list of dicts of the following form
-
- Comparison op
- {'field' : 'index',
- 'op' : '>=',
- 'value' : value}
+ where : list of Terms (or convertible to)
- Match single value
- {'field' : 'index',
- 'value' : v1}
-
- Match a set of values
- {'field' : 'index',
- 'value' : [v1, v2, v3]}
"""
def __init__(self, table, where=None, index_kind=None):
- self.table = table
- self.where = where
+ self.table = table
+ self.where = where
self.index_kind = index_kind
- self.column_filter = None
- self.the_condition = None
- self.conditions = []
- self.values = None
- if where:
- self.generate(where)
+ self.values = None
+ self.condition = None
+ self.filter = None
+ self.terms = self.generate(where)
+
+ # create the numexpr & the filter
+ if self.terms:
+ conds = [ t.condition for t in self.terms if t.condition is not None ]
+ if len(conds):
+ self.condition = "(%s)" % ' & '.join(conds)
+ self.filter = set()
+ for t in self.terms:
+ if t.filter is not None:
+ self.filter |= t.filter
def generate(self, where):
- # and condictions
- for c in where:
- op = c.get('op', None)
- value = c['value']
- field = c['field']
-
- if field == 'index' and self.index_kind == 'datetime64':
- val = lib.Timestamp(value).value
- self.conditions.append('(%s %s %s)' % (field, op, val))
- elif field == 'index' and isinstance(value, datetime):
- value = time.mktime(value.timetuple())
- self.conditions.append('(%s %s %s)' % (field, op, value))
- else:
- self.generate_multiple_conditions(op, value, field)
+ """ where can be a : dict,list,tuple,string """
+ if where is None: return None
- if len(self.conditions):
- self.the_condition = '(' + ' & '.join(self.conditions) + ')'
-
- def generate_multiple_conditions(self, op, value, field):
-
- if op and op == 'in' or isinstance(value, (list, np.ndarray)):
- if len(value) <= 61:
- l = '(' + ' | '.join([ "(%s == '%s')" % (field, v)
- for v in value]) + ')'
- self.conditions.append(l)
- else:
- self.column_filter = set(value)
+ if not isinstance(where, (list,tuple)):
+ where = [ where ]
else:
- if op is None:
- op = '=='
- self.conditions.append('(%s %s "%s")' % (field, op, value))
+ # do we have a single list/tuple
+ if not isinstance(where[0], (list,tuple)):
+ where = [ where ]
+
+ return [ Term(c, index_kind = self.index_kind) for c in where ]
def select(self):
"""
generate the selection
"""
- if self.the_condition:
- self.values = self.table.readWhere(self.the_condition)
-
+ if self.condition is not None:
+ self.values = self.table.readWhere(self.condition)
else:
self.values = self.table.read()
@@ -1168,7 +1401,7 @@ def select_coords(self):
"""
generate the selection
"""
- self.values = self.table.getWhereList(self.the_condition)
+ self.values = self.table.getWhereList(self.condition)
def _get_index_factory(klass):
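A ``Selection`` reduces its terms to a single numexpr string by AND-ing the per-term conditions, while any set-based filters (the many-values case) are unioned and applied after the read. The combination step in isolation, with plain dicts standing in for Term objects (an illustrative sketch, not the PR's code):

```python
def combine_terms(terms):
    """AND together per-term condition strings; union the post-read
    filter sets. Each term is a dict with optional 'condition' and
    'filter' keys, mirroring the Term attributes of the same names."""
    conds = [t['condition'] for t in terms if t.get('condition')]
    condition = '(%s)' % ' & '.join(conds) if conds else None
    filt = set()
    for t in terms:
        filt |= t.get('filter') or set()
    return condition, (filt or None)
```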
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index afd05610e3427..f2eb025b61f63 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -8,10 +8,11 @@
from pandas import (Series, DataFrame, Panel, MultiIndex, bdate_range,
date_range, Index)
-from pandas.io.pytables import HDFStore, get_store
+from pandas.io.pytables import HDFStore, get_store, Term
import pandas.util.testing as tm
from pandas.tests.test_series import assert_series_equal
from pandas.tests.test_frame import assert_frame_equal
+from pandas import concat
try:
import tables
@@ -65,6 +66,7 @@ def test_repr(self):
self.store['c'] = tm.makeDataFrame()
self.store['d'] = tm.makePanel()
repr(self.store)
+ str(self.store)
def test_contains(self):
self.store['a'] = tm.makeTimeSeries()
@@ -140,10 +142,53 @@ def test_put_integer(self):
def test_append(self):
df = tm.makeTimeDataFrame()
- self.store.put('c', df[:10], table=True)
+ self.store.append('c', df[:10])
self.store.append('c', df[10:])
tm.assert_frame_equal(self.store['c'], df)
+ self.store.put('d', df[:10], table=True)
+ self.store.append('d', df[10:])
+ tm.assert_frame_equal(self.store['d'], df)
+
+ def test_append_with_strings(self):
+ wp = tm.makePanel()
+ wp2 = wp.rename_axis(dict([ (x,"%s_extra" % x) for x in wp.minor_axis ]), axis = 2)
+
+ self.store.append('s1', wp, min_itemsize = 20)
+ self.store.append('s1', wp2)
+ expected = concat([ wp, wp2], axis = 2)
+ expected = expected.reindex(minor_axis = sorted(expected.minor_axis))
+ tm.assert_panel_equal(self.store['s1'], expected)
+
+ # test truncation of bigger strings
+ self.store.append('s2', wp)
+ self.assertRaises(Exception, self.store.append, 's2', wp2)
+
+ def test_create_table_index(self):
+ wp = tm.makePanel()
+ self.store.append('p5', wp)
+ self.store.create_table_index('p5')
+
+ assert(self.store.handle.root.p5.table.cols.index.is_indexed == True)
+ assert(self.store.handle.root.p5.table.cols.column.is_indexed == False)
+
+ df = tm.makeTimeDataFrame()
+ self.store.append('f', df[:10])
+ self.store.append('f', df[10:])
+ self.store.create_table_index('f')
+
+ # create twice
+ self.store.create_table_index('f')
+
+ # try to index a non-table
+ self.store.put('f2', df)
+ self.assertRaises(Exception, self.store.create_table_index, 'f2')
+
+ # try to change the version supports flag
+ from pandas.io import pytables
+ pytables._table_supports_index = False
+ self.assertRaises(Exception, self.store.create_table_index, 'f')
+
def test_append_diff_item_order(self):
wp = tm.makePanel()
wp1 = wp.ix[:, :10, :]
@@ -174,34 +219,101 @@ def test_remove(self):
self.store.remove('b')
self.assertEquals(len(self.store), 0)
- def test_remove_where_not_exist(self):
- crit1 = {
- 'field' : 'index',
- 'op' : '>',
- 'value' : 'foo'
- }
+ # __delitem__
+ self.store['a'] = ts
+ self.store['b'] = df
+ del self.store['a']
+ del self.store['b']
+ self.assertEquals(len(self.store), 0)
+
+ def test_remove_where(self):
+
+ # non-existence
+ crit1 = Term('index','>','foo')
self.store.remove('a', where=[crit1])
+ # remove rows from a table using a column criterion
+ wp = tm.makePanel()
+ self.store.put('wp', wp, table=True)
+ self.store.remove('wp', [('column', ['A', 'D'])])
+ rs = self.store.select('wp')
+ expected = wp.reindex(minor_axis = ['B','C'])
+ tm.assert_panel_equal(rs,expected)
+
+ # removing from a non-table with a where
+ self.store.put('wp2', wp, table=False)
+ self.assertRaises(Exception, self.store.remove,
+ 'wp2', [('column', ['A', 'D'])])
+
+
def test_remove_crit(self):
wp = tm.makePanel()
self.store.put('wp', wp, table=True)
date = wp.major_axis[len(wp.major_axis) // 2]
- crit1 = {
- 'field' : 'index',
- 'op' : '>',
- 'value' : date
- }
- crit2 = {
- 'field' : 'column',
- 'value' : ['A', 'D']
- }
+ crit1 = Term('index','>',date)
+ crit2 = Term('column',['A', 'D'])
self.store.remove('wp', where=[crit1])
self.store.remove('wp', where=[crit2])
result = self.store['wp']
expected = wp.truncate(after=date).reindex(minor=['B', 'C'])
tm.assert_panel_equal(result, expected)
+ # test non-consecutive row removal
+ wp = tm.makePanel()
+ self.store.put('wp2', wp, table=True)
+
+ date1 = wp.major_axis[1:3]
+ date2 = wp.major_axis[5]
+ date3 = [wp.major_axis[7],wp.major_axis[9]]
+
+ crit1 = Term('index',date1)
+ crit2 = Term('index',date2)
+ crit3 = Term('index',date3)
+
+ self.store.remove('wp2', where=[crit1])
+ self.store.remove('wp2', where=[crit2])
+ self.store.remove('wp2', where=[crit3])
+ result = self.store['wp2']
+
+ ma = list(wp.major_axis)
+ for d in date1:
+ ma.remove(d)
+ ma.remove(date2)
+ for d in date3:
+ ma.remove(d)
+ expected = wp.reindex(major = ma)
+ tm.assert_panel_equal(result, expected)
+
+ def test_terms(self):
+
+ terms = [
+ dict(field = 'index', op = '>', value = '20121114'),
+ ('index', '20121114'),
+ ('index', '>', '20121114'),
+ ('index', ['20121114','20121114']),
+ ('index', datetime(2012,11,14)),
+ 'index>20121114',
+ 'major>20121114',
+ 'major_axis>20121114',
+ ('minor', ['A','B']),
+ ('minor_axis', ['A','B']),
+ ('column', ['A','B']),
+ ]
+
+ self.assertRaises(Exception, Term.__init__)
+ self.assertRaises(Exception, Term.__init__, 'blah')
+ self.assertRaises(Exception, Term.__init__, 'index')
+ self.assertRaises(Exception, Term.__init__, 'index', '==')
+ self.assertRaises(Exception, Term.__init__, 'index', '>', 5)
+
+ # test them
+ wp = tm.makePanel()
+ self.store.put('wp', wp, table=True)
+ for t in terms:
+ self.store.select('wp', t)
+
def test_series(self):
s = tm.makeStringSeries()
self._check_roundtrip(s, tm.assert_series_equal)
@@ -521,20 +633,28 @@ def test_overwrite_node(self):
tm.assert_series_equal(self.store['a'], ts)
+ def test_select(self):
+ wp = tm.makePanel()
+
+ # put/select ok
+ self.store.put('wp', wp, table=True)
+ self.store.select('wp')
+
+ # non-table ok (where = None)
+ self.store.put('wp2', wp, table=False)
+ self.store.select('wp2')
+
+        # selecting non-table with a where
+ self.assertRaises(Exception, self.store.select,
+ 'wp2', ('column', ['A', 'D']))
+
def test_panel_select(self):
wp = tm.makePanel()
self.store.put('wp', wp, table=True)
date = wp.major_axis[len(wp.major_axis) // 2]
- crit1 = {
- 'field' : 'index',
- 'op' : '>=',
- 'value' : date
- }
- crit2 = {
- 'field' : 'column',
- 'value' : ['A', 'D']
- }
+ crit1 = ('index','>=',date)
+ crit2 = ('column', ['A', 'D'])
result = self.store.select('wp', [crit1, crit2])
expected = wp.truncate(before=date).reindex(minor=['A', 'D'])
@@ -545,19 +665,9 @@ def test_frame_select(self):
self.store.put('frame', df, table=True)
date = df.index[len(df) // 2]
- crit1 = {
- 'field' : 'index',
- 'op' : '>=',
- 'value' : date
- }
- crit2 = {
- 'field' : 'column',
- 'value' : ['A', 'D']
- }
- crit3 = {
- 'field' : 'column',
- 'value' : 'A'
- }
+ crit1 = ('index','>=',date)
+ crit2 = ('column',['A', 'D'])
+ crit3 = ('column','A')
result = self.store.select('frame', [crit1, crit2])
expected = df.ix[date:, ['A', 'D']]
@@ -578,10 +688,7 @@ def test_select_filter_corner(self):
df.columns = ['%.3d' % c for c in df.columns]
self.store.put('frame', df, table=True)
- crit = {
- 'field' : 'column',
- 'value' : df.columns[:75]
- }
+ crit = Term('column', df.columns[:75])
result = self.store.select('frame', [crit])
tm.assert_frame_equal(result, df.ix[:, df.columns[:75]])
| Should be completely backwards compatible
mainly bug fixes, code clean, and much updated docs
should merge cleanly
1. added **str** (to do **repr**)
2. added **delitem** to support store deletion syntactic sugar
3. row removal in tables is much faster if rows are consecutive
(remove also returns the number of rows removed if a table)
4. added Term class, refactored Selection (this is backwards compatible)
Term is a concise way of specifying conditions for queries, e.g.
```
Term(dict(field = 'index', op = '>', value = '20121114'))
Term('index', '20121114')
Term('index', '>', '20121114')
Term('index', ['20121114','20121114'])
Term('index', datetime(2012,11,14))
Term('index>20121114')
added alias to the Term class; you can specify the nominal indexers
(e.g. index in DataFrame, major_axis/minor_axis or alias in Panel)
```
this should close GH #1996
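As a rough illustration of the input shapes listed above, here is a standalone sketch (this is *not* pandas' actual parser; `parse_term` is a hypothetical helper) that normalizes each accepted form into a `(field, op, value)` triple:

```python
def parse_term(*args):
    # dict form: Term(dict(field='index', op='>', value='20121114'))
    if len(args) == 1 and isinstance(args[0], dict):
        d = args[0]
        return d['field'], d.get('op', '='), d['value']
    # string form: Term('index>20121114')
    if len(args) == 1 and isinstance(args[0], str):
        for op in ('>=', '<=', '>', '<', '=', '!='):
            if op in args[0]:
                field, value = args[0].split(op, 1)
                return field, op, value
        raise ValueError('no operator found in %r' % args[0])
    # (field, value) form: op defaults to equality (a list means "in")
    if len(args) == 2:
        return args[0], '=', args[1]
    # (field, op, value) form
    if len(args) == 3:
        return tuple(args)
    raise ValueError('unrecognized term: %r' % (args,))

parse_term('index>20121114')     # ('index', '>', '20121114')
parse_term('minor', ['A', 'B'])  # ('minor', '=', ['A', 'B'])
```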
5. added Col class to manage the column conversions
6. BUG: added min_itemsize parameter and checks in pytables to allow setting of an indexer column's minimum size (the current implementation will truncate indexer columns that are too long in a subsequent append - loss of info)
7. added indexing support via method create_table_index (requires 2.3 in PyTables)
btw now works quite well, as Int64 indices are used (as opposed to the Time64Col, which has a bug); includes a check on the PyTables version requirement
this should close GH #698
8. significantly updated docs for pytables to reflect all changes; added docs for Table sections
9. BUG: a store would fail if appending but the put had not been done before (see test_append)
this was the result of incompatibility testing on the index_kind
10. BUG: minor change to select and remove: require a table ONLY if where is also provided (and not None)
all tests pass; tests added for new features
I have some implementation changes to make to Tables to make writing quite a bit faster
but will do in a future commit
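Regarding item 3 (faster removal of consecutive rows): the speedup comes from detecting when the sorted row coordinates form one contiguous block, so a single ranged delete can be issued instead of row-by-row removal. A minimal standalone sketch of that check (mirroring the `l[0]+ln-1 == l[-1]` test in the patch; `is_consecutive` is a hypothetical name, not pandas code):

```python
def is_consecutive(rows):
    # rows: sorted, duplicate-free row coordinates;
    # they form one contiguous block exactly when first + count - 1 == last
    return bool(rows) and rows[0] + len(rows) - 1 == rows[-1]

is_consecutive([3, 4, 5, 6])  # contiguous -> one removeRows(start, stop) call
is_consecutive([3, 4, 6])     # gap -> remove rows one by one, in reverse order
```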
| https://api.github.com/repos/pandas-dev/pandas/pulls/2346 | 2012-11-24T20:47:44Z | 2012-11-28T00:31:20Z | null | 2014-06-19T02:01:31Z |
test_perf, cmdline args and fixes | diff --git a/test_perf.sh b/test_perf.sh
index 5880769dae177..c71f37b233b7a 100755
--- a/test_perf.sh
+++ b/test_perf.sh
@@ -3,10 +3,10 @@
CURDIR=$(pwd)
BASEDIR=$(readlink -f $(dirname $0 ))
-echo "Use vbench to compare HEAD against a known-good baseline."
+echo "Use vbench to compare the performance of one commit against another."
echo "Make sure the python 'vbench' library is installed..\n"
cd "$BASEDIR/vb_suite/"
-python test_perf.py
+python test_perf.py $@
cd "$CURDIR"
diff --git a/vb_suite/test_perf.py b/vb_suite/test_perf.py
index a6534e2d88aaa..0d11f403b7651 100755
--- a/vb_suite/test_perf.py
+++ b/vb_suite/test_perf.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@@ -27,21 +27,38 @@
everything and calculate a ration for the timing information.
7) print the results to the log file and to stdout.
-Known Issues: vbench fails to locate a baseline if HEAD is not a descendent
"""
-import sys
-import shutil
-from pandas import *
-from vbench.api import BenchmarkRunner
-from vbench.db import BenchmarkDB
-from vbench.git import GitRepo
+import shutil
+import os
+import argparse
import tempfile
-from suite import *
-
-BASELINE_COMMIT = 'bdbca8e' # v0.9,1 + regression fix
-LOG_FILE = os.path.abspath(os.path.join(REPO_PATH, 'vb_suite.log'))
+from pandas import DataFrame
+
+DEFAULT_MIN_DURATION = 0.01
+BASELINE_COMMIT = 'bdbca8e3dc' # 9,1 + regression fix # TODO: detect upstream/master
+
+parser = argparse.ArgumentParser(description='Use vbench to generate a report comparing performance between two commits.')
+parser.add_argument('-a', '--auto',
+ help='Execute a run using the defaults for the base and target commits.',
+ action='store_true',
+ default=False)
+parser.add_argument('-b','--base-commit',
+ help='The commit serving as performance baseline (default: %s).' % BASELINE_COMMIT,
+ type=str)
+parser.add_argument('-t','--target-commit',
+ help='The commit to compare against the baseline (default: HEAD).',
+ type=str)
+parser.add_argument('-m', '--min-duration',
+ help='Minimum duration (in ms) of baseline test for inclusion in report (default: %.3f).' % DEFAULT_MIN_DURATION,
+ type=float,
+ default=0.01)
+parser.add_argument('-o', '--output',
+ metavar="<file>",
+ dest='log_file',
+ help='path of file in which to save the report (default: vb_suite.log).')
+args = parser.parse_args()
def get_results_df(db,rev):
"""Takes a git commit hash and returns a Dataframe of benchmark results
@@ -59,26 +76,35 @@ def prprint(s):
print("*** %s"%s)
def main():
+ from vbench.api import BenchmarkRunner
+ from vbench.db import BenchmarkDB
+ from suite import REPO_PATH, BUILD, DB_PATH, PREPARE, dependencies, benchmarks
+
+ if not args.base_commit:
+ args.base_commit = BASELINE_COMMIT
+
+ # GitRepo wants exactly 7 character hash?
+ args.base_commit = args.base_commit[:7]
+ if args.target_commit:
+ args.target_commit = args.target_commit[:7]
+
+ if not args.log_file:
+ args.log_file = os.path.abspath(os.path.join(REPO_PATH, 'vb_suite.log'))
+
TMP_DIR = tempfile.mkdtemp()
prprint("TMP_DIR = %s" % TMP_DIR)
- prprint("LOG_FILE = %s\n" % LOG_FILE)
+ prprint("LOG_FILE = %s\n" % args.log_file)
try:
- logfile = open(LOG_FILE, 'w')
+ logfile = open(args.log_file, 'w')
prprint( "Processing Repo at '%s'..." % REPO_PATH)
- repo = GitRepo(REPO_PATH)
# get hashes of baseline and current head
- h_head = repo.shas[-1]
- h_baseline = BASELINE_COMMIT
prprint( "Opening DB at '%s'...\n" % DB_PATH)
db = BenchmarkDB(DB_PATH)
- prprint( 'Comparing Head [%s] : %s ' % (h_head, repo.messages.get(h_head,"")))
- prprint( 'Against baseline [%s] : %s \n' % (h_baseline,
- repo.messages.get(h_baseline,"")))
prprint("Initializing Runner...")
runner = BenchmarkRunner(benchmarks, REPO_PATH, REPO_PATH, BUILD, DB_PATH,
@@ -86,6 +112,21 @@ def main():
# run_option='eod', start_date=START_DATE,
module_dependencies=dependencies)
+ repo = runner.repo #(steal the parsed git repo used by runner)
+
+ # ARGH. reparse the repo, not discarding any commits,
+ # and overwrite the previous parse results
+ #prprint ("Slaughtering kittens..." )
+ (repo.shas, repo.messages,
+ repo.timestamps, repo.authors) = _parse_commit_log(REPO_PATH)
+
+ h_head = args.target_commit or repo.shas[-1]
+ h_baseline = args.base_commit
+
+ prprint('Target [%s] : %s\n' % (h_head, repo.messages.get(h_head,"")))
+ prprint('Baseline [%s] : %s\n' % (h_baseline,repo.messages.get(h_baseline,"")))
+
+
prprint ("removing any previous measurements for the commits." )
db.delete_rev_results(h_baseline)
db.delete_rev_results(h_head)
@@ -93,10 +134,10 @@ def main():
# TODO: we could skip this, but we need to make sure all
# results are in the DB, which is a little tricky with
# start dates and so on.
- prprint( "Running benchmarks for baseline commit '%s'" % h_baseline)
+ prprint( "Running benchmarks for baseline [%s]" % h_baseline)
runner._run_and_write_results(h_baseline)
- prprint ("Running benchmarks for current HEAD '%s'" % h_head)
+ prprint ("Running benchmarks for target [%s]" % h_head)
runner._run_and_write_results(h_head)
prprint( 'Processing results...')
@@ -108,26 +149,71 @@ def main():
t_baseline=baseline_res['timing'],
ratio=ratio,
name=baseline_res.name),columns=["t_head","t_baseline","ratio","name"])
- totals = totals.ix[totals.t_head > 0.010] # ignore sub 10micros
+ totals = totals.ix[totals.t_head > args.min_duration] # ignore below threshold
totals = totals.dropna().sort("ratio").set_index('name') # sort in ascending order
s = "\n\nResults:\n" + totals.to_string(float_format=lambda x: "%0.4f" %x) + "\n\n"
- s += "Columns: test_name | head_time [ms] | baseline_time [ms] | ratio\n\n"
- s += "- a Ratio of 1.30 means HEAD is 30% slower then the Baseline.\n\n"
+ s += "Columns: test_name | target_duration [ms] | baseline_duration [ms] | ratio\n\n"
+ s += "- a Ratio of 1.30 means the target commit is 30% slower then the baseline.\n\n"
- s += 'Head [%s] : %s\n' % (h_head, repo.messages.get(h_head,""))
+ s += 'Target [%s] : %s\n' % (h_head, repo.messages.get(h_head,""))
s += 'Baseline [%s] : %s\n\n' % (h_baseline,repo.messages.get(h_baseline,""))
logfile.write(s)
logfile.close()
prprint(s )
- prprint("Results were also written to the logfile at '%s'\n" % LOG_FILE)
+ prprint("Results were also written to the logfile at '%s'\n" % args.log_file)
finally:
# print("Disposing of TMP_DIR: %s" % TMP_DIR)
shutil.rmtree(TMP_DIR)
logfile.close()
+
+# hack , vbench.git ignores some commits, but we
+# need to be able to reference any commit.
+# modified from vbench.git
+def _parse_commit_log(repo_path):
+ from vbench.git import parser, _convert_timezones
+ from pandas import Series
+ git_cmd = 'git --git-dir=%s/.git --work-tree=%s ' % (repo_path, repo_path)
+ githist = git_cmd + ('log --graph --pretty=format:'
+ '\"::%h::%cd::%s::%an\" > githist.txt')
+ os.system(githist)
+ githist = open('githist.txt').read()
+ os.remove('githist.txt')
+
+ shas = []
+ timestamps = []
+ messages = []
+ authors = []
+ for line in githist.split('\n'):
+ if '*' not in line.split("::")[0]: # skip non-commit lines
+ continue
+
+ _, sha, stamp, message, author = line.split('::', 4)
+
+ # parse timestamp into datetime object
+ stamp = parser.parse(stamp)
+
+ shas.append(sha)
+ timestamps.append(stamp)
+ messages.append(message)
+ authors.append(author)
+
+ # to UTC for now
+ timestamps = _convert_timezones(timestamps)
+
+ shas = Series(shas, timestamps)
+ messages = Series(messages, shas)
+ timestamps = Series(timestamps, shas)
+ authors = Series(authors, shas)
+ return shas[::-1], messages[::-1], timestamps[::-1], authors[::-1]
+
+
if __name__ == '__main__':
- main()
+ if not args.auto and not args.base_commit and not args.target_commit:
+ parser.print_help()
+ else:
+ main()
| https://api.github.com/repos/pandas-dev/pandas/pulls/2344 | 2012-11-24T16:18:31Z | 2012-11-25T04:34:49Z | null | 2014-07-08T10:24:39Z | |
updated docs for indexing to incorporate where and mask | diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index a77e2c928abfa..78bbd34b310ae 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -231,22 +231,64 @@ Note, with the :ref:`advanced indexing <indexing.advanced>` ``ix`` method, you
may select along more than one axis using boolean vectors combined with other
indexing expressions.
-Indexing a DataFrame with a boolean DataFrame
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Where and Masking
+~~~~~~~~~~~~~~~~~
-You may wish to set values on a DataFrame based on some boolean criteria
-derived from itself or another DataFrame or set of DataFrames. This can be done
-intuitively like so:
+Selecting values from a Series with a boolean vector in the *[]* returns a subset of the rows.
+The method `where` allows selection that preserves the original data shape (and is a copy).
.. ipython:: python
+ # return only the selected rows
+ s[s > 0]
+
+ # return a Series of the same shape as the original
+ s.where(s > 0)
+
+Selecting values from a DataFrame with a boolean criterion in the *[]*, that is the same shape as
+the original DataFrame, returns a similarly sized DataFrame (and is a copy). `where` is used under the hood as the implementation.
+
+.. ipython:: python
+
+ # return a DataFrame of the same shape as the original
+   # this is equivalent to `df.where(df < 0)`
+ df[df < 0]
+
+In addition, `where` takes an optional `other` argument for replacement of values where the
+condition is False, in the returned copy.
+
+.. ipython:: python
+
+ df.where(df < 0, -df)
+
+You may wish to set values based on some boolean criteria.
+This can be done intuitively like so:
+
+.. ipython:: python
+
+ s2 = s.copy()
+ s2[s2 < 0] = 0
+ s2
+
df2 = df.copy()
- df2 < 0
df2[df2 < 0] = 0
df2
-Note that such an operation requires that the boolean DataFrame is indexed
-exactly the same.
+Furthermore, `where` aligns the input boolean condition (ndarray or DataFrame), such that partial selection
+with setting is possible. This is analogous to partial setting via `.ix` (but on the contents rather than the axis labels)
+
+.. ipython:: python
+
+ df2 = df.copy()
+ df2[ df2[1:4] > 0 ] = 3
+ df2
+
+`mask` is the inverse boolean operation of `where`.
+
+.. ipython:: python
+
+ s.mask(s >= 0)
+ df.mask(df >= 0)
Take Methods
| updated docs in the indexing section for new Series/DataFrame methods where and mask
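For reference, the semantics being documented can be sketched with plain lists standing in for Series (this is an illustration of the behavior only, not the pandas implementation; `NA`, `where`, and `mask` here are local stand-ins):

```python
NA = float('nan')

def where(values, cond, other=NA):
    # keep the original shape; substitute `other` where cond is False
    return [v if c else other for v, c in zip(values, cond)]

def mask(values, cond, other=NA):
    # mask is the boolean inverse of where
    return where(values, [not c for c in cond], other)

s = [-2, -1, 1, 2]
where(s, [v > 0 for v in s], other=0)   # [0, 0, 1, 2]  (shape preserved)
mask(s, [v >= 0 for v in s], other=0)   # [-2, -1, 0, 0]
```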
| https://api.github.com/repos/pandas-dev/pandas/pulls/2343 | 2012-11-24T15:05:53Z | 2012-11-24T22:16:02Z | null | 2012-11-24T22:16:02Z |
Update doc/source/r_interface.rst | diff --git a/doc/source/r_interface.rst b/doc/source/r_interface.rst
index 88f4810109936..d375b3da38d82 100644
--- a/doc/source/r_interface.rst
+++ b/doc/source/r_interface.rst
@@ -15,10 +15,14 @@ rpy2 / R interface
If your computer has R and rpy2 (> 2.2) installed (which will be left to the
reader), you will be able to leverage the below functionality. On Windows,
doing this is quite an ordeal at the moment, but users on Unix-like systems
-should find it quite easy. rpy2 evolves in time and the current interface is
-designed for the 2.2.x series, and we recommend to use over other series
-unless you are prepared to fix parts of the code. Released packages are available
-in PyPi, but should the latest code in the 2.2.x series be wanted it can be obtained with:
+should find it quite easy. rpy2 evolves in time, and is currently reaching
+its release 2.3, while the current interface is
+designed for the 2.2.x series. We recommend to use 2.2.x over other series
+unless you are prepared to fix parts of the code, yet the rpy2-2.3.0
+introduces improvements such as a better R-Python bridge memory management
+layer, so it might be a good idea to bite the bullet and submit patches for
+the few minor differences that need to be fixed.
+
::
| rpy2-2.3.0 is becoming the current release.
Changes to the API are relatively few.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2339 | 2012-11-24T02:17:59Z | 2012-11-25T05:33:38Z | null | 2012-11-25T05:33:38Z |
Where/mask methods for Series | diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index a77e2c928abfa..6eb141930f274 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -231,22 +231,49 @@ Note, with the :ref:`advanced indexing <indexing.advanced>` ``ix`` method, you
may select along more than one axis using boolean vectors combined with other
indexing expressions.
-Indexing a DataFrame with a boolean DataFrame
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Where and Masking
+~~~~~~~~~~~~~~~~~
-You may wish to set values on a DataFrame based on some boolean criteria
-derived from itself or another DataFrame or set of DataFrames. This can be done
-intuitively like so:
+Selecting values from a DataFrame is accomplished in a similar manner to a Series.
+You index the Frame with a boolean DataFrame of the same size. This is accomplished
+via the method `where` under the hood. The returned copy of the DataFrame is the
+same size as the original.
+
+.. ipython:: python
+
+ df < 0
+ df[df < 0]
+
+In addition, `where` takes an optional `other` argument for replacement in the
+returned copy.
+
+.. ipython:: python
+
+ df.where(df < 0, -df)
+
+You may wish to set values on a DataFrame based on some boolean criteria.
+This can be done intuitively like so:
.. ipython:: python
df2 = df.copy()
- df2 < 0
df2[df2 < 0] = 0
df2
-Note that such an operation requires that the boolean DataFrame is indexed
-exactly the same.
+Furthermore, `where` aligns the input boolean condition (ndarray or DataFrame), such that partial selection
+with setting is possible. This is analogous to partial setting via `.ix` (but on the contents rather than the axis labels)
+
+.. ipython:: python
+
+ df2 = df.copy()
+ df2[ df2[1:4] > 0 ] = 3
+ df2
+
+`DataFrame.mask` is the inverse boolean operation of `where`.
+
+.. ipython:: python
+
+ df.mask(df >= 0)
Take Methods
diff --git a/doc/source/io.rst b/doc/source/io.rst
index f74120ad7ef57..76bd123acf8aa 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1,3 +1,4 @@
+
.. _io:
.. currentmodule:: pandas
@@ -812,8 +813,114 @@ In a current or later Python session, you can retrieve stored objects:
os.remove('store.h5')
-.. Storing in Table format
-.. ~~~~~~~~~~~~~~~~~~~~~~~
+Storing in Table format
+~~~~~~~~~~~~~~~~~~~~~~~
+
+```HDFStore``` supports another *PyTables* format on disk, the *table* format. Conceptually a *table* is shaped
+very much like a DataFrame, with rows and columns. A *table* may be appended to in the same or other sessions.
+In addition, delete and query operations are supported. You can create an index with ```create_table_index```
+after data is already in the table (this may become automatic in the future or an option on appending/putting a *table*).
+
+.. ipython:: python
+ :suppress:
+ :okexcept:
+
+ os.remove('store.h5')
+
+.. ipython:: python
+
+ store = HDFStore('store.h5')
+ df1 = df[0:4]
+ df2 = df[4:]
+ store.append('df', df1)
+ store.append('df', df2)
+
+ store.select('df')
+
+ store.create_table_index('df')
+ store.handle.root.df.table
+
+.. ipython:: python
+ :suppress:
+
+ store.close()
+ import os
+ os.remove('store.h5')
+
+
+Querying objects stored in Table format
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+`select` and `delete` operations accept an optional criterion that can be specified to select/delete only
+a subset of the data. This allows one to have a very large on-disk table and retrieve only a portion of the data.
+
+A query is specified using the `Term` class under the hood.
+
+ - 'index' refers to the index of a DataFrame
+ - 'major_axis' and 'minor_axis' are supported indexers of the Panel
+
+The following are all valid terms.
+
+.. code-block:: python
+
+ dict(field = 'index', op = '>', value = '20121114')
+ ('index', '>', '20121114')
+ 'index>20121114'
+ ('index', '>', datetime(2012,11,14))
+
+ ('index', ['20121114','20121115'])
+ ('major', Timestamp('2012/11/14'))
+ ('minor_axis', ['A','B'])
+
+Queries are built up (currently only *and* is supported) using a list. An example query for a panel might be specified as follows:
+
+.. code-block:: python
+
+ ['major_axis>20121114', ('minor_axis', ['A','B']) ]
+
+This is roughly translated to: major_axis must be greater than the date 20121114 and the minor_axis must be A or B
+
+.. ipython:: python
+
+ store = HDFStore('store.h5')
+ store.append('wp',wp)
+ store.select('wp',[ 'major_axis>20000102', ('minor_axis', ['A','B']) ])
+
+Delete objects stored in Table format
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. ipython:: python
+
+ store.remove('wp', 'index>20000102' )
+ store.select('wp')
+
+.. ipython:: python
+ :suppress:
+
+ store.close()
+ import os
+ os.remove('store.h5')
-.. Querying objects stored in Table format
-.. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Notes & Caveats
+~~~~~~~~~~~~~~~
+
+ - Selection by items (the top level panel dimension) is not possible; you always get all of the items in the returned Panel
+ - Currently the sizes of the *column* items are governed by the first table creation
+ (this should be specified at creation time or use the largest available) - otherwise subsequent appends can truncate the column names
+ - Mixed-Type Panels/DataFrames are not currently supported - coming soon!
+ - Once a *table* is created its items (Panel) / columns (DataFrame) are fixed; only exactly the same columns can be appended
+ - Appending to an already existing table will raise an exception if any of the indexers (index,major_axis or minor_axis) are strings
+ and they would be truncated because the column size is too small (you can pass ```min_itemsize``` to append to provide a larger fixed size
+ to compensate)
+
+Performance
+~~~~~~~~~~~
+
+ - To delete a lot of data, it is sometimes better to erase the table and rewrite it (after say an indexing operation)
+ *PyTables* tends to increase the file size with deletions
+ - In general it is best to store Panels with the most frequently selected dimension in the minor axis and a time/date like dimension in the major axis
+ but this is not required, major_axis and minor_axis can be any valid Panel index
+ - No dimensions are currently indexed automagically (in the *PyTables* sense); these require an explicit call to ```create_table_index```
+ - *Tables* offer better performance when compressed after writing them (as opposed to turning on compression at the very beginning)
+   use the PyTables utility ptrepack to rewrite the file (and also to change compression methods)
+ - Duplicate rows can be written, but are filtered out in selection (with the last items being selected; thus a table is unique on major, minor pairs)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index a798915cb9681..d882a147f5395 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -562,6 +562,44 @@ def _get_values(self, indexer):
except Exception:
return self.values[indexer]
+ def where(self, cond, other=nan, inplace=False):
+ """
+ Return a Series where cond is True; otherwise values are from other
+
+ Parameters
+ ----------
+ cond: boolean Series or array
+ other: scalar or Series
+
+ Returns
+ -------
+ wh: Series
+ """
+ if not hasattr(cond, 'shape'):
+ raise ValueError('where requires an ndarray like object for its '
+ 'condition')
+
+ if inplace:
+ self._set_with(~cond, other)
+ return self
+
+ return self._get_values(cond).reindex_like(self).fillna(other)
+
+ def mask(self, cond):
+ """
+ Returns copy of self whose values are replaced with nan if the
+ inverted condition is True
+
+ Parameters
+ ----------
+ cond: boolean Series or array
+
+ Returns
+ -------
+ wh: Series
+ """
+ return self.where(~cond, nan)
+
def __setitem__(self, key, value):
try:
try:
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index af480b5a6457f..bc8967973808e 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -7,6 +7,7 @@
from datetime import datetime, date
import time
+import re
import numpy as np
from pandas import (
@@ -67,12 +68,20 @@
# oh the troubles to reduce import time
_table_mod = None
+_table_supports_index = True
def _tables():
global _table_mod
+ global _table_supports_index
if _table_mod is None:
import tables
_table_mod = tables
+
+ # version requirements
+ major, minor, subv = tables.__version__.split('.')
+ if major >= 2 and minor >= 3:
+ _table_supports_index = True
+
return _table_mod
@@ -321,7 +330,7 @@ def select(self, key, where=None):
return self._read_group(group, where)
def put(self, key, value, table=False, append=False,
- compression=None):
+ compression=None, **kwargs):
"""
Store object in HDFStore
@@ -342,7 +351,7 @@ def put(self, key, value, table=False, append=False,
be used.
"""
self._write_to_group(key, value, table=table, append=append,
- comp=compression)
+ comp=compression, **kwargs)
def _get_handler(self, op, kind):
return getattr(self, '_%s_%s' % (op, kind))
@@ -370,7 +379,7 @@ def remove(self, key, where=None):
if group is not None:
self._delete_from_table(group, where)
- def append(self, key, value):
+ def append(self, key, value, **kwargs):
"""
Append to Table in file. Node must already exist and be Table
format.
@@ -385,10 +394,58 @@ def append(self, key, value):
Does *not* check if data being appended overlaps with existing
data in the table, so be careful
"""
- self._write_to_group(key, value, table=True, append=True)
+ self._write_to_group(key, value, table=True, append=True, **kwargs)
+
+ def create_table_index(self, key, columns = None, optlevel = None, kind = None):
+ """
+ Create a pytables index on the specified columns
+ note: cannot index Time64Col() currently; PyTables must be >= 2.3.1
+
+
+ Paramaters
+ ----------
+ key : object (the node to index)
+ columns : None or list_like (the columns to index - currently supports index/column)
+ optlevel: optimization level (defaults to 6)
+ kind : kind of index (defaults to 'medium')
+
+ Exceptions
+ ----------
+ raises if the node is not a table
+
+ """
+
+ # version requirements
+ if not _table_supports_index:
+ raise("PyTables >= 2.3 is required for table indexing")
+
+ group = getattr(self.handle.root, key, None)
+ if group is None: return
+
+ if not _is_table_type(group):
+ raise Exception("cannot create table index on a non-table")
+
+ table = getattr(group, 'table', None)
+ if table is None: return
+
+ if columns is None:
+ columns = ['index']
+ if not isinstance(columns, (tuple,list)):
+ columns = [ columns ]
+
+ kw = dict()
+ if optlevel is not None:
+ kw['optlevel'] = optlevel
+ if kind is not None:
+ kw['kind'] = kind
+
+ for c in columns:
+ v = getattr(table.cols,c,None)
+ if v is not None and not v.is_indexed:
+ v.createIndex(**kw)
def _write_to_group(self, key, value, table=False, append=False,
- comp=None):
+ comp=None, **kwargs):
root = self.handle.root
if key not in root._v_children:
group = self.handle.createGroup(root, key)
@@ -400,7 +457,7 @@ def _write_to_group(self, key, value, table=False, append=False,
kind = '%s_table' % kind
handler = self._get_handler(op='write', kind=kind)
wrapper = lambda value: handler(group, value, append=append,
- comp=comp)
+ comp=comp, **kwargs)
else:
if append:
raise ValueError('Can only append to Tables')
@@ -530,7 +587,7 @@ def _read_block_manager(self, group):
return BlockManager(blocks, axes)
- def _write_frame_table(self, group, df, append=False, comp=None):
+ def _write_frame_table(self, group, df, append=False, comp=None, **kwargs):
mat = df.values
values = mat.reshape((1,) + mat.shape)
@@ -540,7 +597,7 @@ def _write_frame_table(self, group, df, append=False, comp=None):
self._write_table(group, items=['value'],
index=df.index, columns=df.columns,
- values=values, append=append, compression=comp)
+ values=values, append=append, compression=comp, **kwargs)
def _write_wide(self, group, panel):
panel._consolidate_inplace()
@@ -549,10 +606,10 @@ def _write_wide(self, group, panel):
def _read_wide(self, group, where=None):
return Panel(self._read_block_manager(group))
- def _write_wide_table(self, group, panel, append=False, comp=None):
+ def _write_wide_table(self, group, panel, append=False, comp=None, **kwargs):
self._write_table(group, items=panel.items, index=panel.major_axis,
columns=panel.minor_axis, values=panel.values,
- append=append, compression=comp)
+ append=append, compression=comp, **kwargs)
def _read_wide_table(self, group, where=None):
return self._read_panel_table(group, where)
@@ -569,10 +626,10 @@ def _write_index(self, group, key, index):
self._write_sparse_intindex(group, key, index)
else:
setattr(group._v_attrs, '%s_variety' % key, 'regular')
- converted, kind, _ = _convert_index(index)
- self._write_array(group, key, converted)
+ converted = _convert_index(index).set_name('index')
+ self._write_array(group, key, converted.values)
node = getattr(group, key)
- node._v_attrs.kind = kind
+ node._v_attrs.kind = converted.kind
node._v_attrs.name = index.name
if isinstance(index, (DatetimeIndex, PeriodIndex)):
@@ -629,11 +686,11 @@ def _write_multi_index(self, group, key, index):
index.labels,
index.names)):
# write the level
- conv_level, kind, _ = _convert_index(lev)
level_key = '%s_level%d' % (key, i)
- self._write_array(group, level_key, conv_level)
+ conv_level = _convert_index(lev).set_name(level_key)
+ self._write_array(group, level_key, conv_level.values)
node = getattr(group, level_key)
- node._v_attrs.kind = kind
+ node._v_attrs.kind = conv_level.kind
node._v_attrs.name = name
# write the name
@@ -738,22 +795,28 @@ def _write_array(self, group, key, value):
getattr(group, key)._v_attrs.transposed = transposed
def _write_table(self, group, items=None, index=None, columns=None,
- values=None, append=False, compression=None):
+ values=None, append=False, compression=None,
+ min_itemsize = None, **kwargs):
""" need to check for conform to the existing table:
e.g. columns should match """
- # create dict of types
- index_converted, index_kind, index_t = _convert_index(index)
- columns_converted, cols_kind, col_t = _convert_index(columns)
+
+ # create Col types
+ index_converted = _convert_index(index).set_name('index')
+ columns_converted = _convert_index(columns).set_name('column')
# create the table if it doesn't exist (or get it if it does)
if not append:
if 'table' in group:
self.handle.removeNode(group, 'table')
+ else:
+ # check that we are not truncating on our indicies
+ index_converted.maybe_set(min_itemsize = min_itemsize)
+ columns_converted.maybe_set(min_itemsize = min_itemsize)
if 'table' not in group:
# create the table
- desc = {'index': index_t,
- 'column': col_t,
+ desc = {'index' : index_converted.typ,
+ 'column': columns_converted.typ,
'values': _tables().FloatCol(shape=(len(values)))}
options = {'name': 'table',
@@ -775,16 +838,20 @@ def _write_table(self, group, items=None, index=None, columns=None,
# the table must already exist
table = getattr(group, 'table', None)
+ # check that we are not truncating on our indicies
+ index_converted.validate(table)
+ columns_converted.validate(table)
+
# check for backwards incompatibility
if append:
- existing_kind = table._v_attrs.index_kind
- if existing_kind != index_kind:
+ existing_kind = getattr(table._v_attrs,'index_kind',None)
+ if existing_kind is not None and existing_kind != index_converted.kind:
raise TypeError("incompatible kind in index [%s - %s]" %
- (existing_kind, index_kind))
+ (existing_kind, index_converted.kind))
# add kinds
- table._v_attrs.index_kind = index_kind
- table._v_attrs.columns_kind = cols_kind
+ table._v_attrs.index_kind = index_converted.kind
+ table._v_attrs.columns_kind = columns_converted.kind
if append:
existing_fields = getattr(table._v_attrs, 'fields', None)
if (existing_fields is not None and
@@ -916,35 +983,90 @@ def _read_panel_table(self, group, where=None):
lp = DataFrame(new_values, index=new_index, columns=lp.columns)
wp = lp.to_panel()
- if sel.column_filter:
- new_minor = sorted(set(wp.minor_axis) & sel.column_filter)
+ if sel.filter:
+ new_minor = sorted(set(wp.minor_axis) & sel.filter)
wp = wp.reindex(minor=new_minor)
return wp
- def _delete_from_table(self, group, where = None):
+ def _delete_from_table(self, group, where):
+ """ delete rows from a group where condition is True """
table = getattr(group, 'table')
# create the selection
- s = Selection(table, where, table._v_attrs.index_kind)
+ s = Selection(table,where,table._v_attrs.index_kind)
s.select_coords()
# delete the rows in reverse order
- l = list(s.values)
- l.reverse()
- for c in l:
- table.removeRows(c)
- self.handle.flush()
- return len(s.values)
+ l = list(s.values)
+ ln = len(l)
+
+ if ln:
+
+ # if we can do a consecutive removal - do it!
+ if l[0]+ln-1 == l[-1]:
+ table.removeRows(start = l[0], stop = l[-1]+1)
+
+ # one by one
+ else:
+ l.reverse()
+ for c in l:
+ table.removeRows(c)
+
+ self.handle.flush()
+ # return the number of rows removed
+ return ln
+
+class Col(object):
+ """ a column description class
+
+ Parameters
+ ----------
+
+ values : the ndarray like converted values
+ kind : a string description of this type
+ typ : the pytables type
+
+ """
+
+ def __init__(self, values, kind, typ, itemsize = None, **kwargs):
+ self.values = values
+ self.kind = kind
+ self.typ = typ
+ self.itemsize = itemsize
+ self.name = None
+
+ def set_name(self, n):
+ self.name = n
+ return self
+
+ def __iter__(self):
+ return iter(self.values)
+
+ def maybe_set(self, min_itemsize = None, **kwargs):
+ """ maybe set a string col itemsize """
+ if self.kind == 'string' and min_itemsize is not None:
+ if self.typ.itemsize < min_itemsize:
+ self.typ = _tables().StringCol(itemsize = min_itemsize, pos = getattr(self.typ,'pos',None))
+
+ def validate(self, table, **kwargs):
+ """ validate this column for string truncation (or reset to the max size) """
+ if self.kind == 'string':
+
+ # the current column name
+ t = getattr(table.description,self.name,None)
+ if t is not None:
+ if t.itemsize < self.itemsize:
+ raise Exception("[%s] column has a min_itemsize of [%s] but itemsize [%s] is required!" % (self.name,self.itemsize,t.itemsize))
def _convert_index(index):
if isinstance(index, DatetimeIndex):
converted = index.asi8
- return converted, 'datetime64', _tables().Int64Col()
+ return Col(converted, 'datetime64', _tables().Int64Col())
elif isinstance(index, (Int64Index, PeriodIndex)):
atom = _tables().Int64Col()
- return index.values, 'integer', atom
+ return Col(index.values, 'integer', atom)
if isinstance(index, MultiIndex):
raise Exception('MultiIndex not supported here!')
@@ -955,33 +1077,33 @@ def _convert_index(index):
if inferred_type == 'datetime64':
converted = values.view('i8')
- return converted, 'datetime64', _tables().Int64Col()
+ return Col(converted, 'datetime64', _tables().Int64Col())
elif inferred_type == 'datetime':
converted = np.array([(time.mktime(v.timetuple()) +
v.microsecond / 1E6) for v in values],
dtype=np.float64)
- return converted, 'datetime', _tables().Time64Col()
+ return Col(converted, 'datetime', _tables().Time64Col())
elif inferred_type == 'date':
converted = np.array([time.mktime(v.timetuple()) for v in values],
dtype=np.int32)
- return converted, 'date', _tables().Time32Col()
+ return Col(converted, 'date', _tables().Time32Col())
elif inferred_type == 'string':
converted = np.array(list(values), dtype=np.str_)
itemsize = converted.dtype.itemsize
- return converted, 'string', _tables().StringCol(itemsize)
+ return Col(converted, 'string', _tables().StringCol(itemsize), itemsize = itemsize)
elif inferred_type == 'unicode':
atom = _tables().ObjectAtom()
- return np.asarray(values, dtype='O'), 'object', atom
+ return Col(np.asarray(values, dtype='O'), 'object', atom)
elif inferred_type == 'integer':
# take a guess for now, hope the values fit
atom = _tables().Int64Col()
- return np.asarray(values, dtype=np.int64), 'integer', atom
+ return Col(np.asarray(values, dtype=np.int64), 'integer', atom)
elif inferred_type == 'floating':
atom = _tables().Float64Col()
- return np.asarray(values, dtype=np.float64), 'float', atom
+ return Col(np.asarray(values, dtype=np.float64), 'float', atom)
else: # pragma: no cover
atom = _tables().ObjectAtom()
- return np.asarray(values, dtype='O'), 'object', atom
+ return Col(np.asarray(values, dtype='O'), 'object', atom)
def _read_array(group, key):
@@ -1088,6 +1210,151 @@ def _alias_to_class(alias):
return _reverse_index_map.get(alias, Index)
+class Term(object):
+ """ create a term object that holds a field, op, and value
+
+ Parameters
+ ----------
+ field : dict, string term expression, or the field to operate (must be a valid index/column type of DataFrame/Panel)
+ op : a valid op (defaults to '=') (optional)
+ >, >=, <, <=, =, != (not equal) are allowed
+ value : a value or list of values (required)
+
+ Returns
+ -------
+ a Term object
+
+ Examples
+ --------
+ Term(dict(field = 'index', op = '>', value = '20121114'))
+ Term('index', '20121114')
+ Term('index', '>', '20121114')
+ Term('index', ['20121114','20121114'])
+ Term('index', datetime(2012,11,14))
+ Term('major>20121114')
+ Term('minor', ['A','B'])
+
+ """
+
+ _ops = ['<','<=','>','>=','=','!=']
+ _search = re.compile("^(?P<field>\w+)(?P<op>%s)(?P<value>.+)$" % '|'.join(_ops))
+ _index = ['index','major_axis','major']
+ _column = ['column','minor_axis','minor']
+
+ def __init__(self, field, op = None, value = None, index_kind = None):
+ self.field = None
+ self.op = None
+ self.value = None
+ self.index_kind = index_kind
+ self.filter = None
+ self.condition = None
+
+ # unpack lists/tuples in field
+ if isinstance(field,(tuple,list)):
+ f = field
+ field = f[0]
+ if len(f) > 1:
+ op = f[1]
+ if len(f) > 2:
+ value = f[2]
+
+ # backwards compatible
+ if isinstance(field, dict):
+ self.field = field.get('field')
+ self.op = field.get('op') or '='
+ self.value = field.get('value')
+
+ # passed a term
+ elif isinstance(field,Term):
+ self.field = field.field
+ self.op = field.op
+ self.value = field.value
+
+ # a string expression (or just the field)
+ elif isinstance(field,basestring):
+
+ # is a term is passed
+ s = self._search.match(field)
+ if s is not None:
+ self.field = s.group('field')
+ self.op = s.group('op')
+ self.value = s.group('value')
+
+ else:
+ self.field = field
+
+ # is an op passed?
+ if isinstance(op, basestring) and op in self._ops:
+ self.op = op
+ self.value = value
+ else:
+ self.op = '='
+ self.value = op
+
+ else:
+ raise Exception("Term does not understand the supplied field [%s]" % field)
+
+ # we have valid fields
+ if self.field is None or self.op is None or self.value is None:
+ raise Exception("Could not create this term [%s]" % str(self))
+
+ # valid field name
+ if self.field in self._index:
+ self.field = 'index'
+ elif self.field in self._column:
+ self.field = 'column'
+ else:
+ raise Exception("field is not a valid index/column for this term [%s]" % str(self))
+
+ # we have valid conditions
+ if self.op in ['>','>=','<','<=']:
+ if hasattr(self.value,'__iter__') and len(self.value) > 1:
+ raise Exception("an inequality condition cannot have multiple values [%s]" % str(self))
+
+ if not hasattr(self.value,'__iter__'):
+ self.value = [ self.value ]
+
+ self.eval()
+
+ def __str__(self):
+ return "field->%s,op->%s,value->%s" % (self.field,self.op,self.value)
+
+ __repr__ = __str__
+
+ def eval(self):
+ """ set the numexpr expression for this term """
+
+ # convert values
+ values = [ self.convert_value(v) for v in self.value ]
+
+ # equality conditions
+ if self.op in ['=','!=']:
+
+ # too many values to create the expression?
+ if len(values) <= 61:
+ self.condition = "(%s)" % ' | '.join([ "(%s == %s)" % (self.field,v[0]) for v in values])
+
+ # use a filter after reading
+ else:
+ self.filter = set([ v[1] for v in values ])
+
+ else:
+
+ self.condition = '(%s %s %s)' % (self.field, self.op, values[0][0])
+
+ def convert_value(self, v):
+
+ if self.field == 'index':
+ if self.index_kind == 'datetime64' :
+ return [lib.Timestamp(v).value, None]
+ elif isinstance(v, datetime):
+ return [time.mktime(v.timetuple()), None]
+ elif not isinstance(v, basestring):
+ return [str(v), None]
+
+ # string quoting
+ return ["'" + v + "'", v]
+
class Selection(object):
"""
Carries out a selection operation on a tables.Table object.
@@ -1095,72 +1362,43 @@ class Selection(object):
Parameters
----------
table : tables.Table
- where : list of dicts of the following form
-
- Comparison op
- {'field' : 'index',
- 'op' : '>=',
- 'value' : value}
+ where : list of Terms (or convertible to)
- Match single value
- {'field' : 'index',
- 'value' : v1}
-
- Match a set of values
- {'field' : 'index',
- 'value' : [v1, v2, v3]}
"""
def __init__(self, table, where=None, index_kind=None):
- self.table = table
- self.where = where
+ self.table = table
+ self.where = where
self.index_kind = index_kind
- self.column_filter = None
- self.the_condition = None
- self.conditions = []
- self.values = None
- if where:
- self.generate(where)
+ self.values = None
+ self.condition = None
+ self.filter = None
+ self.terms = self.generate(where)
+
+ # create the numexpr & the filter
+ if self.terms:
+ conds = [ t.condition for t in self.terms if t.condition is not None ]
+ if len(conds):
+ self.condition = "(%s)" % ' & '.join(conds)
+ self.filter = set()
+ for t in self.terms:
+ if t.filter is not None:
+ self.filter |= t.filter
def generate(self, where):
- # and condictions
- for c in where:
- op = c.get('op', None)
- value = c['value']
- field = c['field']
-
- if field == 'index' and self.index_kind == 'datetime64':
- val = lib.Timestamp(value).value
- self.conditions.append('(%s %s %s)' % (field, op, val))
- elif field == 'index' and isinstance(value, datetime):
- value = time.mktime(value.timetuple())
- self.conditions.append('(%s %s %s)' % (field, op, value))
- else:
- self.generate_multiple_conditions(op, value, field)
+ """ generate and return the terms """
+ if where is None: return None
- if len(self.conditions):
- self.the_condition = '(' + ' & '.join(self.conditions) + ')'
+ if not isinstance(where, (list,tuple)):
+ where = [ where ]
- def generate_multiple_conditions(self, op, value, field):
-
- if op and op == 'in' or isinstance(value, (list, np.ndarray)):
- if len(value) <= 61:
- l = '(' + ' | '.join([ "(%s == '%s')" % (field, v)
- for v in value]) + ')'
- self.conditions.append(l)
- else:
- self.column_filter = set(value)
- else:
- if op is None:
- op = '=='
- self.conditions.append('(%s %s "%s")' % (field, op, value))
+ return [ Term(c, index_kind = self.index_kind) for c in where ]
def select(self):
"""
generate the selection
"""
- if self.the_condition:
- self.values = self.table.readWhere(self.the_condition)
-
+ if self.condition is not None:
+ self.values = self.table.readWhere(self.condition)
else:
self.values = self.table.read()
@@ -1168,7 +1406,7 @@ def select_coords(self):
"""
generate the selection
"""
- self.values = self.table.getWhereList(self.the_condition)
+ self.values = self.table.getWhereList(self.condition)
def _get_index_factory(klass):
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 9442f274a7810..30bc9d4ed8ba1 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -10,10 +10,11 @@
from pandas import (Series, DataFrame, Panel, MultiIndex, bdate_range,
date_range, Index)
-from pandas.io.pytables import HDFStore, get_store
+from pandas.io.pytables import HDFStore, get_store, Term
import pandas.util.testing as tm
from pandas.tests.test_series import assert_series_equal
from pandas.tests.test_frame import assert_frame_equal
+from pandas import concat
try:
import tables
@@ -142,10 +143,48 @@ def test_put_integer(self):
def test_append(self):
df = tm.makeTimeDataFrame()
- self.store.put('c', df[:10], table=True)
+ self.store.append('c', df[:10])
self.store.append('c', df[10:])
tm.assert_frame_equal(self.store['c'], df)
+ self.store.put('d', df[:10], table=True)
+ self.store.append('d', df[10:])
+ tm.assert_frame_equal(self.store['d'], df)
+
+ def test_append_with_strings(self):
+ wp = tm.makePanel()
+ wp2 = wp.rename_axis(dict([ (x,"%s_extra" % x) for x in wp.minor_axis ]), axis = 2)
+
+ self.store.append('s1', wp, min_itemsize = 20)
+ self.store.append('s1', wp2)
+ expected = concat([ wp, wp2], axis = 2)
+ expected = expected.reindex(minor_axis = sorted(expected.minor_axis))
+ tm.assert_panel_equal(self.store['s1'], expected)
+
+ # test truncation of bigger strings
+ self.store.append('s2', wp)
+ self.assertRaises(Exception, self.store.append, 's2', wp2)
+
+ def test_create_table_index(self):
+ wp = tm.makePanel()
+ self.store.append('p5', wp)
+ self.store.create_table_index('p5')
+
+ assert(self.store.handle.root.p5.table.cols.index.is_indexed == True)
+ assert(self.store.handle.root.p5.table.cols.column.is_indexed == False)
+
+ df = tm.makeTimeDataFrame()
+ self.store.append('f', df[:10])
+ self.store.append('f', df[10:])
+ self.store.create_table_index('f')
+
+ # create twice
+ self.store.create_table_index('f')
+
+ # try to index a non-table
+ self.store.put('f2', df)
+ self.assertRaises(Exception, self.store.create_table_index, 'f2')
+
def test_append_diff_item_order(self):
wp = tm.makePanel()
wp1 = wp.ix[:, :10, :]
@@ -177,11 +216,7 @@ def test_remove(self):
self.assertEquals(len(self.store), 0)
def test_remove_where_not_exist(self):
- crit1 = {
- 'field' : 'index',
- 'op' : '>',
- 'value' : 'foo'
- }
+ crit1 = Term('index','>','foo')
self.store.remove('a', where=[crit1])
def test_remove_crit(self):
@@ -189,21 +224,60 @@ def test_remove_crit(self):
self.store.put('wp', wp, table=True)
date = wp.major_axis[len(wp.major_axis) // 2]
- crit1 = {
- 'field' : 'index',
- 'op' : '>',
- 'value' : date
- }
- crit2 = {
- 'field' : 'column',
- 'value' : ['A', 'D']
- }
+ crit1 = Term('index','>',date)
+ crit2 = Term('column',['A', 'D'])
self.store.remove('wp', where=[crit1])
self.store.remove('wp', where=[crit2])
result = self.store['wp']
expected = wp.truncate(after=date).reindex(minor=['B', 'C'])
tm.assert_panel_equal(result, expected)
+ # test non-consecutive row removal
+ wp = tm.makePanel()
+ self.store.put('wp2', wp, table=True)
+
+ date1 = wp.major_axis[1:3]
+ date2 = wp.major_axis[5]
+ date3 = [wp.major_axis[7],wp.major_axis[9]]
+
+ crit1 = Term('index',date1)
+ crit2 = Term('index',date2)
+ crit3 = Term('index',date3)
+
+ self.store.remove('wp2', where=[crit1])
+ self.store.remove('wp2', where=[crit2])
+ self.store.remove('wp2', where=[crit3])
+ result = self.store['wp2']
+
+ ma = list(wp.major_axis)
+ for d in date1:
+ ma.remove(d)
+ ma.remove(date2)
+ for d in date3:
+ ma.remove(d)
+ expected = wp.reindex(major = ma)
+ tm.assert_panel_equal(result, expected)
+
+ def test_terms(self):
+
+ Term(dict(field = 'index', op = '>', value = '20121114'))
+ Term('index', '20121114')
+ Term('index', '>', '20121114')
+ Term('index', ['20121114','20121114'])
+ Term('index', datetime(2012,11,14))
+ Term('index>20121114')
+ Term('major>20121114')
+ Term('major_axis>20121114')
+ Term('minor', ['A','B'])
+ Term('minor_axis', ['A','B'])
+ Term('column', ['A','B'])
+
+ self.assertRaises(Exception, Term.__init__)
+ self.assertRaises(Exception, Term.__init__, 'blah')
+ self.assertRaises(Exception, Term.__init__, 'index')
+ self.assertRaises(Exception, Term.__init__, 'index', '==')
+ self.assertRaises(Exception, Term.__init__, 'index', '>', 5)
+
def test_series(self):
s = tm.makeStringSeries()
self._check_roundtrip(s, tm.assert_series_equal)
@@ -528,15 +602,8 @@ def test_panel_select(self):
self.store.put('wp', wp, table=True)
date = wp.major_axis[len(wp.major_axis) // 2]
- crit1 = {
- 'field' : 'index',
- 'op' : '>=',
- 'value' : date
- }
- crit2 = {
- 'field' : 'column',
- 'value' : ['A', 'D']
- }
+ crit1 = ('index','>=',date)
+ crit2 = ('column', ['A', 'D'])
result = self.store.select('wp', [crit1, crit2])
expected = wp.truncate(before=date).reindex(minor=['A', 'D'])
@@ -547,19 +614,9 @@ def test_frame_select(self):
self.store.put('frame', df, table=True)
date = df.index[len(df) // 2]
- crit1 = {
- 'field' : 'index',
- 'op' : '>=',
- 'value' : date
- }
- crit2 = {
- 'field' : 'column',
- 'value' : ['A', 'D']
- }
- crit3 = {
- 'field' : 'column',
- 'value' : 'A'
- }
+ crit1 = ('index','>=',date)
+ crit2 = ('column',['A', 'D'])
+ crit3 = ('column','A')
result = self.store.select('frame', [crit1, crit2])
expected = df.ix[date:, ['A', 'D']]
@@ -580,10 +637,7 @@ def test_select_filter_corner(self):
df.columns = ['%.3d' % c for c in df.columns]
self.store.put('frame', df, table=True)
- crit = {
- 'field' : 'column',
- 'value' : df.columns[:75]
- }
+ crit = Term('column', df.columns[:75])
result = self.store.select('frame', [crit])
tm.assert_frame_equal(result, df.ix[:, df.columns[:75]])
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 7422c925fd657..a48e66d38b1c4 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -939,6 +939,37 @@ def test_ix_getitem_iterator(self):
result = self.series.ix[idx]
assert_series_equal(result, self.series[:10])
+ def test_where(self):
+ s = Series(np.random.randn(5))
+ cond = s > 0
+
+ rs = s.where(cond).dropna()
+ rs2 = s[cond]
+ assert_series_equal(rs, rs2)
+
+ rs = s.where(cond,-s)
+ assert_series_equal(rs, s.abs())
+
+ rs = s.where(cond)
+ assert(s.shape == rs.shape)
+
+ self.assertRaises(ValueError, s.where, 1)
+
+ def test_where_inplace(self):
+ s = Series(np.random.randn(5))
+ cond = s > 0
+
+ rs = s.copy()
+ rs.where(cond,inplace=True)
+ assert_series_equal(rs.dropna(), s[cond])
+
+ def test_mask(self):
+ s = Series(np.random.randn(5))
+ cond = s > 0
+
+ rs = s.where(cond, np.nan)
+ assert_series_equal(rs, s.mask(~cond))
+
def test_ix_setitem(self):
inds = self.series.index[[3,4,7]]
| add where and mask methods for Series, analogous to DataFrame methods added for 0.9.1
passes all tests
where is equivalent to: `s[cond].reindex_like(s).fillna(other)`
```
In [7]: s = pd.Series(np.random.rand(5))
In [8]: s
Out[8]:
0 0.638664
1 0.574688
2 0.460510
3 0.641840
4 0.044129
In [10]: s[0:2] = -s[0:2]
In [11]: s
Out[11]:
0 -0.638664
1 -0.574688
2 0.460510
3 0.641840
4 0.044129
```
boolean selection
```
In [12]: s[s>0]
Out[12]:
2 0.460510
3 0.641840
4 0.044129
In [13]: s.where(s>0)
Out[13]:
0 NaN
1 NaN
2 0.460510
3 0.641840
4 0.044129
In [14]: s.where(s>0,-s)
Out[14]:
0 0.638664
1 0.574688
2 0.460510
3 0.641840
4 0.044129
In [15]: s.mask(s<=0)
Out[15]:
0 NaN
1 NaN
2 0.460510
3 0.641840
4 0.044129
```
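The equivalence claimed above can be checked directly; this is a minimal sketch using a scalar `other` for simplicity (current pandas semantics assumed):

```python
import numpy as np
import pandas as pd

s = pd.Series([-0.5, 1.2, -3.0, 4.0])
cond = s > 0

# where() keeps values where cond holds and substitutes `other` elsewhere
res = s.where(cond, 0.0)

# the claimed equivalent spelling: select, re-align, fill the holes
equiv = s[cond].reindex_like(s).fillna(0.0)

assert res.equals(equiv)
```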
support setting as well (though not used anywhere explicitly)
```
In [16]: s2 = s.copy()
In [17]: s2.where(s2>0,inplace=True)
Out[17]:
0 NaN
1 NaN
2 0.460510
3 0.641840
4 0.044129
In [18]: s2
Out[18]:
0 NaN
1 NaN
2 0.460510
3 0.641840
4 0.044129
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/2337 | 2012-11-23T21:42:36Z | 2012-11-24T05:20:52Z | null | 2014-06-12T16:27:48Z |
VB: add test for iteritems performance | diff --git a/vb_suite/frame_methods.py b/vb_suite/frame_methods.py
index 08f041d835f76..7cf1fc6cf34c5 100644
--- a/vb_suite/frame_methods.py
+++ b/vb_suite/frame_methods.py
@@ -65,3 +65,14 @@
frame_boolean_row_select = Benchmark('df[bool_arr]', setup,
start_date=datetime(2011, 1, 1))
+
+#----------------------------------------------------------------------
+# iteritems (monitor no-copying behaviour)
+
+setup = common_setup + """
+df = DataFrame(randn(10000, 100))
+"""
+
+# as far back as the earliest test currently in the suite
+frame_iteritems = Benchmark('for name,col in df.iteritems(): pass', setup,
+ start_date=datetime(2010, 6, 1))
| catch the regression noted in #2273 next time.
```
frame_iteritems 24.85 3.12 7.98
Columns: test_name | head_time [ms] | baseline_time [ms] | ratio
- a Ratio of 1.30 means HEAD is 30% slower than the Baseline.
Head [59d318c] : revert
Baseline [bdbca8e] : BUG: fix borked data-copying iteritems performance affecting DataFrame and SparseDataFrame. close #2273
```
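The operation being monitored boils down to a plain column iteration; a rough standalone timing sketch (note that `iteritems` was later renamed `items`, and the frame shape here is illustrative):

```python
import timeit

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(1000, 100))

def iterate_columns():
    # iterate (name, column Series) pairs without doing any work per column
    for name, col in df.items():  # formerly df.iteritems()
        pass

# total wall time for 10 full iterations over the columns
elapsed = timeit.timeit(iterate_columns, number=10)
print(elapsed)
```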
| https://api.github.com/repos/pandas-dev/pandas/pulls/2335 | 2012-11-23T12:20:14Z | 2012-11-23T21:10:35Z | null | 2012-11-23T21:10:35Z |
Fix seconds parsing #2209 | diff --git a/pandas/src/inference.pyx b/pandas/src/inference.pyx
index 3405dcb779414..07ff3b88326f1 100644
--- a/pandas/src/inference.pyx
+++ b/pandas/src/inference.pyx
@@ -645,8 +645,10 @@ def try_parse_datetime_components(ndarray[object] years, ndarray[object] months,
result = np.empty(n, dtype='O')
for i from 0 <= i < n:
+ msecs = round(float(seconds[i] % 1)*10**6)
result[i] = datetime(int(years[i]), int(months[i]), int(days[i]),
- int(hours[i]), int(minutes[i]), int(seconds[i]))
+ int(hours[i]), int(minutes[i]), int(seconds[i]),
+ int(msecs))
return result
| Creating a microseconds entry by doing math on the seconds entry (the variable is named `msecs`, but `(seconds % 1) * 10**6` yields microseconds, which is what `datetime` takes):
```
msecs = round(float(seconds[i] % 1)*10**6)
```
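The same fractional-second handling can be illustrated standalone; the `split_seconds` helper below is hypothetical (not part of the patch), it just mirrors the arithmetic:

```python
from datetime import datetime

def split_seconds(seconds):
    """Split a float seconds value into whole seconds and microseconds."""
    usecs = int(round((seconds % 1) * 10**6))
    return int(seconds), usecs

sec, usec = split_seconds(13.25)
dt = datetime(2012, 11, 23, 7, 42, sec, usec)
print(dt)  # → 2012-11-23 07:42:13.250000
```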
| https://api.github.com/repos/pandas-dev/pandas/pulls/2334 | 2012-11-23T03:26:32Z | 2012-11-30T23:59:17Z | null | 2014-06-24T22:55:32Z |
df.combine_first() with empty frame should still yield unioned index | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index f7f296e822e15..c0e0c2a464a1d 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3503,15 +3503,18 @@ def combine(self, other, func, fill_value=None):
-------
result : DataFrame
"""
- if other.empty:
- return self.copy()
- if self.empty:
- return other.copy()
+ other_idxlen = len(other.index) # save for compare
this, other = self.align(other, copy=False)
new_index = this.index
+ if other.empty and len(new_index) == len(self.index):
+ return self.copy()
+
+ if self.empty and len(other) == other_idxlen:
+ return other.copy()
+
# sorts if possible
new_columns = this.columns.union(other.columns)
do_fill = fill_value is not None
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index fea84f5a86e36..7f6ee00785501 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -6131,6 +6131,9 @@ def test_combine_first(self):
comb = self.empty.combine_first(self.frame)
assert_frame_equal(comb, self.frame)
+ comb = self.frame.combine_first(DataFrame(index=["faz","boo"]))
+ self.assertTrue("faz" in comb.index)
+
def test_combine_first_mixed_bug(self):
idx = Index(['a','b','c','e'])
ser1 = Series([5.0,-9.0,4.0,100.],index=idx)
| closes #2307
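A minimal sketch of the behaviour this restores (the frames here are illustrative; the point is that the result index is the union even when one side is an empty, column-less frame):

```python
import pandas as pd

df = pd.DataFrame({"A": [1.0, 2.0]}, index=["x", "y"])
empty = pd.DataFrame(index=["faz", "boo"])

combined = df.combine_first(empty)

# the empty frame contributes no values, but its index labels survive
assert "faz" in combined.index
```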
| https://api.github.com/repos/pandas-dev/pandas/pulls/2332 | 2012-11-22T20:29:50Z | 2012-11-24T00:24:50Z | null | 2014-07-14T03:26:39Z |
add check for endianess in major constructors | diff --git a/pandas/core/common.py b/pandas/core/common.py
index 30f6b66a2b7d3..b5529eea11b0a 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -604,6 +604,15 @@ def ensure_float(arr):
return arr
+def assert_ndarray_endianess(ar):
+ import sys
+ little_endian = (sys.byteorder == 'little')
+ if not isinstance(ar,np.ndarray):
+ return
+
+ if ((ar.dtype.byteorder == '>' and little_endian) or
+ (ar.dtype.byteorder == '<' and not little_endian)):
+ raise ValueError(u"Non-native byte order not supported")
def _mut_exclusive(arg1, arg2):
if arg1 is not None and arg2 is not None:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index f7f296e822e15..0379c5f65d737 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -370,6 +370,9 @@ def __init__(self, data=None, index=None, columns=None, dtype=None,
if data is None:
data = {}
+ for x in (data, index, columns):
+ com.assert_ndarray_endianess(x)
+
if isinstance(data, DataFrame):
data = data._data
diff --git a/pandas/core/index.py b/pandas/core/index.py
index b7792309f66ff..8ba65fdd2f978 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -81,6 +81,9 @@ class Index(np.ndarray):
def __new__(cls, data, dtype=None, copy=False, name=None):
if isinstance(data, np.ndarray):
+
+ com.assert_ndarray_endianess(data)
+
if issubclass(data.dtype.type, np.datetime64):
from pandas.tseries.index import DatetimeIndex
result = DatetimeIndex(data, copy=copy, name=name)
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 2dca8a2aef801..df4146e8b0a54 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -212,6 +212,9 @@ def __init__(self, data=None, items=None, major_axis=None, minor_axis=None,
if data is None:
data = {}
+ for x in (data, items, major_axis,minor_axis):
+ com.assert_ndarray_endianess(x)
+
passed_axes = [items, major_axis, minor_axis]
axes = None
if isinstance(data, BlockManager):
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 3241044a63c68..5d90ec862ae8d 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -409,7 +409,9 @@ def __init__(self, data=None, index=None, dtype=None, name=None,
input data
copy : boolean, default False
"""
- pass
+
+ for x in (data, index):
+ com.assert_ndarray_endianess(x)
@property
def _constructor(self):
diff --git a/pandas/sparse/frame.py b/pandas/sparse/frame.py
index c38936b55696f..0cd966ddc6884 100644
--- a/pandas/sparse/frame.py
+++ b/pandas/sparse/frame.py
@@ -75,6 +75,9 @@ def __init__(self, data=None, index=None, columns=None,
self.default_kind = default_kind
self.default_fill_value = default_fill_value
+ for x in (data, index, columns):
+ com.assert_ndarray_endianess(x)
+
if isinstance(data, dict):
sdict, columns, index = self._init_dict(data, index, columns)
elif isinstance(data, (np.ndarray, list)):
diff --git a/pandas/sparse/panel.py b/pandas/sparse/panel.py
index bd5a2785aba2b..23a79c474a181 100644
--- a/pandas/sparse/panel.py
+++ b/pandas/sparse/panel.py
@@ -76,6 +76,9 @@ def __init__(self, frames, items=None, major_axis=None, minor_axis=None,
self.default_fill_value = fill_value = default_fill_value
self.default_kind = kind = default_kind
+ for x in (items, major_axis,minor_axis):
+ com.assert_ndarray_endianess(x)
+
# pre-filter, if necessary
if items is None:
items = Index(sorted(frames.keys()))
diff --git a/pandas/sparse/series.py b/pandas/sparse/series.py
index 8be9e2b5c7d75..39dc201583fff 100644
--- a/pandas/sparse/series.py
+++ b/pandas/sparse/series.py
@@ -14,7 +14,7 @@
from pandas.core.index import Index, _ensure_index
from pandas.core.series import Series, TimeSeries, _maybe_match_name
from pandas.core.frame import DataFrame
-import pandas.core.common as common
+import pandas.core.common as com
import pandas.core.datetools as datetools
from pandas.util import py3compat
@@ -82,6 +82,9 @@ class SparseSeries(SparseArray, Series):
def __new__(cls, data, index=None, sparse_index=None, kind='block',
fill_value=None, name=None, copy=False):
+ for x in (data, index):
+ com.assert_ndarray_endianess(x)
+
is_sparse_array = isinstance(data, SparseArray)
if fill_value is None:
if is_sparse_array:
@@ -412,7 +415,7 @@ def reindex(self, index=None, method=None, copy=True, limit=None):
new_index, fill_vec = self.index.reindex(index, method=method,
limit=limit)
- new_values = common.take_1d(self.values, fill_vec)
+ new_values = com.take_1d(self.values, fill_vec)
return SparseSeries(new_values, index=new_index,
fill_value=self.fill_value, name=self.name)
diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py
index 44c40b6930784..0f04419837d80 100644
--- a/pandas/tests/test_common.py
+++ b/pandas/tests/test_common.py
@@ -14,6 +14,20 @@
from pandas.util import py3compat
+def test_assert_endianess():
+ little_endian = (sys.byteorder == 'little')
+ if little_endian:
+ arr = np.array([1], dtype='>i8')
+ else:
+ arr = np.array([1], dtype='<i8')
+
+ try:
+ DataFrame(arr)
+ except:
+ pass
+ else:
+ assert False,"did not raise ValueError on wrong endianess"
+
def test_is_sequence():
is_seq=com._is_sequence
assert(is_seq((1,2)))
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index fea84f5a86e36..09df9331c8b57 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -1519,6 +1519,10 @@ def setUp(self):
self.simple = DataFrame(arr, columns=['one', 'two', 'three'],
index=['a', 'b', 'c'])
+ def test_wrong_endianess_caught(self):
+ arr = np.array([1], dtype='>i8')
+ self.assertRaises(ValueError,DataFrame,arr)
+
def test_get_axis(self):
self.assert_(DataFrame._get_axis_name(0) == 'index')
self.assert_(DataFrame._get_axis_name(1) == 'columns')
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index b94840d0dfd85..4f6c112705569 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -35,6 +35,10 @@ def setUp(self):
self.empty = Index([])
self.tuples = Index(zip(['foo', 'bar', 'baz'], [1, 2, 3]))
+ def test_wrong_endianess_caught(self):
+ arr = np.array([1], dtype='>i8')
+ self.assertRaises(ValueError,Index,arr)
+
def test_hash_error(self):
self.assertRaises(TypeError, hash, self.strIndex)
@@ -864,6 +868,7 @@ def setUp(self):
labels=[major_labels, minor_labels],
names=['first', 'second'])
+
def test_constructor_single_level(self):
single_level = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux']],
labels=[[0, 1, 2, 3]],
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index a906489e67b57..89c5d3410aa6a 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -232,6 +232,10 @@ def setUp(self):
self.empty = Series([], index=[])
+ def test_wrong_endianess_caught(self):
+ arr = np.array([1], dtype='>i8')
+ self.assertRaises(ValueError,Series,arr)
+
def test_constructor(self):
# Recognize TimeSeries
self.assert_(isinstance(self.ts, TimeSeries))
| apropos #2318
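The core of the check added to each constructor can be sketched standalone (the `assert_native_byteorder` name below is illustrative, not the patch's exact helper):

```python
import sys

import numpy as np

def assert_native_byteorder(arr):
    """Raise ValueError if `arr` has a non-native byte order."""
    if not isinstance(arr, np.ndarray):
        return
    little = sys.byteorder == "little"
    bo = arr.dtype.byteorder
    # '>' is big-endian, '<' is little-endian; '=' and '|' are always fine
    if (bo == ">" and little) or (bo == "<" and not little):
        raise ValueError("Non-native byte order not supported")

assert_native_byteorder(np.array([1], dtype="=i8"))  # native order passes

# an array whose dtype byte order is opposite to the platform's should raise
swapped = ">i8" if sys.byteorder == "little" else "<i8"
try:
    assert_native_byteorder(np.array([1], dtype=swapped))
    raised = False
except ValueError:
    raised = True
assert raised
```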
| https://api.github.com/repos/pandas-dev/pandas/pulls/2330 | 2012-11-22T19:48:44Z | 2013-02-08T11:03:02Z | null | 2014-07-04T18:49:30Z |
Fixes for #1000, to_string(), to_html() should respect col_space | diff --git a/doc/source/io.rst b/doc/source/io.rst
index f74120ad7ef57..3fbc45dda8fa4 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -668,7 +668,7 @@ over the string representation of the object. All arguments are optional:
- ``buf`` default None, for example a StringIO object
- ``columns`` default None, which columns to write
- - ``col_space`` default None, number of spaces to write between columns
+ - ``col_space`` default None, minimum width of each column.
- ``na_rep`` default ``NaN``, representation of NA value
- ``formatters`` default None, a dictionary (by column) of functions each of
which takes a single argument and returns a formatted string
diff --git a/pandas/core/format.py b/pandas/core/format.py
index 13e504a8e1f88..841500329d4a9 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -27,7 +27,7 @@
columns : sequence, optional
the subset of columns to write; default None writes all columns
col_space : int, optional
- the width of each columns
+ the minimum width of each column
header : bool, optional
whether to print column labels, default True
index : bool, optional
@@ -215,7 +215,7 @@ def _to_str_columns(self, force_unicode=False):
fmt_values = self._format_col(i)
cheader = str_columns[i]
- max_colwidth = max(_strlen(x) for x in cheader)
+ max_colwidth = max(self.col_space or 0, *(_strlen(x) for x in cheader))
fmt_values = _make_fixed_width(fmt_values, self.justify,
minimum=max_colwidth)
@@ -434,6 +434,11 @@ def write(self, s, indent=0):
self.elements.append(' ' * indent + com.pprint_thing(s))
def write_th(self, s, indent=0, tags=None):
+ if (self.fmt.col_space is not None
+ and self.fmt.col_space > 0 ):
+ tags = (tags or "" )
+ tags += 'style="min-width: %s;"' % self.fmt.col_space
+
return self._write_cell(s, kind='th', indent=indent, tags=tags)
def write_td(self, s, indent=0, tags=None):
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 10bb75bfbb5b6..8d0dacf2e7edd 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -182,6 +182,30 @@ def test_to_string_buffer_all_unicode(self):
# this should work
buf.getvalue()
+ def test_to_string_with_col_space(self):
+ df = DataFrame(np.random.random(size=(1,3)))
+ c10=len(df.to_string(col_space=10).split("\n")[1])
+ c20=len(df.to_string(col_space=20).split("\n")[1])
+ c30=len(df.to_string(col_space=30).split("\n")[1])
+ self.assertTrue( c10 < c20 < c30 )
+
+ def test_to_html_with_col_space(self):
+ def check_with_width(df,col_space):
+ import re
+ # check that col_space affects HTML generation
+ # and be very brittle about it.
+ html = df.to_html(col_space=col_space)
+ hdrs = [x for x in html.split("\n") if re.search("<th[>\s]",x)]
+ self.assertTrue(len(hdrs) > 0 )
+ for h in hdrs:
+ self.assertTrue("min-width" in h )
+ self.assertTrue(str(col_space) in h )
+
+ df = DataFrame(np.random.random(size=(1,3)))
+
+ check_with_width(df,30)
+ check_with_width(df,50)
+
def test_to_html_unicode(self):
# it works!
df = DataFrame({u'\u03c3' : np.arange(10.)})
| Note that the semantics of `col_space` are different in each case, characters vs. pixels,
but that's reasonable.
I hope this doesn't presage more html configuration via kwd arguments, yonder way
madness lies.
also, there's a discrepancy between the docstring and [io.rst](https://github.com/pydata/pandas/blame/master/doc/source/io.rst#L671) (github doesn't seem to jump to the right line):
the former defines it as the width of the columns, the latter as the number of spaces between columns.
I adopted the former, since space between columns does not translate as easily to html.
_sigh_, also, `colSpace` has already been deprecated in favor of `col_space`, so
I would feel bad about deprecating `col_space` in favor of `min_col_width`.
I'll leave that decision to braver, fearless souls.
closes #1000.
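The character-width half of the change boils down to padding each formatted cell to a minimum width. A simplified sketch of that idea (an illustration, not pandas' actual `_make_fixed_width` helper):

```python
def make_fixed_width(values, min_width=0):
    """Right-justify strings so every cell is at least min_width wide,
    mirroring how col_space sets a floor on the column width."""
    width = max(min_width, max(len(v) for v in values))
    return [v.rjust(width) for v in values]

# With col_space-style min_width=10, every cell is padded to 10 chars.
cells = make_fixed_width(["1.0", "23.45", "6"], min_width=10)
```

For HTML the same knob instead becomes a `min-width` style on the `<th>` tags, which is why the units differ (characters vs. pixels).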
| https://api.github.com/repos/pandas-dev/pandas/pulls/2328 | 2012-11-22T17:26:09Z | 2012-11-29T20:32:03Z | 2012-11-29T20:32:02Z | 2014-06-25T18:33:51Z |
CLN: Dropped python 2.5 support | diff --git a/pandas/core/common.py b/pandas/core/common.py
index 30f6b66a2b7d3..71d767a57bccb 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -8,13 +8,6 @@
import itertools
-try:
- next
-except NameError: # pragma: no cover
- # Python < 2.6
- def next(x):
- return x.next()
-
from numpy.lib.format import read_array, write_array
import numpy as np
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index f7f296e822e15..44b31926f4e15 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1,5 +1,3 @@
-from __future__ import with_statement
-
"""
DataFrame
---------
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 52c8c4aa65a13..a5fc7ebeed101 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -7,13 +7,6 @@
from urlparse import urlparse
import csv
-try:
- next
-except NameError: # pragma: no cover
- # Python < 2.6
- def next(x):
- return x.next()
-
import numpy as np
from pandas.core.index import Index, MultiIndex
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 9442f274a7810..afd05610e3427 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -1,5 +1,3 @@
-from __future__ import with_statement
-
import nose
import unittest
import os
@@ -723,4 +721,3 @@ def _test_sort(obj):
import nose
nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
exit=False)
-
diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py
index 44c40b6930784..661c3a2a3edd8 100644
--- a/pandas/tests/test_common.py
+++ b/pandas/tests/test_common.py
@@ -1,4 +1,3 @@
-from __future__ import with_statement
from datetime import datetime
import sys
diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py
index 708f8143de3d5..05f9e51150850 100644
--- a/pandas/tests/test_graphics.py
+++ b/pandas/tests/test_graphics.py
@@ -1,5 +1,3 @@
-from __future__ import with_statement
-
import nose
import os
import string
@@ -460,9 +458,9 @@ def test_parallel_coordinates(self):
path = os.path.join(curpath(), 'data/iris.csv')
df = read_csv(path)
_check_plot_works(parallel_coordinates, df, 'Name')
- _check_plot_works(parallel_coordinates, df, 'Name',
+ _check_plot_works(parallel_coordinates, df, 'Name',
colors=('#556270', '#4ECDC4', '#C7F464'))
- _check_plot_works(parallel_coordinates, df, 'Name',
+ _check_plot_works(parallel_coordinates, df, 'Name',
colors=['dodgerblue', 'aquamarine', 'seagreen'])
@slow
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 86feb68052f67..6cec53eff382f 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -1,5 +1,4 @@
# pylint: disable-msg=E1101,W0612
-from __future__ import with_statement # for Python 2.5
import pandas.util.compat as itertools
from datetime import datetime, time, timedelta
import sys
diff --git a/pandas/tseries/tests/test_timezones.py b/pandas/tseries/tests/test_timezones.py
index 688e4945b6eb3..5da018b54ad4a 100644
--- a/pandas/tseries/tests/test_timezones.py
+++ b/pandas/tseries/tests/test_timezones.py
@@ -1,5 +1,4 @@
# pylint: disable-msg=E1101,W0612
-from __future__ import with_statement # for Python 2.5
from datetime import datetime, time, timedelta, tzinfo
import sys
import os
diff --git a/tox.ini b/tox.ini
index 7d09b3aa887e1..9f8e1af8ae924 100644
--- a/tox.ini
+++ b/tox.ini
@@ -4,7 +4,7 @@
# and then run "tox" from this directory.
[tox]
-envlist = py25, py26, py27, py31, py32
+envlist = py26, py27, py31, py32
[testenv]
deps =
@@ -35,15 +35,6 @@ commands =
# tox should provide a preinstall-commands hook.
pip uninstall pandas -qy
-
-[testenv:py25]
-deps =
- cython
- numpy >= 1.6.1
- nose
- pytz
- simplejson
-
[testenv:py26]
[testenv:py27]
diff --git a/tox_prll.ini b/tox_prll.ini
index 85856db064ca3..70edffac717a2 100644
--- a/tox_prll.ini
+++ b/tox_prll.ini
@@ -4,7 +4,7 @@
# and then run "tox" from this directory.
[tox]
-envlist = py25, py26, py27, py31, py32
+envlist = py26, py27, py31, py32
sdistsrc = {env:DISTFILE}
[testenv]
@@ -36,15 +36,6 @@ commands =
# tox should provide a preinstall-commands hook.
pip uninstall pandas -qy
-
-[testenv:py25]
-deps =
- cython
- numpy >= 1.6.1
- nose
- pytz
- simplejson
-
[testenv:py26]
[testenv:py27]
| https://api.github.com/repos/pandas-dev/pandas/pulls/2323 | 2012-11-22T00:47:48Z | 2012-11-23T04:52:17Z | 2012-11-23T04:52:17Z | 2014-06-19T10:32:41Z | |
Make sparse DataFrame and Series construction more efficient | diff --git a/pandas/sparse/frame.py b/pandas/sparse/frame.py
index c38936b55696f..58d3a90e9b7db 100644
--- a/pandas/sparse/frame.py
+++ b/pandas/sparse/frame.py
@@ -92,7 +92,9 @@ def __init__(self, data=None, index=None, columns=None,
columns = Index([])
else:
for c in columns:
- sdict[c] = Series(np.nan, index=index)
+ sdict[c] = SparseSeries(np.nan, index=index,
+ kind=self.default_kind,
+ fill_value=self.default_fill_value)
self._series = sdict
self.columns = columns
diff --git a/pandas/sparse/series.py b/pandas/sparse/series.py
index 8be9e2b5c7d75..ae9bda78d2c61 100644
--- a/pandas/sparse/series.py
+++ b/pandas/sparse/series.py
@@ -108,13 +108,23 @@ def __new__(cls, data, index=None, sparse_index=None, kind='block',
if index is None:
raise Exception('must pass index!')
- values = np.empty(len(index))
- values.fill(data)
-
- # TODO: more efficient
-
- values, sparse_index = make_sparse(values, kind=kind,
- fill_value=fill_value)
+ length = len(index)
+
+ if data == fill_value or (np.isnan(data)
+ and np.isnan(fill_value)):
+ if kind == 'block':
+ sparse_index = BlockIndex(length, [], [])
+ else:
+ sparse_index = IntIndex(length, [])
+ values = np.array([])
+ else:
+ if kind == 'block':
+ locs, lens = ([0], [length]) if length else ([], [])
+ sparse_index = BlockIndex(length, locs, lens)
+ else:
+ sparse_index = IntIndex(length, index)
+ values = np.empty(length)
+ values.fill(data)
else:
# array-like
diff --git a/pandas/sparse/tests/test_sparse.py b/pandas/sparse/tests/test_sparse.py
index 87ffa0822aedf..076122b68f476 100644
--- a/pandas/sparse/tests/test_sparse.py
+++ b/pandas/sparse/tests/test_sparse.py
@@ -730,6 +730,12 @@ def test_constructor(self):
assert_almost_equal([0, 0, 0, 0, 1, 2, 3, 4, 5, 6],
self.zframe['A'].values)
+ # construct no data
+ sdf = SparseDataFrame(columns=np.arange(10), index=np.arange(10))
+ for col, series in sdf.iteritems():
+ self.assert_(isinstance(series, SparseSeries))
+
+
# construct from nested dict
data = {}
for c, s in self.frame.iteritems():
| Cleans up SparseSeries and SparseDataFrame initialization code so no objects are created unnecessarily. Currently if you run the (potentially useless) statement:
```
SparseDataFrame(columns=np.arange(100000), index=np.arange(100000))
```
pandas will consume memory and time filling in empty Series objects only to then compress them away.
Added a test to ensure SparseDataFrame always initializes its columns as SparseSeries. Otherwise functionality should be preserved.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2322 | 2012-11-21T23:56:33Z | 2012-11-24T23:05:22Z | null | 2014-06-12T06:37:53Z |
BUG: Fix long index repr #2319 | diff --git a/pandas/core/index.py b/pandas/core/index.py
index b7792309f66ff..12d2574d5efb6 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -133,11 +133,15 @@ def _shallow_copy(self):
return self.view()
def __repr__(self):
+ if len(self) > 6 and len(self) > np.get_printoptions()['threshold']:
+ data = self[:3].tolist() + ["..."] + self[-3:].tolist()
+ else:
+ data = self
if py3compat.PY3:
- prepr = com.pprint_thing(self)
+ prepr = com.pprint_thing(data)
else:
- prepr = com.pprint_thing_encoded(self)
- return 'Index(%s, dtype=%s)' % (prepr, self.dtype)
+ prepr = com.pprint_thing_encoded(data)
+ return '%s(%s, dtype=%s)' % (type(self).__name__, prepr, self.dtype)
def astype(self, dtype):
return Index(self.values.astype(dtype), name=self.name,
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index b94840d0dfd85..4a2a64ead366a 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -851,6 +851,11 @@ def test_print_unicode_columns(self):
df=pd.DataFrame({u"\u05d0":[1,2,3],"\u05d1":[4,5,6],"c":[7,8,9]})
print(df.columns) # should not raise UnicodeDecodeError
+ def test_repr_summary(self):
+ r = repr(pd.Index(np.arange(10000)))
+ self.assertTrue(len(r) < 100)
+ self.assertTrue( "..." in r)
+
class TestMultiIndex(unittest.TestCase):
def setUp(self):
| ``` python
In [2]: pd.Index(np.arange(10000))
Out[2]: Int64Index([0, 1, 2, ..., 9997, 9998, 9999], dtype=int64)
```
broken by 95678ebf.
configurable via `np.set_printoptions` (`threshold`), like the original version, which used `np.ndarray.__repr__`.
closes #2319
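The truncation logic in the patch can be sketched as a standalone function (a simplified illustration, hard-coding the threshold rather than reading `np.get_printoptions()`):

```python
def summarized_repr(items, threshold=1000, edge=3):
    """Show the first/last `edge` items with '...' when the sequence
    is longer than both 2*edge and the print threshold."""
    if len(items) > 2 * edge and len(items) > threshold:
        data = list(items[:edge]) + ["..."] + list(items[-edge:])
    else:
        data = list(items)
    return "Index(%s)" % data

r = summarized_repr(range(10000))  # short, elided repr instead of 10000 items
```

The `len(self) > 6` guard in the real patch plays the role of `2 * edge` here: it ensures the elided form is never longer than just printing everything.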
| https://api.github.com/repos/pandas-dev/pandas/pulls/2321 | 2012-11-21T22:41:16Z | 2012-11-23T04:55:23Z | null | 2014-06-14T08:30:00Z |
BUG: PeriodIndex as DataFrame column should box to Periods #2243 #2281 | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index f7f296e822e15..a78710d49bbd3 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2024,6 +2024,8 @@ def _sanitize_column(self, key, value):
if not isinstance(value, np.ndarray):
value = com._asarray_tuplesafe(value)
+ elif isinstance(value, PeriodIndex):
+ value = value.asobject
else:
value = value.copy()
else:
@@ -2701,7 +2703,11 @@ def _maybe_cast(values):
lev_num = self.columns._get_level_number(col_level)
name_lst[lev_num] = name
name = tuple(name_lst)
- new_obj.insert(0, name, _maybe_cast(self.index.values))
+ if isinstance(self.index, PeriodIndex):
+ values = self.index.asobject
+ else:
+ values = self.index.values
+ new_obj.insert(0, name, _maybe_cast(values))
new_obj.index = new_index
return new_obj
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index 0938080064efb..dfa4b341d883f 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -15,7 +15,7 @@
from pandas import Timestamp
from pandas.tseries.frequencies import MONTHS, DAYS
from pandas.tseries.period import Period, PeriodIndex, period_range
-from pandas.tseries.index import DatetimeIndex, date_range
+from pandas.tseries.index import DatetimeIndex, date_range, Index
from pandas.tseries.tools import to_datetime
import pandas.tseries.period as pmod
@@ -1300,6 +1300,19 @@ def test_as_frame_columns(self):
ts = df['1/1/2000']
assert_series_equal(ts, df.ix[:, 0])
+ def test_frame_setitem(self):
+ rng = period_range('1/1/2000', periods=5)
+ rng.name = 'index'
+ df = DataFrame(randn(5, 3), index=rng)
+
+ df['Index'] = rng
+ rs = Index(df['Index'])
+ self.assert_(rs.equals(rng))
+
+ rs = df.reset_index().set_index('index')
+ self.assert_(isinstance(rs.index, PeriodIndex))
+ self.assert_(rs.index.equals(rng))
+
def test_nested_dict_frame_constructor(self):
rng = period_range('1/1/2000', periods=5)
df = DataFrame(randn(10, 5), columns=rng)
diff --git a/vb_suite/timeseries.py b/vb_suite/timeseries.py
index 0d4ab5552aeec..cbb187aa2a619 100644
--- a/vb_suite/timeseries.py
+++ b/vb_suite/timeseries.py
@@ -156,3 +156,14 @@ def date_range(start=None, end=None, periods=None, freq=None):
timeseries_infer_freq = \
Benchmark('infer_freq(a)', setup, start_date=datetime(2012, 7, 1))
+
+# setitem PeriodIndex
+
+setup = common_setup + """
+rng = period_range('1/1/1990', freq='S', periods=100000)
+df = DataFrame(index=range(len(rng)))
+"""
+
+period_setitem = \
+ Benchmark("df['col'] = rng", setup,
+ start_date=datetime(2012, 8, 1))
| https://api.github.com/repos/pandas-dev/pandas/pulls/2310 | 2012-11-21T05:26:35Z | 2012-11-23T05:43:22Z | null | 2012-11-23T05:43:22Z | |
ENH: DataFrame.from_records takes iterator #1794 | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 14b435e0aafc8..218c04b6fd9e5 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -872,7 +872,7 @@ def to_dict(self, outtype='dict'):
@classmethod
def from_records(cls, data, index=None, exclude=None, columns=None,
- coerce_float=False):
+ coerce_float=False, nrows=None):
"""
Convert structured or record ndarray to DataFrame
@@ -906,6 +906,33 @@ def from_records(cls, data, index=None, exclude=None, columns=None,
raise ValueError('Non-unique columns not yet supported in '
'from_records')
+ if com.is_iterator(data):
+ if nrows == 0:
+ return DataFrame()
+
+ try:
+ first_row = data.next()
+ except StopIteration:
+ return DataFrame(index=index, columns=columns)
+
+ dtype = None
+ if hasattr(first_row, 'dtype') and first_row.dtype.names:
+ dtype = first_row.dtype
+
+ values = [first_row]
+
+ i = 1
+ for row in data:
+ values.append(row)
+ i += 1
+ if i >= nrows:
+ break
+
+ if dtype is not None:
+ data = np.array(values, dtype=dtype)
+ else:
+ data = values
+
if isinstance(data, (np.ndarray, DataFrame, dict)):
keys, sdict = _rec_to_dict(data)
if columns is None:
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 042c744ef167a..fea84f5a86e36 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -2639,6 +2639,19 @@ def test_from_records_nones(self):
df = DataFrame.from_records(tuples, columns=['a', 'b', 'c', 'd'])
self.assert_(np.isnan(df['c'][0]))
+ def test_from_records_iterator(self):
+ arr = np.array([(1.0, 2), (3.0, 4), (5., 6), (7., 8)],
+ dtype=[('x', float), ('y', int)])
+ df = DataFrame.from_records(iter(arr), nrows=2)
+ xp = DataFrame({'x' : np.array([1.0, 3.0], dtype=float),
+ 'y' : np.array([2, 4], dtype=int)})
+ assert_frame_equal(df, xp)
+
+ arr = [(1.0, 2), (3.0, 4), (5., 6), (7., 8)]
+ df = DataFrame.from_records(iter(arr), columns=['x', 'y'],
+ nrows=2)
+ assert_frame_equal(df, xp)
+
def test_from_records_columns_not_modified(self):
tuples = [(1, 2, 3),
(1, 2, 3),
@wesm are there other cases you can think of that we should add as test cases?
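The iterator-consuming loop in the patch (peek at the first row, then read up to `nrows` more) is essentially a bounded materialization; a compact sketch of the same behavior using `itertools.islice` (`take_nrows` is a hypothetical helper for illustration, not the pandas code path):

```python
from itertools import islice

def take_nrows(records, nrows=None):
    """Materialize at most nrows items from an iterator of records,
    roughly what DataFrame.from_records does with an iterator input."""
    if nrows is None:
        return list(records)          # consume everything
    return list(islice(records, nrows))  # stop after nrows items

rows = take_nrows(iter([(1.0, 2), (3.0, 4), (5.0, 6)]), nrows=2)
```

The explicit `data.next()` peek in the patch exists only to sniff a structured dtype off the first record before buffering the rest.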
| https://api.github.com/repos/pandas-dev/pandas/pulls/2297 | 2012-11-19T23:20:40Z | 2012-11-20T19:29:28Z | null | 2012-11-20T19:29:28Z |
BUG: remove asserts from core #2288 | diff --git a/pandas/core/format.py b/pandas/core/format.py
index d87d2006785db..13e504a8e1f88 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -293,8 +293,9 @@ def to_latex(self, force_unicode=False, column_format=None):
if column_format is None:
column_format = '|l|%s|' % '|'.join('c' for _ in strcols)
- else:
- assert isinstance(column_format, basestring)
+ elif not isinstance(column_format, basestring):
+ raise AssertionError(('column_format must be str or unicode, not %s'
+ % type(column_format)))
self.buf.write('\\begin{tabular}{%s}\n' % column_format)
self.buf.write('\\hline\n')
@@ -474,7 +475,9 @@ def write_result(self, buf):
if self.classes is not None:
if isinstance(self.classes, str):
self.classes = self.classes.split()
- assert isinstance(self.classes, (list, tuple))
+ if not isinstance(self.classes, (list, tuple)):
+ raise AssertionError(('classes must be list or tuple, '
+ 'not %s') % type(self.classes))
_classes.extend(self.classes)
self.write('<table border="1" class="%s">' % ' '.join(_classes),
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 14b435e0aafc8..c20a08e4e76ef 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1111,8 +1111,9 @@ def to_panel(self):
from pandas.core.reshape import block2d_to_block3d
# only support this kind for now
- assert(isinstance(self.index, MultiIndex) and
- len(self.index.levels) == 2)
+ if (not isinstance(self.index, MultiIndex) or
+ len(self.index.levels) != 2):
+ raise AssertionError('Must have 2-level MultiIndex')
self._consolidate_inplace()
@@ -1476,7 +1477,8 @@ def info(self, verbose=True, buf=None):
lines.append('Data columns:')
space = max([len(com.pprint_thing(k)) for k in self.columns]) + 4
counts = self.count()
- assert(len(cols) == len(counts))
+ if len(cols) != len(counts):
+ raise AssertionError('Columns must equal counts')
for col, count in counts.iteritems():
if not isinstance(col, basestring):
col = str(col)
@@ -1935,7 +1937,8 @@ def _boolean_set(self, key, value):
def _set_item_multiple(self, keys, value):
if isinstance(value, DataFrame):
- assert(len(value.columns) == len(keys))
+ if len(value.columns) != len(keys):
+ raise AssertionError('Columns must be same length as keys')
for k1, k2 in zip(keys, value.columns):
self[k1] = value[k2]
else:
@@ -2168,7 +2171,8 @@ def lookup(self, row_labels, col_labels):
from itertools import izip
n = len(row_labels)
- assert(n == len(col_labels))
+ if n != len(col_labels):
+ raise AssertionError('Row labels must have same size as col labels')
thresh = 1000
if not self._is_mixed_type or n > thresh:
@@ -2946,7 +2950,8 @@ def sort_index(self, axis=0, by=None, ascending=True, inplace=False):
labels = self._get_axis(axis)
if by is not None:
- assert(axis == 0)
+ if axis != 0:
+ raise AssertionError('Axis must be 0')
if isinstance(by, (tuple, list)):
keys = [self[x].values for x in by]
indexer = _lexsort_indexer(keys, orders=ascending)
@@ -3907,7 +3912,8 @@ def _apply_raw(self, func, axis):
def _apply_standard(self, func, axis, ignore_failures=False):
try:
- assert(not self._is_mixed_type) # maybe a hack for now
+ if self._is_mixed_type: # maybe a hack for now
+ raise AssertionError('Must be mixed type DataFrame')
values = self.values
dummy = Series(NA, index=self._get_axis(axis),
dtype=values.dtype)
@@ -4124,7 +4130,8 @@ def _join_compat(self, other, on=None, how='left', lsuffix='', rsuffix='',
from pandas.tools.merge import merge, concat
if isinstance(other, Series):
- assert(other.name is not None)
+ if other.name is None:
+ raise AssertionError('Other Series must have a name')
other = DataFrame({other.name: other})
if isinstance(other, DataFrame):
@@ -5003,7 +5010,8 @@ def group_agg(values, bounds, f):
result = np.empty((len(bounds), K), dtype=float)
testagg = f(values[:min(1, len(values))])
- assert(not (isinstance(testagg, np.ndarray) and testagg.ndim == 2))
+ if isinstance(testagg, np.ndarray) and testagg.ndim == 2:
+ raise AssertionError('Function must reduce')
for i, left_bound in enumerate(bounds):
if i == len(bounds) - 1:
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 4f73d68841739..ef238b40d6f75 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -340,7 +340,8 @@ def drop(self, labels, axis=0, level=None):
if axis.is_unique:
if level is not None:
- assert(isinstance(axis, MultiIndex))
+ if not isinstance(axis, MultiIndex):
+ raise AssertionError('axis must be a MultiIndex')
new_axis = axis.drop(labels, level=level)
else:
new_axis = axis.drop(labels)
@@ -348,7 +349,8 @@ def drop(self, labels, axis=0, level=None):
return self.reindex(**{axis_name: new_axis})
else:
if level is not None:
- assert(isinstance(axis, MultiIndex))
+ if not isinstance(axis, MultiIndex):
+ raise AssertionError('axis must be a MultiIndex')
indexer = -lib.ismember(axis.get_level_values(level),
set(labels))
else:
@@ -1012,7 +1014,9 @@ def truncate(self, before=None, after=None, copy=True):
after = to_datetime(after)
if before is not None and after is not None:
- assert(before <= after)
+ if before > after:
+ raise AssertionError('Truncate: %s must be after %s' %
+ (before, after))
result = self.ix[before:after]
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index f09e2d5435fe6..6705df6816438 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -830,12 +830,9 @@ def _aggregate_series_pure_python(self, obj, func):
axis=self.axis):
res = func(group)
if result is None:
- try:
- assert(not isinstance(res, np.ndarray))
- assert(not isinstance(res, list))
- result = np.empty(ngroups, dtype='O')
- except Exception:
- raise ValueError('function does not reduce')
+ if isinstance(res, np.ndarray) or isinstance(res, list):
+ raise ValueError('Function does not reduce')
+ result = np.empty(ngroups, dtype='O')
counts[label] = group.shape[0]
result[label] = res
@@ -1040,7 +1037,8 @@ def __init__(self, index, grouper=None, name=None, level=None,
if level is not None:
if not isinstance(level, int):
- assert(level in index.names)
+ if level not in index.names:
+ raise AssertionError('Level %s not in index' % str(level))
level = index.names.index(level)
inds = index.labels[level]
@@ -1231,7 +1229,8 @@ def _convert_grouper(axis, grouper):
else:
return grouper.reindex(axis).values
elif isinstance(grouper, (list, np.ndarray)):
- assert(len(grouper) == len(axis))
+ if len(grouper) != len(axis):
+ raise AssertionError('Grouper and axis must be same length')
return grouper
else:
return grouper
@@ -1629,7 +1628,8 @@ def _aggregate_multiple_funcs(self, arg):
return result
def _aggregate_generic(self, func, *args, **kwargs):
- assert(self.grouper.nkeys == 1)
+ if self.grouper.nkeys != 1:
+ raise AssertionError('Number of keys must be 1')
axis = self.axis
obj = self._obj_with_exclusions
@@ -2061,7 +2061,9 @@ def _get_slice(slob):
# Since I'm now compressing the group ids, it's now not "possible" to
# produce empty slices because such groups would not be observed in the
# data
- assert(start < end)
+ if start >= end:
+ raise AssertionError('Start %s must be less than end %s'
+ % (str(start), str(end)))
yield i, _get_slice(slice(start, end))
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 6098d0a379814..b7792309f66ff 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -180,7 +180,9 @@ def _get_names(self):
return [self.name]
def _set_names(self, values):
- assert(len(values) == 1)
+ if len(values) != 1:
+ raise AssertionError('Length of new names must be 1, got %d'
+ % len(values))
self.name = values[0]
names = property(fset=_set_names, fget=_get_names)
@@ -255,7 +257,9 @@ def _engine(self):
def _get_level_number(self, level):
if not isinstance(level, int):
- assert(level == self.name)
+ if level != self.name:
+ raise AssertionError('Level %s must be same as name (%s)'
+ % (level, self.name))
level = 0
return level
@@ -749,10 +753,12 @@ def get_indexer(self, target, method=None, limit=None):
'objects')
if method == 'pad':
- assert(self.is_monotonic)
+ if not self.is_monotonic:
+ raise AssertionError('Must be monotonic for forward fill')
indexer = self._engine.get_pad_indexer(target.values, limit)
elif method == 'backfill':
- assert(self.is_monotonic)
+ if not self.is_monotonic:
+ raise AssertionError('Must be monotonic for backward fill')
indexer = self._engine.get_backfill_indexer(target.values, limit)
elif method is None:
indexer = self._engine.get_indexer(target.values)
@@ -1249,7 +1255,8 @@ class MultiIndex(Index):
names = None
def __new__(cls, levels=None, labels=None, sortorder=None, names=None):
- assert(len(levels) == len(labels))
+ if len(levels) != len(labels):
+ raise AssertionError('Length of levels and labels must be the same')
if len(levels) == 0:
raise Exception('Must pass non-zero number of levels/labels')
@@ -1272,7 +1279,10 @@ def __new__(cls, levels=None, labels=None, sortorder=None, names=None):
if names is None:
subarr.names = [None] * subarr.nlevels
else:
- assert(len(names) == subarr.nlevels)
+ if len(names) != subarr.nlevels:
+ raise AssertionError(('Length of names must be same as level '
+ '(%d), got %d') % (subarr.nlevels))
+
subarr.names = list(names)
# set the name
@@ -1815,7 +1825,10 @@ def reorder_levels(self, order):
----------
"""
order = [self._get_level_number(i) for i in order]
- assert(len(order) == self.nlevels)
+ if len(order) != self.nlevels:
+ raise AssertionError(('Length of order must be same as '
+ 'number of levels (%d), got %d')
+ % (self.nlevels, len(order)))
new_levels = [self.levels[i] for i in order]
new_labels = [self.labels[i] for i in order]
new_names = [self.names[i] for i in order]
@@ -1912,11 +1925,15 @@ def get_indexer(self, target, method=None, limit=None):
self_index = self._tuple_index
if method == 'pad':
- assert(self.is_unique and self.is_monotonic)
+ if not self.is_unique or not self.is_monotonic:
+ raise AssertionError(('Must be unique and monotonic to '
+ 'use forward fill getting the indexer'))
indexer = self_index._engine.get_pad_indexer(target_index,
limit=limit)
elif method == 'backfill':
- assert(self.is_unique and self.is_monotonic)
+ if not self.is_unique or not self.is_monotonic:
+ raise AssertionError(('Must be unique and monotonic to '
+ 'use backward fill getting the indexer'))
indexer = self_index._engine.get_backfill_indexer(target_index,
limit=limit)
else:
@@ -2083,7 +2100,9 @@ def _drop_levels(indexer, levels):
return new_index
if isinstance(level, (tuple, list)):
- assert(len(key) == len(level))
+ if len(key) != len(level):
+ raise AssertionError('Key for location must have same '
+ 'length as number of levels')
result = None
for lev, k in zip(level, key):
loc, new_index = self.get_loc_level(k, level=lev)
@@ -2329,7 +2348,8 @@ def _assert_can_do_setop(self, other):
raise TypeError('can only call with other hierarchical '
'index objects')
- assert(self.nlevels == other.nlevels)
+ if self.nlevels != other.nlevels:
+ raise AssertionError('Must have same number of levels')
def insert(self, loc, item):
"""
@@ -2477,7 +2497,8 @@ def _get_distinct_indexes(indexes):
def _union_indexes(indexes):
- assert(len(indexes) > 0)
+ if len(indexes) == 0:
+ raise AssertionError('Must have at least 1 Index to union')
if len(indexes) == 1:
result = indexes[0]
if isinstance(result, list):
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 0cfb4004708fa..d0410aa06f5b8 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -358,7 +358,8 @@ def _reindex(keys, level=None):
return self.obj.reindex_axis(keys, axis=axis, level=level)
except AttributeError:
# Series
- assert(axis == 0)
+ if axis != 0:
+ raise AssertionError('axis must be 0')
return self.obj.reindex(keys, level=level)
if com._is_bool_indexer(key):
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index c868216594eb3..5f4434d4da215 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -43,7 +43,9 @@ def ref_locs(self):
if self._ref_locs is None:
indexer = self.ref_items.get_indexer(self.items)
indexer = com._ensure_platform_int(indexer)
- assert((indexer != -1).all())
+ if (indexer == -1).any():
+ raise AssertionError('Some block items were not in block '
+ 'ref_items')
self._ref_locs = indexer
return self._ref_locs
@@ -51,7 +53,8 @@ def set_ref_items(self, ref_items, maybe_rename=True):
"""
If maybe_rename=True, need to set the items for this guy
"""
- assert(isinstance(ref_items, Index))
+ if not isinstance(ref_items, Index):
+ raise AssertionError('block ref_items must be an Index')
if maybe_rename:
self.items = ref_items.take(self.ref_locs)
self.ref_items = ref_items
@@ -98,7 +101,8 @@ def copy(self, deep=True):
return make_block(values, self.items, self.ref_items)
def merge(self, other):
- assert(self.ref_items.equals(other.ref_items))
+ if not self.ref_items.equals(other.ref_items):
+ raise AssertionError('Merge operands must have same ref_items')
# Not sure whether to allow this or not
# if not union_ref.equals(other.ref_items):
@@ -287,7 +291,8 @@ def interpolate(self, method='pad', axis=0, inplace=False,
return make_block(values, self.items, self.ref_items)
def take(self, indexer, axis=1, fill_value=np.nan):
- assert(axis >= 1)
+ if axis < 1:
+ raise AssertionError('axis must be at least 1, got %d' % axis)
new_values = com.take_fast(self.values, indexer, None,
None, axis=axis,
fill_value=fill_value)
@@ -496,7 +501,11 @@ def __init__(self, blocks, axes, do_integrity_check=True):
ndim = len(axes)
for block in blocks:
- assert(ndim == block.values.ndim)
+ if ndim != block.values.ndim:
+ raise AssertionError(('Number of Block dimensions (%d) must '
+ 'equal number of axes (%d)')
+ % (block.values.ndim, ndim))
+
if do_integrity_check:
self._verify_integrity()
@@ -580,10 +589,15 @@ def shape(self):
def _verify_integrity(self):
mgr_shape = self.shape
for block in self.blocks:
- assert(block.ref_items is self.items)
- assert(block.values.shape[1:] == mgr_shape[1:])
+ if block.ref_items is not self.items:
+ raise AssertionError("Block ref_items must be BlockManager "
+ "items")
+ if block.values.shape[1:] != mgr_shape[1:]:
+ raise AssertionError('Block shape incompatible with manager')
tot_items = sum(len(x.items) for x in self.blocks)
- assert(len(self.items) == tot_items)
+ if len(self.items) != tot_items:
+ raise AssertionError('Number of manager items must equal union of '
+ 'block items')
def astype(self, dtype):
new_blocks = []
@@ -751,23 +765,28 @@ def _interleave(self, items):
if items.is_unique:
for block in self.blocks:
indexer = items.get_indexer(block.items)
- assert((indexer != -1).all())
+ if (indexer == -1).any():
+ raise AssertionError('Items must contain all block items')
result[indexer] = block.get_values(dtype)
itemmask[indexer] = 1
else:
for block in self.blocks:
mask = items.isin(block.items)
indexer = mask.nonzero()[0]
- assert(len(indexer) == len(block.items))
+ if (len(indexer) != len(block.items)):
+ raise AssertionError('All items must be in block items')
result[indexer] = block.get_values(dtype)
itemmask[indexer] = 1
- assert(itemmask.all())
+ if not itemmask.all():
+ raise AssertionError('Some items were not contained in blocks')
return result
def xs(self, key, axis=1, copy=True):
- assert(axis >= 1)
+ if axis < 1:
+ raise AssertionError('Can only take xs across axis >= 1, got %d'
+ % axis)
loc = self.axes[axis].get_loc(key)
slicer = [slice(None, None) for _ in range(self.ndim)]
@@ -899,7 +918,9 @@ def set(self, item, value):
"""
if value.ndim == self.ndim - 1:
value = value.reshape((1,) + value.shape)
- assert(value.shape[1:] == self.shape[1:])
+ if value.shape[1:] != self.shape[1:]:
+ raise AssertionError('Shape of new values must be compatible '
+ 'with manager shape')
if item in self.items:
i, block = self._find_block(item)
if not block.should_store(value):
@@ -984,7 +1005,9 @@ def reindex_axis(self, new_axis, method=None, axis=0, copy=True):
return self
if axis == 0:
- assert(method is None)
+ if method is not None:
+ raise AssertionError('method argument not supported for '
+ 'axis == 0')
return self.reindex_items(new_axis)
new_axis, indexer = cur_axis.reindex(new_axis, method)
@@ -1118,7 +1141,8 @@ def take(self, indexer, axis=1):
return BlockManager(new_blocks, new_axes)
def merge(self, other, lsuffix=None, rsuffix=None):
- assert(self._is_indexed_like(other))
+ if not self._is_indexed_like(other):
+ raise AssertionError('Must have same axes to merge managers')
this, other = self._maybe_rename_join(other, lsuffix, rsuffix)
@@ -1157,7 +1181,9 @@ def _is_indexed_like(self, other):
"""
Check all axes except items
"""
- assert(self.ndim == other.ndim)
+ if self.ndim != other.ndim:
+ raise AssertionError(('Number of dimensions must agree '
+ 'got %d and %d') % (self.ndim, other.ndim))
for ax, oax in zip(self.axes[1:], other.axes[1:]):
if not ax.equals(oax):
return False
@@ -1165,7 +1191,8 @@ def _is_indexed_like(self, other):
def rename_axis(self, mapper, axis=1):
new_axis = Index([mapper(x) for x in self.axes[axis]])
- assert(new_axis.is_unique)
+ if not new_axis.is_unique:
+ raise AssertionError('New axis must be unique to rename')
new_axes = list(self.axes)
new_axes[axis] = new_axis
@@ -1231,10 +1258,12 @@ def block_id_vector(self):
for i, blk in enumerate(self.blocks):
indexer = self.items.get_indexer(blk.items)
- assert((indexer != -1).all())
+ if (indexer == -1).any():
+ raise AssertionError('Block items must be in manager items')
result.put(indexer, i)
- assert((result >= 0).all())
+ if (result < 0).any():
+ raise AssertionError('Some items were not in any block')
return result
@property
@@ -1245,7 +1274,8 @@ def item_dtypes(self):
indexer = self.items.get_indexer(blk.items)
result.put(indexer, blk.values.dtype.name)
mask.put(indexer, 1)
- assert(mask.all())
+ if not (mask.all()):
+ raise AssertionError('Some items were not in any block')
return result
def form_blocks(arrays, names, axes):
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index 8c6119fc851ab..f7d732507dacd 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -388,7 +388,8 @@ def nancorr(a, b, method='pearson'):
"""
a, b: ndarrays
"""
- assert(len(a) == len(b))
+ if len(a) != len(b):
+ raise AssertionError('Operands to nancorr must have same size')
valid = notnull(a) & notnull(b)
if not valid.all():
@@ -427,7 +428,8 @@ def _spearman(a, b):
def nancov(a, b):
- assert(len(a) == len(b))
+ if len(a) != len(b):
+ raise AssertionError('Operands to nancov must have same size')
valid = notnull(a) & notnull(b)
if not valid.all():
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 42adf0420db0d..2dca8a2aef801 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -590,7 +590,9 @@ def __setitem__(self, key, value):
columns=self.minor_axis)
mat = value.values
elif isinstance(value, np.ndarray):
- assert(value.shape == (N, K))
+ if value.shape != (N, K):
+ raise AssertionError(('Shape of values must be (%d, %d), '
+ 'not (%d, %d)') % ((N, K) + value.shape))
mat = np.asarray(value)
elif np.isscalar(value):
dtype = _infer_dtype(value)
@@ -1394,7 +1396,8 @@ def _prep_ndarray(values, copy=True):
else:
if copy:
values = values.copy()
- assert(values.ndim == 3)
+ if values.ndim != 3:
+ raise AssertionError('Number of dimensions must be 3')
return values
@@ -1461,7 +1464,8 @@ def _extract_axis(data, axis=0, intersect=False):
raise ValueError('ndarrays must match shape on axis %d' % axis)
if have_frames:
- assert(lengths[0] == len(index))
+ if lengths[0] != len(index):
+ raise AssertionError('Length of data and index must match')
else:
index = Index(np.arange(lengths[0]))
diff --git a/pandas/core/reshape.py b/pandas/core/reshape.py
index 5d611bdea3cff..09946c9e06bcd 100644
--- a/pandas/core/reshape.py
+++ b/pandas/core/reshape.py
@@ -317,7 +317,9 @@ def pivot_simple(index, columns, values):
-------
DataFrame
"""
- assert(len(index) == len(columns) == len(values))
+ if (len(index) != len(columns)) or (len(columns) != len(values)):
+ raise AssertionError('Length of index, columns, and values must be the'
+ ' same')
if len(index) == 0:
return DataFrame(index=[])
diff --git a/pandas/core/series.py b/pandas/core/series.py
index fb5c40dc1c216..f76034fef93e0 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1623,7 +1623,8 @@ def _binop(self, other, func, level=None, fill_value=None):
-------
combined : Series
"""
- assert(isinstance(other, Series))
+ if not isinstance(other, Series):
+ raise AssertionError('Other operand must be Series')
new_index = self.index
this = self
| https://api.github.com/repos/pandas-dev/pandas/pulls/2293 | 2012-11-19T20:46:08Z | 2012-11-20T19:36:00Z | null | 2014-06-12T16:07:43Z | |
BUG: dtype=object should stop conversion from object in frame constructo... | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 14b435e0aafc8..04f72de80d500 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -402,8 +402,7 @@ def __init__(self, data=None, index=None, columns=None, dtype=None,
index = _get_names_from_index(data)
if isinstance(data[0], (list, tuple, dict, Series)):
- arrays, columns = _to_arrays(data, columns)
-
+ arrays, columns = _to_arrays(data, columns, dtype=dtype)
columns = _ensure_index(columns)
if index is None:
@@ -5159,7 +5158,7 @@ def _rec_to_dict(arr):
return columns, sdict
-def _to_arrays(data, columns, coerce_float=False):
+def _to_arrays(data, columns, coerce_float=False, dtype=None):
"""
Return list of arrays, columns
"""
@@ -5167,30 +5166,35 @@ def _to_arrays(data, columns, coerce_float=False):
if len(data) == 0:
return [], columns if columns is not None else []
if isinstance(data[0], (list, tuple)):
- return _list_to_arrays(data, columns, coerce_float=coerce_float)
+ return _list_to_arrays(data, columns, coerce_float=coerce_float,
+ dtype=dtype)
elif isinstance(data[0], dict):
return _list_of_dict_to_arrays(data, columns,
- coerce_float=coerce_float)
+ coerce_float=coerce_float,
+ dtype=dtype)
elif isinstance(data[0], Series):
return _list_of_series_to_arrays(data, columns,
- coerce_float=coerce_float)
+ coerce_float=coerce_float,
+ dtype=dtype)
else:
# last ditch effort
data = map(tuple, data)
- return _list_to_arrays(data, columns, coerce_float=coerce_float)
+ return _list_to_arrays(data, columns,
+ coerce_float=coerce_float,
+ dtype=dtype)
-def _list_to_arrays(data, columns, coerce_float=False):
+def _list_to_arrays(data, columns, coerce_float=False, dtype=None):
if len(data) > 0 and isinstance(data[0], tuple):
content = list(lib.to_object_array_tuples(data).T)
else:
# list of lists
content = list(lib.to_object_array(data).T)
- return _convert_object_array(content, columns,
+ return _convert_object_array(content, columns, dtype=dtype,
coerce_float=coerce_float)
-def _list_of_series_to_arrays(data, columns, coerce_float=False):
+def _list_of_series_to_arrays(data, columns, coerce_float=False, dtype=None):
from pandas.core.index import _get_combined_index
if columns is None:
@@ -5211,13 +5215,13 @@ def _list_of_series_to_arrays(data, columns, coerce_float=False):
if values.dtype == np.object_:
content = list(values.T)
- return _convert_object_array(content, columns,
+ return _convert_object_array(content, columns, dtype=dtype,
coerce_float=coerce_float)
else:
return values.T, columns
-def _list_of_dict_to_arrays(data, columns, coerce_float=False):
+def _list_of_dict_to_arrays(data, columns, coerce_float=False, dtype=None):
if columns is None:
gen = (x.keys() for x in data)
columns = lib.fast_unique_multiple_list_gen(gen)
@@ -5228,11 +5232,11 @@ def _list_of_dict_to_arrays(data, columns, coerce_float=False):
for d in data]
content = list(lib.dicts_to_array(data, list(columns)).T)
- return _convert_object_array(content, columns,
+ return _convert_object_array(content, columns, dtype=dtype,
coerce_float=coerce_float)
-def _convert_object_array(content, columns, coerce_float=False):
+def _convert_object_array(content, columns, coerce_float=False, dtype=None):
if columns is None:
columns = _default_index(len(content))
else:
@@ -5241,6 +5245,7 @@ def _convert_object_array(content, columns, coerce_float=False):
'columns' % (len(columns), len(content)))
arrays = [lib.maybe_convert_objects(arr, try_float=coerce_float)
+ if dtype != object and dtype != np.object else arr
for arr in content]
return arrays, columns
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 042c744ef167a..ce9bfc2af1198 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -1730,6 +1730,12 @@ def test_constructor_dtype_nocast_view(self):
should_be_view[0][0] = 97
self.assertEqual(df.values[0, 0], 97)
+ def test_constructor_dtype_list_data(self):
+ df = DataFrame([[1, '2'],
+ [None, 'a']], dtype=object)
+ self.assert_(df.ix[1, 0] is None)
+ self.assert_(df.ix[0, 1] == '2')
+
def test_constructor_rec(self):
rec = self.frame.to_records(index=False)
| ...r #2255
Don't call `maybe_convert_objects` in `_convert_object_array` when `dtype` is object.
Note that this also prevents other type conversions.
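A minimal sketch of the intended behavior (using the modern `iloc` accessor instead of the now-removed `ix` used in the test above; the data is illustrative):

```python
import pandas as pd

# With dtype=object the constructor skips value conversion,
# so None and string values are stored exactly as given.
df = pd.DataFrame([[1, '2'],
                   [None, 'a']], dtype=object)

assert df.iloc[1, 0] is None          # None is not coerced to NaN
assert df.iloc[0, 1] == '2'           # the string '2' is not coerced to int
assert (df.dtypes == object).all()    # every column keeps object dtype
```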
| https://api.github.com/repos/pandas-dev/pandas/pulls/2291 | 2012-11-19T19:22:53Z | 2012-11-24T00:31:38Z | 2012-11-24T00:31:38Z | 2014-06-25T01:09:13Z |
Support for customizing parallel_plot() x axis tickmarks | diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py
index 0724799ced6f2..b4bfab7a5d8dd 100644
--- a/pandas/tests/test_graphics.py
+++ b/pandas/tests/test_graphics.py
@@ -460,6 +460,15 @@ def test_parallel_coordinates(self):
path = os.path.join(curpath(), 'data/iris.csv')
df = read_csv(path)
_check_plot_works(parallel_coordinates, df, 'Name')
+ _check_plot_works(parallel_coordinates, df, 'Name',
+ colors=('#556270', '#4ECDC4', '#C7F464'))
+ _check_plot_works(parallel_coordinates, df, 'Name',
+ colors=['dodgerblue', 'aquamarine', 'seagreen'])
+
+ df = read_csv(path, header=None, skiprows=1, names=[1,2,4,8, 'Name'])
+ _check_plot_works(parallel_coordinates, df, 'Name', use_columns=True)
+ _check_plot_works(parallel_coordinates, df, 'Name',
+ xticks=[1, 5, 25, 125])
@slow
def test_radviz(self):
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index 2e6faf5eb9362..aec7081c57352 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -411,20 +411,41 @@ def bootstrap_plot(series, fig=None, size=50, samples=500, **kwds):
return fig
-def parallel_coordinates(data, class_column, cols=None, ax=None, **kwds):
+def parallel_coordinates(data, class_column, cols=None, ax=None, colors=None,
+ use_columns=False, xticks=None, **kwds):
"""Parallel coordinates plotting.
- Parameters:
- -----------
- data: A DataFrame containing data to be plotted
- class_column: Column name containing class names
- cols: A list of column names to use, optional
- ax: matplotlib axis object, optional
- kwds: A list of keywords for matplotlib plot method
+ Parameters
+ ----------
+ data: DataFrame
+ A DataFrame containing data to be plotted
+ class_column: str
+ Column name containing class names
+ cols: list, optional
+ A list of column names to use
+ ax: matplotlib.axis, optional
+ matplotlib axis object
+ colors: list or tuple, optional
+ Colors to use for the different classes
+ use_columns: bool, optional
+ If true, columns will be used as xticks
+ xticks: list or tuple, optional
+ A list of values to use for xticks
+ kwds: list, optional
+ A list of keywords for matplotlib plot method
- Returns:
- --------
+ Returns
+ -------
ax: matplotlib axis object
+
+ Examples
+ --------
+ >>> from pandas import read_csv
+ >>> from pandas.tools.plotting import parallel_coordinates
+ >>> from matplotlib import pyplot as plt
+ >>> df = read_csv('https://raw.github.com/pydata/pandas/master/pandas/tests/data/iris.csv')
+ >>> parallel_coordinates(df, 'Name', colors=('#556270', '#4ECDC4', '#C7F464'))
+ >>> plt.show()
"""
import matplotlib.pyplot as plt
import random
@@ -444,11 +465,32 @@ def random_color(column):
used_legends = set([])
ncols = len(df.columns)
- x = range(ncols)
+
+ # determine values to use for xticks
+ if use_columns is True:
+ if not np.all(np.isreal(list(df.columns))):
+ raise ValueError('Columns must be numeric to be used as xticks')
+ x = df.columns
+ elif xticks is not None:
+ if not np.all(np.isreal(xticks)):
+ raise ValueError('xticks specified must be numeric')
+ elif len(xticks) != ncols:
+ raise ValueError('Length of xticks must match number of columns')
+ x = xticks
+ else:
+ x = range(ncols)
if ax == None:
ax = plt.gca()
+ # if user has not specified colors to use, choose at random
+ if colors is None:
+ colors = dict((kls, random_color(kls)) for kls in classes)
+ else:
+ if len(colors) != len(classes):
+ raise ValueError('Number of colors must match number of classes')
+ colors = dict((kls, colors[i]) for i, kls in enumerate(classes))
+
for i in range(n):
row = df.irow(i).values
y = row
@@ -456,16 +498,17 @@ def random_color(column):
if com.pprint_thing(kls) not in used_legends:
label = com.pprint_thing(kls)
used_legends.add(label)
- ax.plot(x, y, color=random_color(kls),
+ ax.plot(x, y, color=colors[kls],
label=label, **kwds)
else:
- ax.plot(x, y, color=random_color(kls), **kwds)
+ ax.plot(x, y, color=colors[kls], **kwds)
- for i in range(ncols):
+ for i in x:
ax.axvline(i, linewidth=1, color='black')
ax.set_xticks(x)
ax.set_xticklabels(df.columns)
+ ax.set_xlim(x[0], x[-1])
ax.legend(loc='upper right')
ax.grid()
return ax
| If the columns are already numeric, then you can simply use "use_columns=True" to have x-axis scaled according to those values. Otherwise the "xticks" parameter can be used to manually specify xticks to use.
Feedback or suggestions are welcome!
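A small sketch of the two new options with toy data (note that in current pandas the function lives in `pandas.plotting`, and the `colors` argument discussed here was later renamed to `color`):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so this runs headless

import pandas as pd
from pandas.plotting import parallel_coordinates

# numeric column labels, plus a class column
df = pd.DataFrame({1: [1.0, 2.0], 2: [2.0, 3.0],
                   4: [3.0, 1.0], 8: [2.0, 2.0],
                   'Name': ['a', 'b']})

# numeric column labels become the x positions
ax = parallel_coordinates(df, 'Name', use_columns=True)

# or give the x positions explicitly (one per plotted column)
ax2 = parallel_coordinates(df, 'Name', xticks=[1, 5, 25, 125])
```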
| https://api.github.com/repos/pandas-dev/pandas/pulls/2287 | 2012-11-19T03:26:26Z | 2012-12-11T20:34:24Z | 2012-12-11T20:34:24Z | 2012-12-11T20:34:24Z |
API: fillna method argument should be None by default and raise on both ... | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index bb0e4512ea267..6ed54ab67026d 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3041,7 +3041,7 @@ def reorder_levels(self, order, axis=0):
#----------------------------------------------------------------------
# Filling NA's
- def fillna(self, value=None, method='pad', axis=0, inplace=False,
+ def fillna(self, value=None, method=None, axis=0, inplace=False,
limit=None):
"""
Fill NA/NaN values using the specified method
@@ -3078,7 +3078,11 @@ def fillna(self, value=None, method='pad', axis=0, inplace=False,
self._consolidate_inplace()
if value is None:
+ if method is None:
+ raise ValueError('must specify a fill method or value')
if self._is_mixed_type and axis == 1:
+ if inplace:
+ raise NotImplementedError()
return self.T.fillna(method=method, limit=limit).T
new_blocks = []
@@ -3093,6 +3097,8 @@ def fillna(self, value=None, method='pad', axis=0, inplace=False,
new_data = BlockManager(new_blocks, self._data.axes)
else:
+ if method is not None:
+ raise ValueError('cannot specify both a fill method and value')
# Float type values
if len(self.columns) == 0:
return self
@@ -3117,6 +3123,14 @@ def fillna(self, value=None, method='pad', axis=0, inplace=False,
else:
return self._constructor(new_data)
+ def ffill(self, axis=0, inplace=False, limit=None):
+ return self.fillna(method='ffill', axis=axis, inplace=inplace,
+ limit=limit)
+
+ def bfill(self, axis=0, inplace=False, limit=None):
+ return self.fillna(method='bfill', axis=axis, inplace=inplace,
+ limit=limit)
+
def replace(self, to_replace, value=None, method='pad', axis=0,
inplace=False, limit=None):
"""
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 42adf0420db0d..fb47b7aa2f102 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -863,7 +863,7 @@ def _combine_panel(self, other, func):
return self._constructor(result_values, items, major, minor)
- def fillna(self, value=None, method='pad'):
+ def fillna(self, value=None, method=None):
"""
Fill NaN values using the specified method.
@@ -889,15 +889,27 @@ def fillna(self, value=None, method='pad'):
DataFrame.reindex, DataFrame.asfreq
"""
if value is None:
+ if method is None:
+ raise ValueError('must specify a fill method or value')
result = {}
for col, s in self.iterkv():
result[col] = s.fillna(method=method, value=value)
return self._constructor.from_dict(result)
else:
+ if method is not None:
+ raise ValueError('cannot specify both a fill method and value')
new_data = self._data.fillna(value)
return self._constructor(new_data)
+
+ def ffill(self):
+ return self.fillna(method='ffill')
+
+ def bfill(self):
+ return self.fillna(method='bfill')
+
+
add = _panel_arith_method(operator.add, 'add')
subtract = sub = _panel_arith_method(operator.sub, 'subtract')
multiply = mul = _panel_arith_method(operator.mul, 'multiply')
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 140a2f1f22a05..f89bd0ed3275e 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2217,7 +2217,7 @@ def take(self, indices, axis=0):
truncate = generic.truncate
- def fillna(self, value=None, method='pad', inplace=False,
+ def fillna(self, value=None, method=None, inplace=False,
limit=None):
"""
Fill NA/NaN values using the specified method
@@ -2249,12 +2249,14 @@ def fillna(self, value=None, method='pad', inplace=False,
return self.copy() if not inplace else self
if value is not None:
+ if method is not None:
+ raise ValueError('Cannot specify both a fill value and method')
result = self.copy() if not inplace else self
mask = isnull(self.values)
np.putmask(result, mask, value)
else:
if method is None: # pragma: no cover
- raise ValueError('must specify a fill method')
+ raise ValueError('must specify a fill method or value')
fill_f = _get_fill_func(method)
@@ -2272,6 +2274,12 @@ def fillna(self, value=None, method='pad', inplace=False,
return result
+ def ffill(self, inplace=False, limit=None):
+ return self.fillna(method='ffill', inplace=inplace, limit=limit)
+
+ def bfill(self, inplace=False, limit=None):
+ return self.fillna(method='bfill', inplace=inplace, limit=limit)
+
def replace(self, to_replace, value=None, method='pad', inplace=False,
limit=None):
"""
diff --git a/pandas/sparse/frame.py b/pandas/sparse/frame.py
index c38936b55696f..40b9ad704dc47 100644
--- a/pandas/sparse/frame.py
+++ b/pandas/sparse/frame.py
@@ -808,7 +808,7 @@ def applymap(self, func):
return self.apply(lambda x: map(func, x))
@Appender(DataFrame.fillna.__doc__)
- def fillna(self, value=None, method='pad', inplace=False, limit=None):
+ def fillna(self, value=None, method=None, inplace=False, limit=None):
new_series = {}
for k, v in self.iterkv():
new_series[k] = v.fillna(value=value, method=method, limit=limit)
diff --git a/pandas/sparse/series.py b/pandas/sparse/series.py
index 8be9e2b5c7d75..910e5de86334d 100644
--- a/pandas/sparse/series.py
+++ b/pandas/sparse/series.py
@@ -438,7 +438,7 @@ def sparse_reindex(self, new_index):
fill_value=self.fill_value)
@Appender(Series.fillna.__doc__)
- def fillna(self, value=None, method='pad', inplace=False, limit=None):
+ def fillna(self, value=None, method=None, inplace=False, limit=None):
dense = self.to_dense()
filled = dense.fillna(value=value, method=method, limit=limit)
result = filled.to_sparse(kind=self.kind,
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 1be1203480ed1..b3323edd8d684 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -4567,6 +4567,23 @@ def test_fillna(self):
result = self.mixed_frame.fillna(value=0)
+ self.assertRaises(ValueError, self.tsframe.fillna)
+ self.assertRaises(ValueError, self.tsframe.fillna, 5, method='ffill')
+
+ def test_ffill(self):
+ self.tsframe['A'][:5] = nan
+ self.tsframe['A'][-5:] = nan
+
+ assert_frame_equal(self.tsframe.ffill(),
+ self.tsframe.fillna(method='ffill'))
+
+ def test_bfill(self):
+ self.tsframe['A'][:5] = nan
+ self.tsframe['A'][-5:] = nan
+
+ assert_frame_equal(self.tsframe.bfill(),
+ self.tsframe.fillna(method='bfill'))
+
def test_fillna_skip_certain_blocks(self):
# don't try to fill boolean, int blocks
@@ -4589,10 +4606,10 @@ def test_fillna_inplace(self):
df[1][:4] = np.nan
df[3][-4:] = np.nan
- expected = df.fillna()
+ expected = df.fillna(method='ffill')
self.assert_(expected is not df)
- df2 = df.fillna(inplace=True)
+ df2 = df.fillna(method='ffill', inplace=True)
self.assert_(df2 is df)
assert_frame_equal(df2, expected)
@@ -4623,13 +4640,13 @@ def test_fillna_columns(self):
df = DataFrame(np.random.randn(10, 10))
df.values[:, ::2] = np.nan
- result = df.fillna(axis=1)
+ result = df.fillna(method='ffill', axis=1)
expected = df.T.fillna(method='pad').T
assert_frame_equal(result, expected)
df.insert(6, 'foo', 5)
- result = df.fillna(axis=1)
- expected = df.astype(float).fillna(axis=1)
+ result = df.fillna(method='ffill', axis=1)
+ expected = df.astype(float).fillna(method='ffill', axis=1)
assert_frame_equal(result, expected)
def test_fillna_invalid_method(self):
@@ -7317,7 +7334,8 @@ def test_fillna_col_reordering(self):
cols = ["COL." + str(i) for i in range(5, 0, -1)]
data = np.random.rand(20, 5)
df = DataFrame(index=range(20), columns=cols, data=data)
- self.assert_(df.columns.tolist() == df.fillna().columns.tolist())
+ filled = df.fillna(method='ffill')
+ self.assert_(df.columns.tolist() == filled.columns.tolist())
def test_take(self):
# homogeneous
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index 82c6ea65d133a..5cc3d4db5bd04 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -950,6 +950,15 @@ def test_fillna(self):
filled = empty.fillna(0)
assert_panel_equal(filled, empty)
+ self.assertRaises(ValueError, self.panel.fillna)
+ self.assertRaises(ValueError, self.panel.fillna, 5, method='ffill')
+
+ def test_ffill_bfill(self):
+ assert_panel_equal(self.panel.ffill(),
+ self.panel.fillna(method='ffill'))
+ assert_panel_equal(self.panel.bfill(),
+ self.panel.fillna(method='bfill'))
+
def test_truncate_fillna_bug(self):
# #1823
result = self.panel.truncate(before=None, after=None, axis='items')
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index a906489e67b57..0676c9b724393 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -2818,8 +2818,8 @@ def test_isin(self):
def test_fillna_int(self):
s = Series(np.random.randint(-100, 100, 50))
- self.assert_(s.fillna(inplace=True) is s)
- assert_series_equal(s.fillna(inplace=False), s)
+ self.assert_(s.fillna(method='ffill', inplace=True) is s)
+ assert_series_equal(s.fillna(method='ffill', inplace=False), s)
#-------------------------------------------------------------------------------
# TimeSeries-specific
@@ -2827,16 +2827,19 @@ def test_fillna_int(self):
def test_fillna(self):
ts = Series([0., 1., 2., 3., 4.], index=tm.makeDateIndex(5))
- self.assert_(np.array_equal(ts, ts.fillna()))
+ self.assert_(np.array_equal(ts, ts.fillna(method='ffill')))
ts[2] = np.NaN
- self.assert_(np.array_equal(ts.fillna(), [0., 1., 1., 3., 4.]))
+ self.assert_(np.array_equal(ts.fillna(method='ffill'), [0., 1., 1., 3., 4.]))
self.assert_(np.array_equal(ts.fillna(method='backfill'),
[0., 1., 3., 3., 4.]))
self.assert_(np.array_equal(ts.fillna(value=5), [0., 1., 5., 3., 4.]))
+ self.assertRaises(ValueError, ts.fillna)
+ self.assertRaises(ValueError, self.ts.fillna, value=0, method='ffill')
+
def test_fillna_bug(self):
x = Series([nan, 1., nan, 3., nan],['z','a','b','c','d'])
filled = x.fillna(method='ffill')
@@ -2863,6 +2866,16 @@ def test_fillna_invalid_method(self):
except ValueError, inst:
self.assert_('ffil' in str(inst))
+ def test_ffill(self):
+ ts = Series([0., 1., 2., 3., 4.], index=tm.makeDateIndex(5))
+ ts[2] = np.NaN
+ assert_series_equal(ts.ffill(), ts.fillna(method='ffill'))
+
+ def test_bfill(self):
+ ts = Series([0., 1., 2., 3., 4.], index=tm.makeDateIndex(5))
+ ts[2] = np.NaN
+ assert_series_equal(ts.bfill(), ts.fillna(method='bfill'))
+
def test_replace(self):
N = 100
ser = Series(np.random.randn(N))
| ...value and method #2027
@wesm what do you think of the API change? If you're ok with it I'll propagate to DataFrame and Panel.
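The proposed API, sketched against current pandas (where the `method` argument to `fillna` was eventually deprecated in favor of exactly these accessors):

```python
import numpy as np
import pandas as pd

s = pd.Series([0.0, np.nan, np.nan, 3.0])

# filling with an explicit value
assert s.fillna(5.0).tolist() == [0.0, 5.0, 5.0, 3.0]

# the ffill/bfill shortcuts replace fillna(method='ffill'/'bfill')
assert s.ffill().tolist() == [0.0, 0.0, 0.0, 3.0]
assert s.bfill().tolist() == [0.0, 3.0, 3.0, 3.0]
```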
| https://api.github.com/repos/pandas-dev/pandas/pulls/2284 | 2012-11-18T22:28:48Z | 2012-11-30T23:46:12Z | null | 2014-06-12T07:17:06Z |
ENH: google analytics integration using oauth2 | diff --git a/.gitignore b/.gitignore
index d17c869c4c1ba..320f03a0171a2 100644
--- a/.gitignore
+++ b/.gitignore
@@ -23,3 +23,5 @@ scikits
pandas.egg-info
*\#*\#
.tox
+pandas/io/*.dat
+pandas/io/*.json
\ No newline at end of file
diff --git a/pandas/io/auth.py b/pandas/io/auth.py
new file mode 100644
index 0000000000000..471436cb1b6bf
--- /dev/null
+++ b/pandas/io/auth.py
@@ -0,0 +1,122 @@
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import sys
+import logging
+
+import httplib2
+
+import apiclient.discovery as gapi
+import gflags
+import oauth2client.file as auth_file
+import oauth2client.client as oauth
+import oauth2client.tools as tools
+OOB_CALLBACK_URN = oauth.OOB_CALLBACK_URN
+
+class AuthenticationConfigError(ValueError):
+ pass
+
+FLOWS = {}
+FLAGS = gflags.FLAGS
+DEFAULT_SECRETS = os.path.join(os.path.dirname(__file__), 'client_secrets.json')
+DEFAULT_SCOPE = 'https://www.googleapis.com/auth/analytics.readonly'
+DEFAULT_TOKEN_FILE = os.path.join(os.path.dirname(__file__), 'analytics.dat')
+MISSING_CLIENT_MSG = """
+WARNING: Please configure OAuth 2.0
+
+You need to populate the client_secrets.json file found at:
+
+ %s
+
+with information from the APIs Console <https://code.google.com/apis/console>.
+
+"""
+DOC_URL = ('https://developers.google.com/api-client-library/python/guide/'
+ 'aaa_client_secrets')
+
+gflags.DEFINE_enum('logging_level', 'ERROR',
+ ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'],
+ 'Set the level of logging detail.')
+
+# Name of file that will store the access and refresh tokens to access
+# the API without having to login each time. Make sure this file is in
+# a secure place.
+
+def process_flags(flags=[]):
+ """Uses the command-line flags to set the logging level.
+
+ Args:
+ argv: List of command line arguments passed to the python script.
+ """
+
+ # Let the gflags module process the command-line arguments.
+ try:
+ FLAGS(flags)
+ except gflags.FlagsError, e:
+ print '%s\nUsage: %s ARGS\n%s' % (e, str(flags), FLAGS)
+ sys.exit(1)
+
+ # Set the logging according to the command-line flag.
+ logging.getLogger().setLevel(getattr(logging, FLAGS.logging_level))
+
+def get_flow(secret, scope, redirect):
+ """
+ Retrieve an authentication flow object based on the given
+ configuration in the secret file name, the authentication scope,
+ and a redirect URN
+ """
+ key = (secret, scope, redirect)
+ flow = FLOWS.get(key, None)
+ if flow is None:
+ msg = MISSING_CLIENT_MSG % secret
+ if not os.path.exists(secret):
+ raise AuthenticationConfigError(msg)
+ flow = oauth.flow_from_clientsecrets(secret, scope,
+ redirect_uri=redirect,
+ message=msg)
+ FLOWS[key] = flow
+ return flow
+
+def make_token_store(fpath=None):
+ """create token storage from give file name"""
+ if fpath is None:
+ fpath = DEFAULT_TOKEN_FILE
+ return auth_file.Storage(fpath)
+
+def authenticate(flow, storage=None):
+ """
+ Try to retrieve a valid set of credentials from the token store if possible
+ Otherwise use the given authentication flow to obtain new credentials
+ and return an authenticated http object
+
+ Parameters
+ ----------
+ flow : authentication workflow
+ storage: token storage, default None
+ """
+ http = httplib2.Http()
+
+ # Prepare credentials, and authorize HTTP object with them.
+ credentials = storage.get()
+ if credentials is None or credentials.invalid:
+ credentials = tools.run(flow, storage)
+
+ http = credentials.authorize(http)
+ return http
+
+def init_service(http):
+ """
+ Use the given http object to build the analytics service object
+ """
+ return gapi.build('analytics', 'v3', http=http)
diff --git a/pandas/io/ga.py b/pandas/io/ga.py
new file mode 100644
index 0000000000000..a433a4add7478
--- /dev/null
+++ b/pandas/io/ga.py
@@ -0,0 +1,429 @@
+"""
+1. Goto https://code.google.com/apis/console
+2. Create new project
+3. Goto APIs and register for OAuth2.0 for installed applications
+4. Download JSON secret file and move into same directory as this file
+"""
+from datetime import datetime
+import numpy as np
+from pandas import DataFrame
+import pandas as pd
+import pandas.io.parsers as psr
+import pandas.lib as lib
+from pandas.io.date_converters import generic_parser
+import pandas.io.auth as auth
+from pandas.util.decorators import Appender, Substitution
+
+from apiclient.errors import HttpError
+from oauth2client.client import AccessTokenRefreshError
+
+TYPE_MAP = {u'INTEGER': int, u'FLOAT': float, u'TIME': int}
+
+NO_CALLBACK = auth.OOB_CALLBACK_URN
+DOC_URL = auth.DOC_URL
+
+_QUERY_PARAMS = """metrics : list of str
+ Un-prefixed metric names (e.g., 'visitors' and not 'ga:visitors')
+dimensions : list of str
+ Un-prefixed dimension variable names
+start_date : str/date/datetime
+end_date : str/date/datetime, optional
+ Defaults to today
+segment : list of str, optional
+filters : list of str, optional
+start_index : int, default 1
+max_results : int, default 10000
+ If >10000, must specify chunksize or ValueError will be raised"""
+
+_QUERY_DOC = """
+Construct a google analytics query using given parameters
+Metrics and dimensions do not need the 'ga:' prefix
+
+Parameters
+----------
+profile_id : str
+%s
+""" % _QUERY_PARAMS
+
+_GA_READER_DOC = """Given query parameters, return a DataFrame with all the data
+or an iterator that returns DataFrames containing chunks of the data
+
+Parameters
+----------
+%s
+sort : bool/list, default True
+ Sort output by index or list of columns
+chunksize : int, optional
+ If max_results >10000, specifies the number of rows per iteration
+index_col : str/list of str/dict, optional
+ If unspecified then dimension variables are set as index
+parse_dates : bool/list/dict, default True
+keep_date_col : boolean, default False
+date_parser : optional
+na_values : optional
+converters : optional
+dayfirst : bool, default False
+ Informs date parsing
+account_name : str, optional
+account_id : str, optional
+property_name : str, optional
+property_id : str, optional
+profile_name : str, optional
+profile_id : str, optional
+%%(extras)s
+Returns
+-------
+data : DataFrame or DataFrame yielding iterator
+""" % _QUERY_PARAMS
+
+_AUTH_PARAMS = """secrets : str, optional
+ File path to the secrets file
+scope : str, optional
+ Authentication scope
+token_file_name : str, optional
+ Path to token storage
+redirect : str, optional
+ Local host redirect if unspecified
+"""
+
+@Substitution(extras=_AUTH_PARAMS)
+@Appender(_GA_READER_DOC)
+def read_ga(metrics, dimensions, start_date, **kwargs):
+ lst = ['secrets', 'scope', 'token_file_name', 'redirect']
+ reader_kwds = dict((p, kwargs.pop(p)) for p in lst if p in kwargs)
+ reader = GAnalytics(**reader_kwds)
+ return reader.get_data(metrics=metrics, start_date=start_date,
+ dimensions=dimensions, **kwargs)
+
+class OAuthDataReader(object):
+ """
+ Abstract class for handling OAuth2 authentication using the Google
+ oauth2client library
+ """
+ def __init__(self, scope, token_file_name, redirect):
+ """
+ Parameters
+ ----------
+ scope : str
+ Designates the authentication scope
+ token_file_name : str
+ Location of cache for authenticated tokens
+ redirect : str
+ Redirect URL
+ """
+ self.scope = scope
+ self.token_store = auth.make_token_store(token_file_name)
+ self.redirect_url = redirect
+
+ def authenticate(self, secrets):
+ """
+ Run the authentication process and return an authorized
+ http object
+
+ Parameters
+ ----------
+ secrets : str
+ File name for client secrets
+
+ Notes
+ -----
+ See google documentation for format of secrets file
+ %s
+ """ % DOC_URL
+ flow = self._create_flow(secrets)
+ return auth.authenticate(flow, self.token_store)
+
+ def _create_flow(self, secrets):
+ """
+ Create an authentication flow based on the secrets file
+
+ Parameters
+ ----------
+ secrets : str
+ File name for client secrets
+
+ Notes
+ -----
+ See google documentation for format of secrets file
+ %s
+ """ % DOC_URL
+ return auth.get_flow(secrets, self.scope, self.redirect_url)
+
+
+class GDataReader(OAuthDataReader):
+ """
+ Abstract class for reading data from google APIs using OAuth2
+ Subclasses must implement create_query method
+ """
+ def __init__(self, scope=auth.DEFAULT_SCOPE,
+ token_file_name=auth.DEFAULT_TOKEN_FILE,
+ redirect=NO_CALLBACK, secrets=auth.DEFAULT_SECRETS):
+ super(GDataReader, self).__init__(scope, token_file_name, redirect)
+ self._service = self._init_service(secrets)
+
+ @property
+ def service(self):
+ """The authenticated request service object"""
+ return self._service
+
+ def _init_service(self, secrets):
+ """
+ Build an authenticated google api request service using the given
+ secrets file
+ """
+ http = self.authenticate(secrets)
+ return auth.init_service(http)
+
+ def get_account(self, name=None, id=None, **kwargs):
+ """
+ Retrieve an account that matches the name, id, or some account attribute
+ specified in **kwargs
+
+ Parameters
+ ----------
+ name : str, optional
+ id : str, optional
+ """
+ accounts = self.service.management().accounts().list().execute()
+ return _get_match(accounts, name, id, **kwargs)
+
+ def get_web_property(self, account_id=None, name=None, id=None, **kwargs):
+ """
+        Retrieve a web property given an account and property name, id, or
+ custom attribute
+
+ Parameters
+ ----------
+ account_id : str, optional
+ name : str, optional
+ id : str, optional
+ """
+ prop_store = self.service.management().webproperties()
+ kwds = {}
+ if account_id is not None:
+ kwds['accountId'] = account_id
+ prop_for_acct = prop_store.list(**kwds).execute()
+ return _get_match(prop_for_acct, name, id, **kwargs)
+
+ def get_profile(self, account_id=None, web_property_id=None, name=None,
+ id=None, **kwargs):
+
+ """
+ Retrieve the right profile for the given account, web property, and
+ profile attribute (name, id, or arbitrary parameter in kwargs)
+
+ Parameters
+ ----------
+ account_id : str, optional
+ web_property_id : str, optional
+ name : str, optional
+ id : str, optional
+ """
+ profile_store = self.service.management().profiles()
+ kwds = {}
+ if account_id is not None:
+ kwds['accountId'] = account_id
+ if web_property_id is not None:
+ kwds['webPropertyId'] = web_property_id
+ profiles = profile_store.list(**kwds).execute()
+ return _get_match(profiles, name, id, **kwargs)
+
+ def create_query(self, *args, **kwargs):
+ raise NotImplementedError()
+
+ @Substitution(extras='')
+ @Appender(_GA_READER_DOC)
+ def get_data(self, metrics, start_date, end_date=None,
+ dimensions=None, segment=None, filters=None, start_index=1,
+ max_results=10000, index_col=None, parse_dates=True,
+ keep_date_col=False, date_parser=None, na_values=None,
+ converters=None, sort=True, dayfirst=False,
+ account_name=None, account_id=None, property_name=None,
+ property_id=None, profile_name=None, profile_id=None,
+ chunksize=None):
+ if chunksize is None and max_results > 10000:
+ raise ValueError('Google API returns maximum of 10,000 rows, '
+ 'please set chunksize')
+
+ account = self.get_account(account_name, account_id)
+ web_property = self.get_web_property(account.get('id'), property_name,
+ property_id)
+ profile = self.get_profile(account.get('id'), web_property.get('id'),
+ profile_name, profile_id)
+
+ profile_id = profile.get('id')
+
+ if index_col is None and dimensions is not None:
+ if isinstance(dimensions, basestring):
+ dimensions = [dimensions]
+ index_col = _clean_index(list(dimensions), parse_dates)
+
+ def _read(start, result_size):
+ query = self.create_query(profile_id, metrics, start_date,
+ end_date=end_date, dimensions=dimensions,
+ segment=segment, filters=filters,
+ start_index=start,
+ max_results=result_size)
+
+ try:
+ rs = query.execute()
+ rows = rs.get('rows', [])
+ col_info = rs.get('columnHeaders', [])
+ return self._parse_data(rows, col_info, index_col,
+ parse_dates=parse_dates,
+ keep_date_col=keep_date_col,
+ date_parser=date_parser,
+ dayfirst=dayfirst,
+ na_values=na_values,
+ converters=converters, sort=sort)
+ except HttpError, inst:
+ raise ValueError('Google API error %s: %s' % (inst.resp.status,
+ inst._get_reason()))
+
+
+ if chunksize is None:
+ return _read(start_index, max_results)
+
+ def iterator():
+ curr_start = start_index
+
+ while curr_start < max_results:
+ yield _read(curr_start, chunksize)
+ curr_start += chunksize
+ return iterator()
+
+ def _parse_data(self, rows, col_info, index_col, parse_dates=True,
+ keep_date_col=False, date_parser=None, dayfirst=False,
+ na_values=None, converters=None, sort=True):
+ # TODO use returned column types
+ col_names = _get_col_names(col_info)
+ df = psr._read(rows, dict(index_col=index_col, parse_dates=parse_dates,
+ date_parser=date_parser, dayfirst=dayfirst,
+ na_values=na_values,
+ keep_date_col=keep_date_col,
+ converters=converters,
+ header=None, names=col_names))
+
+ if isinstance(sort, bool) and sort:
+ return df.sort_index()
+ elif isinstance(sort, (basestring, list, tuple, np.ndarray)):
+ return df.sort_index(by=sort)
+
+ return df
+
+
+class GAnalytics(GDataReader):
+
+ @Appender(_QUERY_DOC)
+ def create_query(self, profile_id, metrics, start_date, end_date=None,
+ dimensions=None, segment=None, filters=None,
+ start_index=None, max_results=10000, **kwargs):
+ qry = format_query(profile_id, metrics, start_date, end_date=end_date,
+ dimensions=dimensions, segment=segment,
+ filters=filters, start_index=start_index,
+ max_results=max_results, **kwargs)
+ try:
+ return self.service.data().ga().get(**qry)
+ except TypeError, error:
+ raise ValueError('Error making query: %s' % error)
+
+
+def format_query(ids, metrics, start_date, end_date=None, dimensions=None,
+ segment=None, filters=None, sort=None, start_index=None,
+ max_results=10000, **kwargs):
+ if isinstance(metrics, basestring):
+ metrics = [metrics]
+ met =','.join(['ga:%s' % x for x in metrics])
+
+ start_date = pd.to_datetime(start_date).strftime('%Y-%m-%d')
+ if end_date is None:
+ end_date = datetime.today()
+ end_date = pd.to_datetime(end_date).strftime('%Y-%m-%d')
+
+ qry = dict(ids='ga:%s' % str(ids),
+ metrics=met,
+ start_date=start_date,
+ end_date=end_date)
+ qry.update(kwargs)
+
+ names = ['dimensions', 'segment', 'filters', 'sort']
+ lst = [dimensions, segment, filters, sort]
+ [_maybe_add_arg(qry, n, d) for n, d in zip(names, lst)]
+
+ if start_index is not None:
+ qry['start_index'] = str(start_index)
+
+ if max_results is not None:
+ qry['max_results'] = str(max_results)
+
+ return qry
+
+def _maybe_add_arg(query, field, data):
+ if data is not None:
+ if isinstance(data, basestring):
+ data = [data]
+ data = ','.join(['ga:%s' % x for x in data])
+ query[field] = data
+
+def _get_match(obj_store, name, id, **kwargs):
+ key, val = None, None
+ if len(kwargs) > 0:
+ key = kwargs.keys()[0]
+ val = kwargs.values()[0]
+
+ if name is None and id is None and key is None:
+ return obj_store.get('items')[0]
+
+ name_ok = lambda item: name is not None and item.get('name') == name
+ id_ok = lambda item: id is not None and item.get('id') == id
+ key_ok = lambda item: key is not None and item.get(key) == val
+
+ match = None
+ if obj_store.get('items'):
+ # TODO look up gapi for faster lookup
+ for item in obj_store.get('items'):
+ if name_ok(item) or id_ok(item) or key_ok(item):
+ return item
+
+def _clean_index(index_dims, parse_dates):
+ _should_add = lambda lst: pd.Index(lst).isin(index_dims).all()
+ to_remove = []
+ to_add = []
+
+ if isinstance(parse_dates, (list, tuple, np.ndarray)):
+ for lst in parse_dates:
+ if isinstance(lst, (list, tuple, np.ndarray)):
+ if _should_add(lst):
+ to_add.append('_'.join(lst))
+ to_remove.extend(lst)
+ elif isinstance(parse_dates, dict):
+ for name, lst in parse_dates.iteritems():
+ if isinstance(lst, (list, tuple, np.ndarray)):
+ if _should_add(lst):
+ to_add.append(name)
+ to_remove.extend(lst)
+
+ index_dims = pd.Index(index_dims)
+ to_remove = pd.Index(set(to_remove))
+ to_add = pd.Index(set(to_add))
+
+ return index_dims - to_remove + to_add
+
+
+def _get_col_names(header_info):
+ return [x['name'][3:] for x in header_info]
+
+def _get_column_types(header_info):
+ return [(x['name'][3:], x['columnType']) for x in header_info]
+
+def _get_dim_names(header_info):
+ return [x['name'][3:] for x in header_info
+ if x['columnType'] == u'DIMENSION']
+
+def _get_met_names(header_info):
+ return [x['name'][3:] for x in header_info
+ if x['columnType'] == u'METRIC']
+
+def _get_data_types(header_info):
+ return [(x['name'][3:], TYPE_MAP.get(x['dataType'], object))
+ for x in header_info]
diff --git a/pandas/io/tests/test_ga.py b/pandas/io/tests/test_ga.py
new file mode 100644
index 0000000000000..09d325fa6eef1
--- /dev/null
+++ b/pandas/io/tests/test_ga.py
@@ -0,0 +1,115 @@
+import unittest
+import nose
+import httplib2
+from datetime import datetime
+
+import pandas as pd
+import pandas.core.common as com
+from pandas import DataFrame
+from pandas.util.testing import network, assert_frame_equal
+from numpy.testing.decorators import slow
+
+class TestGoogle(unittest.TestCase):
+
+ _multiprocess_can_split_ = True
+
+ @slow
+ @network
+ def test_getdata(self):
+ try:
+ from pandas.io.ga import GAnalytics, read_ga
+ from pandas.io.auth import AuthenticationConfigError
+ except ImportError:
+ raise nose.SkipTest
+
+ try:
+ end_date = datetime.now()
+ start_date = end_date - pd.offsets.Day() * 5
+ end_date = end_date.strftime('%Y-%m-%d')
+ start_date = start_date.strftime('%Y-%m-%d')
+
+ reader = GAnalytics()
+ df = reader.get_data(
+ metrics=['avgTimeOnSite', 'visitors', 'newVisits',
+ 'pageviewsPerVisit'],
+ start_date = start_date,
+ end_date = end_date,
+ dimensions=['date', 'hour'],
+ parse_dates={'ts' : ['date', 'hour']})
+
+ assert isinstance(df, DataFrame)
+ assert isinstance(df.index, pd.DatetimeIndex)
+ assert len(df) > 1
+ assert 'date' not in df
+ assert 'hour' not in df
+ assert df.index.name == 'ts'
+ assert 'avgTimeOnSite' in df
+ assert 'visitors' in df
+ assert 'newVisits' in df
+ assert 'pageviewsPerVisit' in df
+
+ df2 = read_ga(
+ metrics=['avgTimeOnSite', 'visitors', 'newVisits',
+ 'pageviewsPerVisit'],
+ start_date=start_date,
+ end_date=end_date,
+ dimensions=['date', 'hour'],
+ parse_dates={'ts' : ['date', 'hour']})
+
+ assert_frame_equal(df, df2)
+
+ except AuthenticationConfigError:
+ raise nose.SkipTest
+ except httplib2.ServerNotFoundError:
+ try:
+ h = httplib2.Http()
+ response, content = h.request("http://www.google.com")
+ raise
+ except httplib2.ServerNotFoundError:
+ raise nose.SkipTest
+
+ @slow
+ @network
+ def test_iterator(self):
+ try:
+ from pandas.io.ga import GAnalytics, read_ga
+ from pandas.io.auth import AuthenticationConfigError
+ except ImportError:
+ raise nose.SkipTest
+
+ try:
+ reader = GAnalytics()
+
+ it = reader.get_data(
+ metrics='visitors',
+ start_date='2005-1-1',
+ dimensions='date',
+ max_results=10, chunksize=5)
+
+ df1 = it.next()
+ df2 = it.next()
+
+ for df in [df1, df2]:
+ assert isinstance(df, DataFrame)
+ assert isinstance(df.index, pd.DatetimeIndex)
+ assert len(df) == 5
+ assert 'date' not in df
+ assert df.index.name == 'date'
+ assert 'visitors' in df
+
+ assert (df2.index > df1.index).all()
+
+ except AuthenticationConfigError:
+ raise nose.SkipTest
+ except httplib2.ServerNotFoundError:
+ try:
+ h = httplib2.Http()
+ response, content = h.request("http://www.google.com")
+ raise
+ except httplib2.ServerNotFoundError:
+ raise nose.SkipTest
+
+if __name__ == '__main__':
+ import nose
+ nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ exit=False)
| skips tests if there is no network connection or the authentication config file is missing
@wesm, mainly just want a quick review of the testing suite to make sure it skips the test in the right situations.
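The `ga:`-prefix handling in `_maybe_add_arg`/`format_query` above is pure Python and can be sanity-checked without network access. A minimal standalone sketch (the function here is a re-implementation for illustration, modernized to Python 3 `str` in place of `basestring`, not an import from `pandas.io.ga`):

```python
def maybe_add_arg(query, field, data):
    # mirror of the PR's _maybe_add_arg: skip None, listify bare strings,
    # and join values with the "ga:" prefix the Google Analytics API expects
    if data is not None:
        if isinstance(data, str):
            data = [data]
        query[field] = ",".join("ga:%s" % x for x in data)

qry = {"ids": "ga:12345", "metrics": "ga:visitors"}
maybe_add_arg(qry, "dimensions", ["date", "hour"])
maybe_add_arg(qry, "segment", None)   # None fields are omitted entirely
print(qry["dimensions"])              # ga:date,ga:hour
```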
| https://api.github.com/repos/pandas-dev/pandas/pulls/2283 | 2012-11-18T20:59:41Z | 2012-12-01T15:25:45Z | 2012-12-01T15:25:45Z | 2014-06-14T13:14:30Z |
Easy vbench testing of HEAD against a known-good baseline | diff --git a/test_perf.sh b/test_perf.sh
new file mode 100755
index 0000000000000..5880769dae177
--- /dev/null
+++ b/test_perf.sh
@@ -0,0 +1,12 @@
+#!/bin/sh
+
+CURDIR=$(pwd)
+BASEDIR=$(readlink -f $(dirname $0 ))
+
+echo "Use vbench to compare HEAD against a known-good baseline."
+echo "Make sure the python 'vbench' library is installed..\n"
+
+cd "$BASEDIR/vb_suite/"
+python test_perf.py
+
+cd "$CURDIR"
diff --git a/vb_suite/binary_ops.py b/vb_suite/binary_ops.py
index c66f43f526ba1..b28d1d9ee0806 100644
--- a/vb_suite/binary_ops.py
+++ b/vb_suite/binary_ops.py
@@ -3,24 +3,3 @@
common_setup = """from pandas_vb_common import *
"""
-
-#----------------------------------------------------------------------
-# data alignment
-
-setup = common_setup + """n = 1000000
-# indices = Index([rands(10) for _ in xrange(n)])
-def sample(values, k):
- sampler = np.random.permutation(len(values))
- return values.take(sampler[:k])
-sz = 500000
-rng = np.arange(0, 10000000000000, 10000000)
-stamps = np.datetime64(datetime.now()).view('i8') + rng
-idx1 = np.sort(sample(stamps, sz))
-idx2 = np.sort(sample(stamps, sz))
-ts1 = Series(np.random.randn(sz), idx1)
-ts2 = Series(np.random.randn(sz), idx2)
-"""
-stmt = "ts1 + ts2"
-series_align_int64_index_binop = Benchmark(stmt, setup,
- start_date=datetime(2010, 6, 1),
- logy=True)
diff --git a/vb_suite/test_perf.py b/vb_suite/test_perf.py
new file mode 100755
index 0000000000000..1d9ee694204d6
--- /dev/null
+++ b/vb_suite/test_perf.py
@@ -0,0 +1,133 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+"""
+What
+----
+vbench is a library which can be used to benchmark the performance
+of a codebase over time.
+Although vbench can collect data over many commits, generate plots
+and other niceties, for Pull-Requests the important thing is the
+performance of the HEAD commit against a known-good baseline.
+
+This script tries to automate the process of comparing these
+two commits, and is meant to run out of the box on a fresh
+clone.
+
+How
+---
+These are the steps taken:
+1) create a temp directory into which vbench will clone the temporary repo.
+2) parse the Git tree to obtain metadata, and determine the HEAD.
+3) instantiate a vbench runner, using the local repo as the source repo.
+4) If results for the BASELINE_COMMIT aren't already in the db, have vbench
+do a run for it and store the results.
+5) perform a vbench run for HEAD and store the results.
+6) pull the results for both commits from the db. use pandas to align
+everything and calculate a ratio for the timing information.
+7) print the results to the log file and to stdout.
+
+Known Issues: vbench fails to locate a baseline if HEAD is not a descendant of the baseline commit
+"""
+import sys
+import shutil
+
+from pandas import *
+from vbench.api import BenchmarkRunner
+from vbench.db import BenchmarkDB
+from vbench.git import GitRepo
+import tempfile
+
+from suite import *
+
+BASELINE_COMMIT = 'bdbca8e' # v0.9.1 + regression fix
+LOG_FILE = os.path.abspath(os.path.join(REPO_PATH, 'vb_suite.log'))
+
+def get_results_df(db,rev):
+ """Takes a git commit hash and returns a Dataframe of benchmark results
+ """
+ bench = DataFrame(db.get_benchmarks())
+ results = DataFrame(db.get_rev_results(rev).values())
+
+    # Since vbench.db._reg_rev_results returns an unlabeled dict,
+ # we have to break encapsulation a bit.
+ results.columns = db._results.c.keys()
+ results = results.join(bench['name'], on='checksum').set_index("checksum")
+ return results
+
+def prprint(s):
+ print("*** %s"%s)
+
+def main():
+ TMP_DIR = tempfile.mkdtemp()
+ prprint("TMP_DIR = %s" % TMP_DIR)
+ prprint("LOG_FILE = %s\n" % LOG_FILE)
+
+ try:
+ logfile = open(LOG_FILE, 'w')
+
+ prprint( "Processing Repo at '%s'..." % REPO_PATH)
+ repo = GitRepo(REPO_PATH)
+
+ # get hashes of baseline and current head
+ h_head = repo.shas[-1]
+ h_baseline = BASELINE_COMMIT
+
+ prprint( "Opening DB at '%s'...\n" % DB_PATH)
+ db = BenchmarkDB(DB_PATH)
+
+ prprint( 'Comparing Head [%s] : %s ' % (h_head, repo.messages.get(h_head,"")))
+ prprint( 'Against baseline [%s] : %s \n' % (h_baseline,
+ repo.messages.get(h_baseline,"")))
+
+ prprint("Initializing Runner...")
+ runner = BenchmarkRunner(benchmarks, REPO_PATH, REPO_PATH, BUILD, DB_PATH,
+ TMP_DIR, PREPARE, always_clean=True,
+ # run_option='eod', start_date=START_DATE,
+ module_dependencies=dependencies)
+
+ prprint ("removing any previous measurements for the commits." )
+ db.delete_rev_results(h_baseline)
+ db.delete_rev_results(h_head)
+
+ # TODO: we could skip this, but we need to make sure all
+ # results are in the DB, which is a little tricky with
+ # start dates and so on.
+ prprint( "Running benchmarks for baseline commit '%s'" % h_baseline)
+ runner._run_and_write_results(h_baseline)
+
+ prprint ("Running benchmarks for current HEAD '%s'" % h_head)
+ runner._run_and_write_results(h_head)
+
+ prprint( 'Processing results...')
+
+ head_res = get_results_df(db,h_head)
+ baseline_res = get_results_df(db,h_baseline)
+ ratio = head_res['timing']/baseline_res['timing']
+ totals = DataFrame(dict(t_head=head_res['timing'],
+ t_baseline=baseline_res['timing'],
+ ratio=ratio,
+ name=baseline_res.name),columns=["t_head","t_baseline","ratio","name"])
+ totals = totals.ix[totals.t_head > 1.0] # ignore sub 1ms
+ totals = totals.dropna().sort("ratio").set_index('name') # sort in ascending order
+
+ s = "\n\nResults:\n" + totals.to_string(float_format=lambda x: "%0.2f" %x) + "\n\n"
+ s += "Columns: test_name | head_time [ms] | baseline_time [ms] | ratio\n\n"
+        s += "- a Ratio of 1.30 means HEAD is 30% slower than the Baseline.\n\n"
+
+ s += 'Head [%s] : %s\n' % (h_head, repo.messages.get(h_head,""))
+ s += 'Baseline [%s] : %s\n\n' % (h_baseline,repo.messages.get(h_baseline,""))
+
+ logfile.write(s)
+ logfile.close()
+
+ prprint(s )
+ prprint("Results were also written to the logfile at '%s'\n" % LOG_FILE)
+
+ finally:
+ # print("Disposing of TMP_DIR: %s" % TMP_DIR)
+ shutil.rmtree(TMP_DIR)
+ logfile.close()
+
+if __name__ == '__main__':
+ main()
| I hope to avoid more performance regressions in the future, in my own PRs
if no one else's...
Here's some example output:
```
groupby_apply_dict_return 33.20 33.26 1.00
frame_reindex_axis1 2.32 2.32 1.00
frame_fillna_inplace 18.34 18.31 1.00
frame_ctor_nested_dict_int64 106.89 106.40 1.00
index_int64_union 94.21 93.68 1.01
write_csv_standard 309.77 307.62 1.01
groupby_multi_different_functions 16.72 16.60 1.01
frame_ctor_nested_dict 75.67 74.62 1.01
groupby_multi_python 54.00 53.18 1.02
frame_to_csv 356.20 347.55 1.02
groupby_multi_cython 21.21 20.63 1.03
groupby_indices 9.22 8.95 1.03
frame_ctor_list_of_dict 95.60 91.88 1.04
series_ctor_from_dict 2.84 2.65 1.07
sort_level_zero 6.46 6.00 1.08
groupby_simple_compress_timing 70.36 62.56 1.12
Columns: test_name | head_time [ms] | baseline_time [ms] | ratio
- a Ratio of 1.30 means HEAD is 30% slower than the Baseline.
Head [951d13b] : ENH: Add test_perf.sh, make vbench dead easy.
Baseline [bdbca8e] : BUG: fix borked data-copying iteritems performance affecting DataFrame and SparseDataFrame. close #2273
*** Results were also written to the logfile at '/home/user1/src/pandas/vb_suite.log'
```
benchmarks for which HEAD takes less than 1ms are dropped.
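The totals table printed by the script above can be reproduced with a few lines of plain pandas. A sketch with made-up timings (the benchmark names and millisecond values here are hypothetical, and `sort_values` is used in place of the long-deprecated `sort`):

```python
import pandas as pd

# hypothetical per-benchmark timings in milliseconds
head = pd.Series({"groupby_multi_cython": 21.21,
                  "sort_level_zero": 6.46,
                  "tiny_bench": 0.4})
baseline = pd.Series({"groupby_multi_cython": 20.63,
                      "sort_level_zero": 6.00,
                      "tiny_bench": 0.5})

# dict of Series aligns on the shared benchmark-name index
totals = pd.DataFrame({"t_head": head,
                       "t_baseline": baseline,
                       "ratio": head / baseline})
totals = totals[totals.t_head > 1.0]     # ignore sub-1ms benchmarks
totals = totals.sort_values("ratio")     # ascending: worst regressions last
print(totals)
```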
## Remaining issues
1) vb_suite/join_merge.py contains two tests named 'join_dataframe_index_single_key_bigger'
, easy enough to fix, but that might break existing dbs, i'm not sure, so I left it alone.
2) Runner seems to choke on refs which are not an ancestor of the current HEAD
this looks like a shortcoming in vbench.GitRepo.
example trace:
``` python
Traceback (most recent call last):
File "test_perf.py", line 134, in <module>
main()
File "test_perf.py", line 94, in main
runner._run_and_write_results(h_baseline)
File "/usr/local/lib/python2.7/dist-packages/vbench/runner.py", line 87, in _run_and_write_results
n_active_benchmarks, results = self._run_revision(rev)
File "/usr/local/lib/python2.7/dist-packages/vbench/runner.py", line 119, in _run_revision
need_to_run = self._get_benchmarks_for_rev(rev)
File "/usr/local/lib/python2.7/dist-packages/vbench/runner.py", line 173, in _get_benchmarks_for_rev
timestamp = self.repo.timestamps[rev]
File "/home/user1/src/pandas/pandas/core/series.py", line 470, in __getitem__
return self.index.get_value(self, key)
File "/home/user1/src/pandas/pandas/core/index.py", line 692, in get_value
raise e1
KeyError: 'bdbca8e'
```
3) redirecting stdout/stderr does not work, maybe because os.system is used
under the covers. It frustrated my efforts to funnel the crud into a log file
and keep the output readable.
4) it would be nice if vbench.db.BenchmarkDB.get_ref_result() returned a fully
labeled DataFrame, rather than just a partial dict (test names are not included)
Something like:
``` python
def get_rev_full_results(self, revision):
tab = self._results
bench = self._benchmarks
stmt = sql.select([tab.c.timestamp, tab.c.revision, tab.c.ncalls,
tab.c.timing, tab.c.traceback, tab.c.description,
tab.c.checksum,bench.c.name],
sql.and_(tab.c.revision == revision,tab.c.checksum == bench.c.checksum))
results = self.conn.execute(stmt)
df = _sqa_to_frame(results).set_index('name')
return df.sort_index()
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/2279 | 2012-11-18T00:52:58Z | 2012-11-23T16:33:53Z | null | 2012-11-25T04:52:55Z |
ENH: add encoding/decoding error handling | diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index b3a12fa8b0f3b..469804ecca67e 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -519,35 +519,37 @@ def str_get(arr, i):
return _na_map(f, arr)
-def str_decode(arr, encoding):
+def str_decode(arr, encoding, errors="strict"):
"""
Decode character string to unicode using indicated encoding
Parameters
----------
encoding : string
+ errors : string
Returns
-------
decoded : array
"""
- f = lambda x: x.decode(encoding)
+ f = lambda x: x.decode(encoding, errors)
return _na_map(f, arr)
-def str_encode(arr, encoding):
+def str_encode(arr, encoding, errors="strict"):
"""
- Encode character string to unicode using indicated encoding
+    Encode character string using indicated encoding
Parameters
----------
encoding : string
+ errors : string
Returns
-------
encoded : array
"""
- f = lambda x: x.encode(encoding)
+ f = lambda x: x.encode(encoding, errors)
return _na_map(f, arr)
@@ -675,13 +677,13 @@ def slice_replace(self, i=None, j=None):
raise NotImplementedError
@copy(str_decode)
- def decode(self, encoding):
- result = str_decode(self.series, encoding)
+ def decode(self, encoding, errors="strict"):
+ result = str_decode(self.series, encoding, errors)
return self._wrap_result(result)
@copy(str_encode)
- def encode(self, encoding):
- result = str_encode(self.series, encoding)
+ def encode(self, encoding, errors="strict"):
+ result = str_encode(self.series, encoding, errors)
return self._wrap_result(result)
count = _pat_wrapper(str_count, flags=True)
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index 8138976bc3c96..a3a4718395e64 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -690,7 +690,7 @@ def test_match_findall_flags(self):
self.assertEquals(result[0], True)
def test_encode_decode(self):
- base = Series([u'a', u'b', u'\xe4'])
+ base = Series([u'a', u'b', u'a\xe4'])
series = base.str.encode('utf-8')
f = lambda x: x.decode('utf-8')
@@ -699,6 +699,25 @@ def test_encode_decode(self):
tm.assert_series_equal(result, exp)
+ def test_encode_decode_errors(self):
+ encodeBase = Series([u'a', u'b', u'a\x9d'])
+ with self.assertRaises(UnicodeEncodeError):
+ encodeBase.str.encode('cp1252')
+
+ f = lambda x: x.encode('cp1252', 'ignore')
+ result = encodeBase.str.encode('cp1252', 'ignore')
+ exp = encodeBase.map(f)
+ tm.assert_series_equal(result, exp)
+
+ decodeBase = Series(['a', 'b', 'a\x9d'])
+ with self.assertRaises(UnicodeDecodeError):
+            decodeBase.str.decode('cp1252')
+ f = lambda x: x.decode('cp1252', 'ignore')
+ result = decodeBase.str.decode('cp1252', 'ignore')
+ exp = decodeBase.map(f)
+
+ tm.assert_series_equal(result, exp)
+
if __name__ == '__main__':
nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
exit=False)
| When encoding/decoding strings, this allows you to pass an error-handling mode string. This works the same as error handling for other encode/decode functions. Defaults to 'strict', but you can pass 'ignore' or 'replace'.
Extends work done in #1706.
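With the patch applied, the behavior looks like the sketch below. This also holds for modern pandas, where `Series.str.encode` accepts the same `errors` argument; `'\x9d'` (U+009D) is used because it has no cp1252 mapping:

```python
import pandas as pd

s = pd.Series(["a", "b", "a\x9d"])   # U+009D cannot be encoded as cp1252

# s.str.encode("cp1252") would raise UnicodeEncodeError under the
# default errors="strict"; "ignore" silently drops the bad character
encoded = s.str.encode("cp1252", errors="ignore")
print(encoded.tolist())              # [b'a', b'b', b'a']
```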
| https://api.github.com/repos/pandas-dev/pandas/pulls/2276 | 2012-11-16T21:58:47Z | 2012-11-17T03:50:06Z | null | 2014-06-12T13:19:13Z |
Added support for specifying colors in parallel_plot() | diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py
index 0724799ced6f2..708f8143de3d5 100644
--- a/pandas/tests/test_graphics.py
+++ b/pandas/tests/test_graphics.py
@@ -460,6 +460,10 @@ def test_parallel_coordinates(self):
path = os.path.join(curpath(), 'data/iris.csv')
df = read_csv(path)
_check_plot_works(parallel_coordinates, df, 'Name')
+ _check_plot_works(parallel_coordinates, df, 'Name',
+ colors=('#556270', '#4ECDC4', '#C7F464'))
+ _check_plot_works(parallel_coordinates, df, 'Name',
+ colors=['dodgerblue', 'aquamarine', 'seagreen'])
@slow
def test_radviz(self):
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index 2e6faf5eb9362..c49d150aabd8a 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -411,7 +411,8 @@ def bootstrap_plot(series, fig=None, size=50, samples=500, **kwds):
return fig
-def parallel_coordinates(data, class_column, cols=None, ax=None, **kwds):
+def parallel_coordinates(data, class_column, cols=None, ax=None, colors=None,
+ **kwds):
"""Parallel coordinates plotting.
Parameters:
@@ -420,6 +421,7 @@ def parallel_coordinates(data, class_column, cols=None, ax=None, **kwds):
class_column: Column name containing class names
cols: A list of column names to use, optional
ax: matplotlib axis object, optional
+ colors: A list or tuple of colors to use for the different classes, optional
kwds: A list of keywords for matplotlib plot method
Returns:
@@ -449,6 +451,14 @@ def random_color(column):
if ax == None:
ax = plt.gca()
+ # if user has not specified colors to use, choose at random
+ if colors is None:
+ colors = dict((kls, random_color(kls)) for kls in classes)
+ else:
+ if len(colors) != len(classes):
+ raise ValueError('Number of colors must match number of classes')
+ colors = dict((kls, colors[i]) for i, kls in enumerate(classes))
+
for i in range(n):
row = df.irow(i).values
y = row
@@ -456,10 +466,10 @@ def random_color(column):
if com.pprint_thing(kls) not in used_legends:
label = com.pprint_thing(kls)
used_legends.add(label)
- ax.plot(x, y, color=random_color(kls),
+ ax.plot(x, y, color=colors[kls],
label=label, **kwds)
else:
- ax.plot(x, y, color=random_color(kls), **kwds)
+ ax.plot(x, y, color=colors[kls], **kwds)
for i in range(ncols):
ax.axvline(i, linewidth=1, color='black')
| Sometimes the random colors chosen in parallel_plot are very unfortunate, so I thought it would be nice to provide an option to pass a list of colors in.
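The validation added in the diff is simple enough to sketch standalone (this is a re-implementation of the class-to-color bookkeeping for illustration, not the plotting call itself; class and color names are arbitrary):

```python
def assign_colors(classes, colors=None):
    # mirrors the PR's check: if explicit colors are given, their count
    # must match the number of classes; None means "pick at random" upstream
    if colors is None:
        return None
    if len(colors) != len(classes):
        raise ValueError("Number of colors must match number of classes")
    return dict(zip(classes, colors))

mapping = assign_colors(["setosa", "versicolor", "virginica"],
                        ["dodgerblue", "aquamarine", "seagreen"])
print(mapping["virginica"])   # seagreen
```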
| https://api.github.com/repos/pandas-dev/pandas/pulls/2274 | 2012-11-16T20:52:32Z | 2012-11-19T00:11:39Z | null | 2012-11-19T00:27:59Z |
ENH: pprint_thing escapes tabs. close #2038 | diff --git a/pandas/core/common.py b/pandas/core/common.py
index 936f1d6357e4e..6c0cda736fd2c 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -1178,6 +1178,8 @@ def pprint_thing(thing, _nest_lvl=0):
# either utf-8 or we replace errors
result = str(thing).decode('utf-8', "replace")
+ result=result.replace("\t",r'\t') # escape tabs
+
return unicode(result) # always unicode
diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py
index 50d0403efaa0d..44c40b6930784 100644
--- a/pandas/tests/test_common.py
+++ b/pandas/tests/test_common.py
@@ -211,6 +211,11 @@ def test_pprint_thing():
assert(pp_t(('foo',u'\u05d0',(u'\u05d0',u'\u05d0')))==
u'(foo, \u05d0, (\u05d0, \u05d0))')
+
+ # escape embedded tabs in string
+ # GH #2038
+ assert not "\t" in pp_t("a\tb")
+
class TestTake(unittest.TestCase):
def test_1d_with_out(self):
| another win for DRY.
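The fix is a one-line string substitution; a standalone sketch of the escaping behavior (the real change lives inside `pandas.core.common.pprint_thing`):

```python
def escape_tabs(result):
    # replace a literal TAB with the two characters backslash + "t",
    # as the patch does, so embedded tabs can't break repr alignment
    return result.replace("\t", r"\t")

print(escape_tabs("a\tb"))   # a\tb  (backslash-t, no real tab remains)
```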
| https://api.github.com/repos/pandas-dev/pandas/pulls/2271 | 2012-11-16T17:20:54Z | 2012-11-17T03:53:16Z | null | 2012-11-17T03:53:16Z |
BUG: loses nano precision when converting Timestamp objects #2252 | diff --git a/pandas/src/datetime.pyx b/pandas/src/datetime.pyx
index bdfa04eaca441..846c3f4d21380 100644
--- a/pandas/src/datetime.pyx
+++ b/pandas/src/datetime.pyx
@@ -605,6 +605,9 @@ cpdef convert_to_tsobject(object ts, object tz=None):
if obj.tzinfo is not None and not _is_utc(obj.tzinfo):
offset = _get_utcoffset(obj.tzinfo, ts)
obj.value -= _delta_to_nanoseconds(offset)
+
+ if is_timestamp(ts):
+ obj.value += ts.nanosecond
_check_dts_bounds(obj.value, &obj.dts)
return obj
elif PyDate_Check(ts):
@@ -810,6 +813,8 @@ def array_to_datetime(ndarray[object] values, raise_=False, dayfirst=False,
'utc=True')
else:
iresult[i] = _pydatetime_to_dts(val, &dts)
+ if is_timestamp(val):
+ iresult[i] += val.nanosecond
_check_dts_bounds(iresult[i], &dts)
elif PyDate_Check(val):
iresult[i] = _date_to_datetime64(val, &dts)
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index daaa86f681ee1..86feb68052f67 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -1333,6 +1333,14 @@ def test_to_period_nofreq(self):
freq='infer')
idx.to_period()
+ def test_000constructor_resolution(self):
+ #2252
+ t1 = Timestamp((1352934390*1000000000)+1000000+1000+1)
+ idx = DatetimeIndex([t1])
+
+ self.assert_(idx.nanosecond[0] == t1.nanosecond)
+
+
def test_constructor_coverage(self):
rng = date_range('1/1/2000', periods=10.5)
exp = date_range('1/1/2000', periods=10)
| quick and dirty: add nanoseconds after conversion to datetimestruct #2252
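The test case from the diff can be run as-is against modern pandas, where this fix has long since been merged (a bare integer passed to `Timestamp` is interpreted as nanoseconds since the epoch):

```python
import pandas as pd

# epoch seconds scaled to ns, plus 1 ms + 1 us + 1 ns
value = 1352934390 * 1_000_000_000 + 1_000_000 + 1_000 + 1
t1 = pd.Timestamp(value)
idx = pd.DatetimeIndex([t1])

# before the fix, the trailing nanosecond was dropped when the
# Timestamp was converted; after it, both sides agree
print(t1.nanosecond, idx.nanosecond[0])   # 1 1
```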
| https://api.github.com/repos/pandas-dev/pandas/pulls/2265 | 2012-11-15T21:19:02Z | 2012-11-23T21:11:39Z | null | 2014-06-22T14:35:35Z |
Pytables selection enhancement & docs update for HDF5 tables | diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index a77e2c928abfa..6eb141930f274 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -231,22 +231,49 @@ Note, with the :ref:`advanced indexing <indexing.advanced>` ``ix`` method, you
may select along more than one axis using boolean vectors combined with other
indexing expressions.
-Indexing a DataFrame with a boolean DataFrame
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Where and Masking
+~~~~~~~~~~~~~~~~~
-You may wish to set values on a DataFrame based on some boolean criteria
-derived from itself or another DataFrame or set of DataFrames. This can be done
-intuitively like so:
+Selecting values from a DataFrame is accomplished in a similar manner to a Series.
+You index the Frame with a boolean DataFrame of the same size. This is accomplished
+via the method `where` under the hood. The returned view of the DataFrame is the
+same size as the original.
+
+.. ipython:: python
+
+ df < 0
+ df[df < 0]
+
+In addition, `where` takes an optional `other` argument for replacement in the
+returned copy.
+
+.. ipython:: python
+
+ df.where(df < 0, -df)
+
+You may wish to set values on a DataFrame based on some boolean criteria.
+This can be done intuitively like so:
.. ipython:: python
df2 = df.copy()
- df2 < 0
df2[df2 < 0] = 0
df2
-Note that such an operation requires that the boolean DataFrame is indexed
-exactly the same.
+Furthermore, `where` aligns the input boolean condition (ndarray or DataFrame), such that partial selection
+with setting is possible. This is analogous to partial setting via `.ix` (but on the contents rather than the axis labels)
+
+.. ipython:: python
+
+ df2 = df.copy()
+ df2[ df2[1:4] > 0 ] = 3
+ df2
+
+`DataFrame.mask` is the inverse boolean operation of `where`.
+
+.. ipython:: python
+
+ df.mask(df >= 0)
Take Methods
diff --git a/doc/source/io.rst b/doc/source/io.rst
index f74120ad7ef57..76bd123acf8aa 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1,3 +1,4 @@
+
.. _io:
.. currentmodule:: pandas
@@ -812,8 +813,114 @@ In a current or later Python session, you can retrieve stored objects:
os.remove('store.h5')
-.. Storing in Table format
-.. ~~~~~~~~~~~~~~~~~~~~~~~
+Storing in Table format
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``HDFStore`` supports another *PyTables* format on disk, the *table* format. Conceptually a *table* is shaped
+very much like a DataFrame, with rows and columns. A *table* may be appended to in the same or other sessions.
+In addition, delete and query operations are supported. You can create an index with ``create_table_index``
+after data is already in the table (this may become automatic in the future or an option on appending/putting a *table*).
+
+.. ipython:: python
+ :suppress:
+ :okexcept:
+
+ os.remove('store.h5')
+
+.. ipython:: python
+
+ store = HDFStore('store.h5')
+ df1 = df[0:4]
+ df2 = df[4:]
+ store.append('df', df1)
+ store.append('df', df2)
+
+ store.select('df')
+
+ store.create_table_index('df')
+ store.handle.root.df.table
+
+.. ipython:: python
+ :suppress:
+
+ store.close()
+ import os
+ os.remove('store.h5')
+
+
+Querying objects stored in Table format
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``select`` and ``delete`` operations have an optional criteria that can be specified to select/delete only
+a subset of the data. This allows one to have a very large on-disk table and retrieve only a portion of the data.
+
+A query is specified using the ``Term`` class under the hood.
+
+ - 'index' refers to the index of a DataFrame
+ - 'major_axis' and 'minor_axis' are supported indexers of the Panel
+
+The following are all valid terms.
+
+.. code-block:: python
+
+ dict(field = 'index', op = '>', value = '20121114')
+ ('index', '>', '20121114')
+ 'index>20121114'
+ ('index', '>', datetime(2012,11,14))
+
+ ('index', ['20121114','20121115'])
+ ('major', Timestamp('2012/11/14'))
+ ('minor_axis', ['A','B'])
+
+Queries are built up using a list of terms (currently only *anding* of terms is supported). An example query for a panel might be specified as follows:
+
+.. code-block:: python
+
+ ['major_axis>20121114', ('minor_axis', ['A','B']) ]
+
+This is roughly translated to: major_axis must be greater than the date 20121114 and the minor_axis must be A or B
+
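That translation can be sketched in plain Python (a simplified stand-in for the numexpr condition strings that `Term` builds; `term_to_condition` is a hypothetical helper, not the pandas implementation):

```python
def term_to_condition(field, op, values):
    # equality against several values becomes an OR of comparisons;
    # inequalities take a single value
    if op == '=':
        return '(%s)' % ' | '.join("(%s == '%s')" % (field, v) for v in values)
    return '(%s %s %s)' % (field, op, values[0])

# the terms are then ANDed together into one condition string
conds = [term_to_condition('major_axis', '>', ['20121114']),
         term_to_condition('minor_axis', '=', ['A', 'B'])]
combined = '(%s)' % ' & '.join(conds)
assert combined == "((major_axis > 20121114) & ((minor_axis == 'A') | (minor_axis == 'B')))"
```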
+.. ipython:: python
+
+ store = HDFStore('store.h5')
+ store.append('wp',wp)
+ store.select('wp',[ 'major_axis>20000102', ('minor_axis', ['A','B']) ])
+
+Delete objects stored in Table format
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. ipython:: python
+
+ store.remove('wp', 'index>20000102' )
+ store.select('wp')
+
+.. ipython:: python
+ :suppress:
+
+ store.close()
+ import os
+ os.remove('store.h5')
-.. Querying objects stored in Table format
-.. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Notes & Caveats
+~~~~~~~~~~~~~~~
+
+ - Selection by items (the top level panel dimension) is not possible; you always get all of the items in the returned Panel
+ - Currently the sizes of the *column* items are governed by the first table creation
+ (this should be specified at creation time or use the largest available) - otherwise subsequent appends can truncate the column names
+ - Mixed-Type Panels/DataFrames are not currently supported - coming soon!
+ - Once a *table* is created its items (Panel) / columns (DataFrame) are fixed; only exactly the same columns can be appended
+ - Appending to an already existing table will raise an exception if any of the indexers (index, major_axis or minor_axis) are strings
+   and they would be truncated because the column size is too small (you can pass ``min_itemsize`` to ``append`` to provide a larger fixed size
+   to compensate)
+
+Performance
+~~~~~~~~~~~
+
+ - To delete a lot of data, it is sometimes better to erase the table and rewrite it (after say an indexing operation)
+ *PyTables* tends to increase the file size with deletions
+ - In general it is best to store Panels with the most frequently selected dimension in the minor axis and a time/date like dimension in the major axis
+ but this is not required, major_axis and minor_axis can be any valid Panel index
+ - No dimensions are currently indexed automagically (in the *PyTables* sense); these require an explicit call to ``create_table_index``
+ - *Tables* offer better performance when compressed after writing them (as opposed to turning on compression at the very beginning);
+   use the *PyTables* utility ``ptrepack`` to rewrite the file (it can also change compression methods)
+ - Duplicate rows can be written, but are filtered out in selection (with the last items being selected; thus a table is unique on major, minor pairs)
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index af480b5a6457f..bc8967973808e 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -7,6 +7,7 @@
from datetime import datetime, date
import time
+import re
import numpy as np
from pandas import (
@@ -67,12 +68,20 @@
# oh the troubles to reduce import time
_table_mod = None
+_table_supports_index = True
def _tables():
global _table_mod
+ global _table_supports_index
if _table_mod is None:
import tables
_table_mod = tables
+
+ # version requirements
+        major, minor, subv = [int(x) for x in tables.__version__.split('.')]
+ if major >= 2 and minor >= 3:
+ _table_supports_index = True
+
return _table_mod
@@ -321,7 +330,7 @@ def select(self, key, where=None):
return self._read_group(group, where)
def put(self, key, value, table=False, append=False,
- compression=None):
+ compression=None, **kwargs):
"""
Store object in HDFStore
@@ -342,7 +351,7 @@ def put(self, key, value, table=False, append=False,
be used.
"""
self._write_to_group(key, value, table=table, append=append,
- comp=compression)
+ comp=compression, **kwargs)
def _get_handler(self, op, kind):
return getattr(self, '_%s_%s' % (op, kind))
@@ -370,7 +379,7 @@ def remove(self, key, where=None):
if group is not None:
self._delete_from_table(group, where)
- def append(self, key, value):
+ def append(self, key, value, **kwargs):
"""
Append to Table in file. Node must already exist and be Table
format.
@@ -385,10 +394,58 @@ def append(self, key, value):
Does *not* check if data being appended overlaps with existing
data in the table, so be careful
"""
- self._write_to_group(key, value, table=True, append=True)
+ self._write_to_group(key, value, table=True, append=True, **kwargs)
+
+ def create_table_index(self, key, columns = None, optlevel = None, kind = None):
+ """
+ Create a pytables index on the specified columns
+ note: cannot index Time64Col() currently; PyTables must be >= 2.3.1
+
+
+        Parameters
+ ----------
+ key : object (the node to index)
+ columns : None or list_like (the columns to index - currently supports index/column)
+ optlevel: optimization level (defaults to 6)
+ kind : kind of index (defaults to 'medium')
+
+ Exceptions
+ ----------
+ raises if the node is not a table
+
+ """
+
+ # version requirements
+ if not _table_supports_index:
+ raise("PyTables >= 2.3 is required for table indexing")
+
+ group = getattr(self.handle.root, key, None)
+ if group is None: return
+
+ if not _is_table_type(group):
+ raise Exception("cannot create table index on a non-table")
+
+ table = getattr(group, 'table', None)
+ if table is None: return
+
+ if columns is None:
+ columns = ['index']
+ if not isinstance(columns, (tuple,list)):
+ columns = [ columns ]
+
+ kw = dict()
+ if optlevel is not None:
+ kw['optlevel'] = optlevel
+ if kind is not None:
+ kw['kind'] = kind
+
+ for c in columns:
+ v = getattr(table.cols,c,None)
+ if v is not None and not v.is_indexed:
+ v.createIndex(**kw)
def _write_to_group(self, key, value, table=False, append=False,
- comp=None):
+ comp=None, **kwargs):
root = self.handle.root
if key not in root._v_children:
group = self.handle.createGroup(root, key)
@@ -400,7 +457,7 @@ def _write_to_group(self, key, value, table=False, append=False,
kind = '%s_table' % kind
handler = self._get_handler(op='write', kind=kind)
wrapper = lambda value: handler(group, value, append=append,
- comp=comp)
+ comp=comp, **kwargs)
else:
if append:
raise ValueError('Can only append to Tables')
@@ -530,7 +587,7 @@ def _read_block_manager(self, group):
return BlockManager(blocks, axes)
- def _write_frame_table(self, group, df, append=False, comp=None):
+ def _write_frame_table(self, group, df, append=False, comp=None, **kwargs):
mat = df.values
values = mat.reshape((1,) + mat.shape)
@@ -540,7 +597,7 @@ def _write_frame_table(self, group, df, append=False, comp=None):
self._write_table(group, items=['value'],
index=df.index, columns=df.columns,
- values=values, append=append, compression=comp)
+ values=values, append=append, compression=comp, **kwargs)
def _write_wide(self, group, panel):
panel._consolidate_inplace()
@@ -549,10 +606,10 @@ def _write_wide(self, group, panel):
def _read_wide(self, group, where=None):
return Panel(self._read_block_manager(group))
- def _write_wide_table(self, group, panel, append=False, comp=None):
+ def _write_wide_table(self, group, panel, append=False, comp=None, **kwargs):
self._write_table(group, items=panel.items, index=panel.major_axis,
columns=panel.minor_axis, values=panel.values,
- append=append, compression=comp)
+ append=append, compression=comp, **kwargs)
def _read_wide_table(self, group, where=None):
return self._read_panel_table(group, where)
@@ -569,10 +626,10 @@ def _write_index(self, group, key, index):
self._write_sparse_intindex(group, key, index)
else:
setattr(group._v_attrs, '%s_variety' % key, 'regular')
- converted, kind, _ = _convert_index(index)
- self._write_array(group, key, converted)
+ converted = _convert_index(index).set_name('index')
+ self._write_array(group, key, converted.values)
node = getattr(group, key)
- node._v_attrs.kind = kind
+ node._v_attrs.kind = converted.kind
node._v_attrs.name = index.name
if isinstance(index, (DatetimeIndex, PeriodIndex)):
@@ -629,11 +686,11 @@ def _write_multi_index(self, group, key, index):
index.labels,
index.names)):
# write the level
- conv_level, kind, _ = _convert_index(lev)
level_key = '%s_level%d' % (key, i)
- self._write_array(group, level_key, conv_level)
+ conv_level = _convert_index(lev).set_name(level_key)
+ self._write_array(group, level_key, conv_level.values)
node = getattr(group, level_key)
- node._v_attrs.kind = kind
+ node._v_attrs.kind = conv_level.kind
node._v_attrs.name = name
# write the name
@@ -738,22 +795,28 @@ def _write_array(self, group, key, value):
getattr(group, key)._v_attrs.transposed = transposed
def _write_table(self, group, items=None, index=None, columns=None,
- values=None, append=False, compression=None):
+ values=None, append=False, compression=None,
+ min_itemsize = None, **kwargs):
""" need to check for conform to the existing table:
e.g. columns should match """
- # create dict of types
- index_converted, index_kind, index_t = _convert_index(index)
- columns_converted, cols_kind, col_t = _convert_index(columns)
+
+ # create Col types
+ index_converted = _convert_index(index).set_name('index')
+ columns_converted = _convert_index(columns).set_name('column')
# create the table if it doesn't exist (or get it if it does)
if not append:
if 'table' in group:
self.handle.removeNode(group, 'table')
+ else:
+                # check that we are not truncating on our indices
+ index_converted.maybe_set(min_itemsize = min_itemsize)
+ columns_converted.maybe_set(min_itemsize = min_itemsize)
if 'table' not in group:
# create the table
- desc = {'index': index_t,
- 'column': col_t,
+ desc = {'index' : index_converted.typ,
+ 'column': columns_converted.typ,
'values': _tables().FloatCol(shape=(len(values)))}
options = {'name': 'table',
@@ -775,16 +838,20 @@ def _write_table(self, group, items=None, index=None, columns=None,
# the table must already exist
table = getattr(group, 'table', None)
+        # check that we are not truncating on our indices
+ index_converted.validate(table)
+ columns_converted.validate(table)
+
# check for backwards incompatibility
if append:
- existing_kind = table._v_attrs.index_kind
- if existing_kind != index_kind:
+ existing_kind = getattr(table._v_attrs,'index_kind',None)
+ if existing_kind is not None and existing_kind != index_converted.kind:
raise TypeError("incompatible kind in index [%s - %s]" %
- (existing_kind, index_kind))
+ (existing_kind, index_converted.kind))
# add kinds
- table._v_attrs.index_kind = index_kind
- table._v_attrs.columns_kind = cols_kind
+ table._v_attrs.index_kind = index_converted.kind
+ table._v_attrs.columns_kind = columns_converted.kind
if append:
existing_fields = getattr(table._v_attrs, 'fields', None)
if (existing_fields is not None and
@@ -916,35 +983,90 @@ def _read_panel_table(self, group, where=None):
lp = DataFrame(new_values, index=new_index, columns=lp.columns)
wp = lp.to_panel()
- if sel.column_filter:
- new_minor = sorted(set(wp.minor_axis) & sel.column_filter)
+ if sel.filter:
+ new_minor = sorted(set(wp.minor_axis) & sel.filter)
wp = wp.reindex(minor=new_minor)
return wp
- def _delete_from_table(self, group, where = None):
+ def _delete_from_table(self, group, where):
+ """ delete rows from a group where condition is True """
table = getattr(group, 'table')
# create the selection
- s = Selection(table, where, table._v_attrs.index_kind)
+ s = Selection(table,where,table._v_attrs.index_kind)
s.select_coords()
# delete the rows in reverse order
- l = list(s.values)
- l.reverse()
- for c in l:
- table.removeRows(c)
- self.handle.flush()
- return len(s.values)
+ l = list(s.values)
+ ln = len(l)
+
+ if ln:
+
+ # if we can do a consecutive removal - do it!
+ if l[0]+ln-1 == l[-1]:
+ table.removeRows(start = l[0], stop = l[-1]+1)
+
+ # one by one
+ else:
+ l.reverse()
+ for c in l:
+ table.removeRows(c)
+
+ self.handle.flush()
+ # return the number of rows removed
+ return ln
+
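The consecutive-removal fast path above reduces to a simple check: the coordinates form one contiguous block exactly when the first coordinate plus the row count minus one equals the last coordinate. A minimal sketch of that decision (`removal_plan` is a hypothetical helper, not part of the patch):

```python
def removal_plan(coords):
    # decide between one contiguous removeRows(start, stop) call
    # and reverse-order row-by-row deletion
    n = len(coords)
    if n and coords[0] + n - 1 == coords[-1]:
        return ('range', coords[0], coords[-1] + 1)
    return ('rows', list(reversed(coords)))

assert removal_plan([3, 4, 5, 6]) == ('range', 3, 7)
assert removal_plan([1, 4, 9]) == ('rows', [9, 4, 1])
```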
+class Col(object):
+ """ a column description class
+
+ Parameters
+ ----------
+
+ values : the ndarray like converted values
+ kind : a string description of this type
+ typ : the pytables type
+
+ """
+
+ def __init__(self, values, kind, typ, itemsize = None, **kwargs):
+ self.values = values
+ self.kind = kind
+ self.typ = typ
+ self.itemsize = itemsize
+ self.name = None
+
+ def set_name(self, n):
+ self.name = n
+ return self
+
+ def __iter__(self):
+ return iter(self.values)
+
+ def maybe_set(self, min_itemsize = None, **kwargs):
+ """ maybe set a string col itemsize """
+ if self.kind == 'string' and min_itemsize is not None:
+ if self.typ.itemsize < min_itemsize:
+ self.typ = _tables().StringCol(itemsize = min_itemsize, pos = getattr(self.typ,'pos',None))
+
+ def validate(self, table, **kwargs):
+ """ validate this column for string truncation (or reset to the max size) """
+ if self.kind == 'string':
+
+ # the current column name
+ t = getattr(table.description,self.name,None)
+ if t is not None:
+ if t.itemsize < self.itemsize:
+                    raise Exception("[%s] column has an itemsize of [%s] but a minimum itemsize of [%s] is required!" % (self.name, t.itemsize, self.itemsize))
def _convert_index(index):
if isinstance(index, DatetimeIndex):
converted = index.asi8
- return converted, 'datetime64', _tables().Int64Col()
+ return Col(converted, 'datetime64', _tables().Int64Col())
elif isinstance(index, (Int64Index, PeriodIndex)):
atom = _tables().Int64Col()
- return index.values, 'integer', atom
+ return Col(index.values, 'integer', atom)
if isinstance(index, MultiIndex):
raise Exception('MultiIndex not supported here!')
@@ -955,33 +1077,33 @@ def _convert_index(index):
if inferred_type == 'datetime64':
converted = values.view('i8')
- return converted, 'datetime64', _tables().Int64Col()
+ return Col(converted, 'datetime64', _tables().Int64Col())
elif inferred_type == 'datetime':
converted = np.array([(time.mktime(v.timetuple()) +
v.microsecond / 1E6) for v in values],
dtype=np.float64)
- return converted, 'datetime', _tables().Time64Col()
+ return Col(converted, 'datetime', _tables().Time64Col())
elif inferred_type == 'date':
converted = np.array([time.mktime(v.timetuple()) for v in values],
dtype=np.int32)
- return converted, 'date', _tables().Time32Col()
+ return Col(converted, 'date', _tables().Time32Col())
elif inferred_type == 'string':
converted = np.array(list(values), dtype=np.str_)
itemsize = converted.dtype.itemsize
- return converted, 'string', _tables().StringCol(itemsize)
+ return Col(converted, 'string', _tables().StringCol(itemsize), itemsize = itemsize)
elif inferred_type == 'unicode':
atom = _tables().ObjectAtom()
- return np.asarray(values, dtype='O'), 'object', atom
+ return Col(np.asarray(values, dtype='O'), 'object', atom)
elif inferred_type == 'integer':
# take a guess for now, hope the values fit
atom = _tables().Int64Col()
- return np.asarray(values, dtype=np.int64), 'integer', atom
+ return Col(np.asarray(values, dtype=np.int64), 'integer', atom)
elif inferred_type == 'floating':
atom = _tables().Float64Col()
- return np.asarray(values, dtype=np.float64), 'float', atom
+ return Col(np.asarray(values, dtype=np.float64), 'float', atom)
else: # pragma: no cover
atom = _tables().ObjectAtom()
- return np.asarray(values, dtype='O'), 'object', atom
+ return Col(np.asarray(values, dtype='O'), 'object', atom)
def _read_array(group, key):
@@ -1088,6 +1210,151 @@ def _alias_to_class(alias):
return _reverse_index_map.get(alias, Index)
+class Term(object):
+ """ create a term object that holds a field, op, and value
+
+ Parameters
+ ----------
+ field : dict, string term expression, or the field to operate (must be a valid index/column type of DataFrame/Panel)
+ op : a valid op (defaults to '=') (optional)
+ >, >=, <, <=, =, != (not equal) are allowed
+ value : a value or list of values (required)
+
+ Returns
+ -------
+ a Term object
+
+ Examples
+ --------
+ Term(dict(field = 'index', op = '>', value = '20121114'))
+ Term('index', '20121114')
+ Term('index', '>', '20121114')
+ Term('index', ['20121114','20121114'])
+ Term('index', datetime(2012,11,14))
+ Term('major>20121114')
+ Term('minor', ['A','B'])
+
+ """
+
+    _ops = ['<=','<','>=','>','=','!=']  # longest ops first so '>=' is not matched as '>'
+ _search = re.compile("^(?P<field>\w+)(?P<op>%s)(?P<value>.+)$" % '|'.join(_ops))
+ _index = ['index','major_axis','major']
+ _column = ['column','minor_axis','minor']
+
+ def __init__(self, field, op = None, value = None, index_kind = None):
+ self.field = None
+ self.op = None
+ self.value = None
+ self.index_kind = index_kind
+ self.filter = None
+ self.condition = None
+
+ # unpack lists/tuples in field
+ if isinstance(field,(tuple,list)):
+ f = field
+ field = f[0]
+ if len(f) > 1:
+ op = f[1]
+ if len(f) > 2:
+ value = f[2]
+
+ # backwards compatible
+ if isinstance(field, dict):
+ self.field = field.get('field')
+ self.op = field.get('op') or '='
+ self.value = field.get('value')
+
+ # passed a term
+ elif isinstance(field,Term):
+ self.field = field.field
+ self.op = field.op
+ self.value = field.value
+
+ # a string expression (or just the field)
+ elif isinstance(field,basestring):
+
+            # was a term expression passed?
+ s = self._search.match(field)
+ if s is not None:
+ self.field = s.group('field')
+ self.op = s.group('op')
+ self.value = s.group('value')
+
+ else:
+ self.field = field
+
+ # is an op passed?
+ if isinstance(op, basestring) and op in self._ops:
+ self.op = op
+ self.value = value
+ else:
+ self.op = '='
+ self.value = op
+
+ else:
+ raise Exception("Term does not understand the supplied field [%s]" % field)
+
+ # we have valid fields
+ if self.field is None or self.op is None or self.value is None:
+ raise Exception("Could not create this term [%s]" % str(self))
+
+ # valid field name
+ if self.field in self._index:
+ self.field = 'index'
+ elif self.field in self._column:
+ self.field = 'column'
+ else:
+ raise Exception("field is not a valid index/column for this term [%s]" % str(self))
+
+ # we have valid conditions
+ if self.op in ['>','>=','<','<=']:
+ if hasattr(self.value,'__iter__') and len(self.value) > 1:
+ raise Exception("an inequality condition cannot have multiple values [%s]" % str(self))
+
+ if not hasattr(self.value,'__iter__'):
+ self.value = [ self.value ]
+
+ self.eval()
+
+ def __str__(self):
+ return "field->%s,op->%s,value->%s" % (self.field,self.op,self.value)
+
+ __repr__ = __str__
+
+ def eval(self):
+ """ set the numexpr expression for this term """
+
+ # convert values
+ values = [ self.convert_value(v) for v in self.value ]
+
+ # equality conditions
+ if self.op in ['=','!=']:
+
+ # too many values to create the expression?
+ if len(values) <= 61:
+ self.condition = "(%s)" % ' | '.join([ "(%s == %s)" % (self.field,v[0]) for v in values])
+
+ # use a filter after reading
+ else:
+ self.filter = set([ v[1] for v in values ])
+
+ else:
+
+ self.condition = '(%s %s %s)' % (self.field, self.op, values[0][0])
+
+ def convert_value(self, v):
+
+ if self.field == 'index':
+ if self.index_kind == 'datetime64' :
+ return [lib.Timestamp(v).value, None]
+ elif isinstance(v, datetime):
+ return [time.mktime(v.timetuple()), None]
+ elif not isinstance(v, basestring):
+ return [str(v), None]
+
+ # string quoting
+ return ["'" + v + "'", v]
+
class Selection(object):
"""
Carries out a selection operation on a tables.Table object.
@@ -1095,72 +1362,43 @@ class Selection(object):
Parameters
----------
table : tables.Table
- where : list of dicts of the following form
-
- Comparison op
- {'field' : 'index',
- 'op' : '>=',
- 'value' : value}
+ where : list of Terms (or convertable to)
- Match single value
- {'field' : 'index',
- 'value' : v1}
-
- Match a set of values
- {'field' : 'index',
- 'value' : [v1, v2, v3]}
"""
def __init__(self, table, where=None, index_kind=None):
- self.table = table
- self.where = where
+ self.table = table
+ self.where = where
self.index_kind = index_kind
- self.column_filter = None
- self.the_condition = None
- self.conditions = []
- self.values = None
- if where:
- self.generate(where)
+ self.values = None
+ self.condition = None
+ self.filter = None
+ self.terms = self.generate(where)
+
+ # create the numexpr & the filter
+ if self.terms:
+ conds = [ t.condition for t in self.terms if t.condition is not None ]
+ if len(conds):
+ self.condition = "(%s)" % ' & '.join(conds)
+ self.filter = set()
+ for t in self.terms:
+ if t.filter is not None:
+ self.filter |= t.filter
def generate(self, where):
- # and condictions
- for c in where:
- op = c.get('op', None)
- value = c['value']
- field = c['field']
-
- if field == 'index' and self.index_kind == 'datetime64':
- val = lib.Timestamp(value).value
- self.conditions.append('(%s %s %s)' % (field, op, val))
- elif field == 'index' and isinstance(value, datetime):
- value = time.mktime(value.timetuple())
- self.conditions.append('(%s %s %s)' % (field, op, value))
- else:
- self.generate_multiple_conditions(op, value, field)
+ """ generate and return the terms """
+ if where is None: return None
- if len(self.conditions):
- self.the_condition = '(' + ' & '.join(self.conditions) + ')'
+ if not isinstance(where, (list,tuple)):
+ where = [ where ]
- def generate_multiple_conditions(self, op, value, field):
-
- if op and op == 'in' or isinstance(value, (list, np.ndarray)):
- if len(value) <= 61:
- l = '(' + ' | '.join([ "(%s == '%s')" % (field, v)
- for v in value]) + ')'
- self.conditions.append(l)
- else:
- self.column_filter = set(value)
- else:
- if op is None:
- op = '=='
- self.conditions.append('(%s %s "%s")' % (field, op, value))
+ return [ Term(c, index_kind = self.index_kind) for c in where ]
def select(self):
"""
generate the selection
"""
- if self.the_condition:
- self.values = self.table.readWhere(self.the_condition)
-
+ if self.condition is not None:
+ self.values = self.table.readWhere(self.condition)
else:
self.values = self.table.read()
@@ -1168,7 +1406,7 @@ def select_coords(self):
"""
generate the selection
"""
- self.values = self.table.getWhereList(self.the_condition)
+ self.values = self.table.getWhereList(self.condition)
def _get_index_factory(klass):
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 9442f274a7810..30bc9d4ed8ba1 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -10,10 +10,11 @@
from pandas import (Series, DataFrame, Panel, MultiIndex, bdate_range,
date_range, Index)
-from pandas.io.pytables import HDFStore, get_store
+from pandas.io.pytables import HDFStore, get_store, Term
import pandas.util.testing as tm
from pandas.tests.test_series import assert_series_equal
from pandas.tests.test_frame import assert_frame_equal
+from pandas import concat
try:
import tables
@@ -142,10 +143,48 @@ def test_put_integer(self):
def test_append(self):
df = tm.makeTimeDataFrame()
- self.store.put('c', df[:10], table=True)
+ self.store.append('c', df[:10])
self.store.append('c', df[10:])
tm.assert_frame_equal(self.store['c'], df)
+ self.store.put('d', df[:10], table=True)
+ self.store.append('d', df[10:])
+ tm.assert_frame_equal(self.store['d'], df)
+
+ def test_append_with_strings(self):
+ wp = tm.makePanel()
+ wp2 = wp.rename_axis(dict([ (x,"%s_extra" % x) for x in wp.minor_axis ]), axis = 2)
+
+ self.store.append('s1', wp, min_itemsize = 20)
+ self.store.append('s1', wp2)
+ expected = concat([ wp, wp2], axis = 2)
+ expected = expected.reindex(minor_axis = sorted(expected.minor_axis))
+ tm.assert_panel_equal(self.store['s1'], expected)
+
+ # test truncation of bigger strings
+ self.store.append('s2', wp)
+ self.assertRaises(Exception, self.store.append, 's2', wp2)
+
+ def test_create_table_index(self):
+ wp = tm.makePanel()
+ self.store.append('p5', wp)
+ self.store.create_table_index('p5')
+
+ assert(self.store.handle.root.p5.table.cols.index.is_indexed == True)
+ assert(self.store.handle.root.p5.table.cols.column.is_indexed == False)
+
+ df = tm.makeTimeDataFrame()
+ self.store.append('f', df[:10])
+ self.store.append('f', df[10:])
+ self.store.create_table_index('f')
+
+ # create twice
+ self.store.create_table_index('f')
+
+ # try to index a non-table
+ self.store.put('f2', df)
+ self.assertRaises(Exception, self.store.create_table_index, 'f2')
+
def test_append_diff_item_order(self):
wp = tm.makePanel()
wp1 = wp.ix[:, :10, :]
@@ -177,11 +216,7 @@ def test_remove(self):
self.assertEquals(len(self.store), 0)
def test_remove_where_not_exist(self):
- crit1 = {
- 'field' : 'index',
- 'op' : '>',
- 'value' : 'foo'
- }
+ crit1 = Term('index','>','foo')
self.store.remove('a', where=[crit1])
def test_remove_crit(self):
@@ -189,21 +224,60 @@ def test_remove_crit(self):
self.store.put('wp', wp, table=True)
date = wp.major_axis[len(wp.major_axis) // 2]
- crit1 = {
- 'field' : 'index',
- 'op' : '>',
- 'value' : date
- }
- crit2 = {
- 'field' : 'column',
- 'value' : ['A', 'D']
- }
+ crit1 = Term('index','>',date)
+ crit2 = Term('column',['A', 'D'])
self.store.remove('wp', where=[crit1])
self.store.remove('wp', where=[crit2])
result = self.store['wp']
expected = wp.truncate(after=date).reindex(minor=['B', 'C'])
tm.assert_panel_equal(result, expected)
+ # test non-consecutive row removal
+ wp = tm.makePanel()
+ self.store.put('wp2', wp, table=True)
+
+ date1 = wp.major_axis[1:3]
+ date2 = wp.major_axis[5]
+ date3 = [wp.major_axis[7],wp.major_axis[9]]
+
+ crit1 = Term('index',date1)
+ crit2 = Term('index',date2)
+ crit3 = Term('index',date3)
+
+ self.store.remove('wp2', where=[crit1])
+ self.store.remove('wp2', where=[crit2])
+ self.store.remove('wp2', where=[crit3])
+ result = self.store['wp2']
+
+ ma = list(wp.major_axis)
+ for d in date1:
+ ma.remove(d)
+ ma.remove(date2)
+ for d in date3:
+ ma.remove(d)
+ expected = wp.reindex(major = ma)
+ tm.assert_panel_equal(result, expected)
+
+ def test_terms(self):
+
+ Term(dict(field = 'index', op = '>', value = '20121114'))
+ Term('index', '20121114')
+ Term('index', '>', '20121114')
+ Term('index', ['20121114','20121114'])
+ Term('index', datetime(2012,11,14))
+ Term('index>20121114')
+ Term('major>20121114')
+ Term('major_axis>20121114')
+ Term('minor', ['A','B'])
+ Term('minor_axis', ['A','B'])
+ Term('column', ['A','B'])
+
+ self.assertRaises(Exception, Term.__init__)
+ self.assertRaises(Exception, Term.__init__, 'blah')
+ self.assertRaises(Exception, Term.__init__, 'index')
+ self.assertRaises(Exception, Term.__init__, 'index', '==')
+ self.assertRaises(Exception, Term.__init__, 'index', '>', 5)
+
def test_series(self):
s = tm.makeStringSeries()
self._check_roundtrip(s, tm.assert_series_equal)
@@ -528,15 +602,8 @@ def test_panel_select(self):
self.store.put('wp', wp, table=True)
date = wp.major_axis[len(wp.major_axis) // 2]
- crit1 = {
- 'field' : 'index',
- 'op' : '>=',
- 'value' : date
- }
- crit2 = {
- 'field' : 'column',
- 'value' : ['A', 'D']
- }
+ crit1 = ('index','>=',date)
+ crit2 = ('column', ['A', 'D'])
result = self.store.select('wp', [crit1, crit2])
expected = wp.truncate(before=date).reindex(minor=['A', 'D'])
@@ -547,19 +614,9 @@ def test_frame_select(self):
self.store.put('frame', df, table=True)
date = df.index[len(df) // 2]
- crit1 = {
- 'field' : 'index',
- 'op' : '>=',
- 'value' : date
- }
- crit2 = {
- 'field' : 'column',
- 'value' : ['A', 'D']
- }
- crit3 = {
- 'field' : 'column',
- 'value' : 'A'
- }
+ crit1 = ('index','>=',date)
+ crit2 = ('column',['A', 'D'])
+ crit3 = ('column','A')
result = self.store.select('frame', [crit1, crit2])
expected = df.ix[date:, ['A', 'D']]
@@ -580,10 +637,7 @@ def test_select_filter_corner(self):
df.columns = ['%.3d' % c for c in df.columns]
self.store.put('frame', df, table=True)
- crit = {
- 'field' : 'column',
- 'value' : df.columns[:75]
- }
+ crit = Term('column', df.columns[:75])
result = self.store.select('frame', [crit])
tm.assert_frame_equal(result, df.ix[:, df.columns[:75]])
1. added `__str__` (used for `__repr__`)
2. row removal in tables is much faster if rows are consecutive
3. added Term class, refactored Selection (this is backwards compatible)
Term is a concise way of specifying conditions for queries, e.g.
```
Term(dict(field = 'index', op = '>', value = '20121114'))
Term('index', '20121114')
Term('index', '>', '20121114')
Term('index', ['20121114','20121114'])
Term('index', datetime(2012,11,14))
Term('index>20121114')
```
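For reference, the string form is parsed with a small regex; a minimal standalone sketch of that parsing (mirroring, not reusing, the `Term` internals — note the operators are ordered longest-first so `>=` is not matched as `>`):

```python
import re

_ops = ['<=', '<', '>=', '>', '=', '!=']   # longest first: '>=' before '>'
_search = re.compile(r"^(?P<field>\w+)(?P<op>%s)(?P<value>.+)$" % '|'.join(_ops))

def parse_term(expr):
    # split a string like 'index>20121114' into (field, op, value)
    m = _search.match(expr)
    if m is None:
        raise ValueError("could not parse term: %s" % expr)
    return m.group('field'), m.group('op'), m.group('value')

assert parse_term('index>20121114') == ('index', '>', '20121114')
assert parse_term('major_axis>=20121114') == ('major_axis', '>=', '20121114')
```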
updated tests for same
this should close GH #1996
4. added docs for HDF5 table in io.html
5. append on a table that didn't exist was failing (because the index_kind attribute was tested first, and it may not exist);
fixed & added a test
6. added create_table_index method to create indices on tables (which, btw, now works quite well as Int64 indices are used as opposed to the Time64Col which has a bug); includes a check on the pytables version requirement
this should close GH #698
7. added min_itemsize as a parameter to append; allows bigger default indexer columns upon table creation (even if you don't append something that big - but might later; avoids the truncation issue)
8. incorporated 0.9.1 whatsnew docs for where & mask into Indexing Section of main docs
| https://api.github.com/repos/pandas-dev/pandas/pulls/2264 | 2012-11-15T20:16:53Z | 2012-11-24T18:09:04Z | null | 2014-06-12T20:20:07Z |
pytables selection enhancements (to close GH #1966) | diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index af480b5a6457f..224ac82237dfd 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -7,6 +7,7 @@
from datetime import datetime, date
import time
+import re
import numpy as np
from pandas import (
@@ -916,27 +917,40 @@ def _read_panel_table(self, group, where=None):
lp = DataFrame(new_values, index=new_index, columns=lp.columns)
wp = lp.to_panel()
- if sel.column_filter:
- new_minor = sorted(set(wp.minor_axis) & sel.column_filter)
+ if sel.filter:
+ new_minor = sorted(set(wp.minor_axis) & sel.filter)
wp = wp.reindex(minor=new_minor)
return wp
- def _delete_from_table(self, group, where = None):
+ def _delete_from_table(self, group, where):
+ """ delete rows from a group where condition is True """
table = getattr(group, 'table')
# create the selection
- s = Selection(table, where, table._v_attrs.index_kind)
+ s = Selection(table,where,table._v_attrs.index_kind)
s.select_coords()
# delete the rows in reverse order
- l = list(s.values)
- l.reverse()
- for c in l:
- table.removeRows(c)
- self.handle.flush()
- return len(s.values)
+ l = list(s.values)
+ ln = len(l)
+
+ if ln:
+
+ # if we can do a consecutive removal - do it!
+ if l[0]+ln-1 == l[-1]:
+ table.removeRows(start = l[0], stop = l[-1]+1)
+ # one by one
+ else:
+ l.reverse()
+ for c in l:
+ table.removeRows(c)
+
+ self.handle.flush()
+
+ # return the number of rows removed
+ return ln
def _convert_index(index):
if isinstance(index, DatetimeIndex):
@@ -1088,6 +1102,151 @@ def _alias_to_class(alias):
return _reverse_index_map.get(alias, Index)
+class Term(object):
+ """ create a term object that holds a field, op, and value
+
+ Parameters
+ ----------
+ field : dict, string term expression, or the field to operate (must be a valid index/column type of DataFrame/Panel)
+ op : a valid op (defaults to '=') (optional)
+ >, >=, <, <=, =, != (not equal) are allowed
+ value : a value or list of values (required)
+
+ Returns
+ -------
+ a Term object
+
+ Examples
+ --------
+ Term(dict(field = 'index', op = '>', value = '20121114'))
+ Term('index', '20121114')
+ Term('index', '>', '20121114')
+ Term('index', ['20121114','20121114'])
+ Term('index', datetime(2012,11,14))
+ Term('index>20121114')
+
+ """
+
+ _ops = ['<','<=','>','>=','=','!=']
+ _search = re.compile("^(?P<field>\w+)(?P<op>%s)(?P<value>.+)$" % '|'.join(_ops))
+ _index = ['index','major_axis']
+ _column = ['column','minor_axis','items']
+
+ def __init__(self, field, op = None, value = None, index_kind = None):
+ self.field = None
+ self.op = None
+ self.value = None
+ self.typ = None
+ self.index_kind = index_kind
+ self.filter = None
+ self.condition = None
+
+ # unpack lists/tuples in field
+ if isinstance(field,(tuple,list)):
+ f = field
+ field = f[0]
+ if len(f) > 1:
+ op = f[1]
+ if len(f) > 2:
+ value = f[2]
+
+ # backwards compatible
+ if isinstance(field, dict):
+ self.field = field.get('field')
+ self.op = field.get('op') or '='
+ self.value = field.get('value')
+
+ # passed a term
+ elif isinstance(field,Term):
+ self.field = field.field
+ self.op = field.op
+ self.value = field.value
+
+ # a string expression (or just the field)
+ elif isinstance(field,basestring):
+
+ # is a term is passed
+ s = self._search.match(field)
+ if s is not None:
+ self.field = s.group('field')
+ self.op = s.group('op')
+ self.value = s.group('value')
+
+ else:
+ self.field = field
+
+ # is an op passed?
+ if isinstance(op, basestring) and op in self._ops:
+ self.op = op
+ self.value = value
+ else:
+ self.op = '='
+ self.value = op
+
+ else:
+ raise Exception("Term does not understand the supplied field [%s]" % field)
+
+ # we have valid fields
+ if self.field is None or self.op is None or self.value is None:
+ raise Exception("Could not create this term [%s]" % str(self))
+
+ # valid field name
+ if self.field in self._index:
+ self.typ = 'index'
+ elif self.field in self._column:
+ self.typ = 'column'
+ else:
+ raise Exception("field is not a valid index/column for this term [%s]" % str(self))
+
+ # we have valid conditions
+ if self.op in ['>','>=','<','<=']:
+ if hasattr(self.value,'__iter__') and len(self.value) > 1:
+ raise Exception("an inequality condition cannot have multiple values [%s]" % str(self))
+
+ if not hasattr(self.value,'__iter__'):
+ self.value = [ self.value ]
+
+ self.eval()
+
+ def __str__(self):
+ return "typ->%s,field->%s,op->%s,value->%s" % (self.typ,self.field,self.op,self.value)
+
+ __repr__ = __str__
+
+ def eval(self):
+ """ set the numexpr expression for this term """
+
+ # convert values
+ values = [ self.convert_value(v) for v in self.value ]
+
+ # equality conditions
+ if self.op in ['=','!=']:
+
+ # too many values to create the expression?
+ if len(values) <= 61:
+ self.condition = "(%s)" % ' | '.join([ "(%s == %s)" % (self.field,v[0]) for v in values])
+
+ # use a filter after reading
+ else:
+ self.filter = set([ v[1] for v in values ])
+
+ else:
+
+ self.condition = '(%s %s %s)' % (self.field, self.op, values[0][0])
+
+ def convert_value(self, v):
+
+ if self.typ == 'index':
+ if self.index_kind == 'datetime64' :
+ return [lib.Timestamp(v).value, None]
+ elif isinstance(v, datetime):
+ return [time.mktime(v.timetuple()), None]
+ elif not isinstance(v, basestring):
+ return [str(v), None]
+
+ # string quoting
+ return ["'" + v + "'", v]
+
class Selection(object):
"""
Carries out a selection operation on a tables.Table object.
@@ -1095,72 +1254,43 @@ class Selection(object):
Parameters
----------
table : tables.Table
- where : list of dicts of the following form
-
- Comparison op
- {'field' : 'index',
- 'op' : '>=',
- 'value' : value}
-
- Match single value
- {'field' : 'index',
- 'value' : v1}
+ where : list of Terms (or convertable to)
- Match a set of values
- {'field' : 'index',
- 'value' : [v1, v2, v3]}
"""
def __init__(self, table, where=None, index_kind=None):
- self.table = table
- self.where = where
+ self.table = table
+ self.where = where
self.index_kind = index_kind
- self.column_filter = None
- self.the_condition = None
- self.conditions = []
- self.values = None
- if where:
- self.generate(where)
+ self.values = None
+ self.condition = None
+ self.filter = None
+ self.terms = self.generate(where)
+
+ # create the numexpr & the filter
+ if self.terms:
+ conds = [ t.condition for t in self.terms if t.condition is not None ]
+ if len(conds):
+ self.condition = "(%s)" % ' & '.join(conds)
+ self.filter = set()
+ for t in self.terms:
+ if t.filter is not None:
+ self.filter |= t.filter
def generate(self, where):
- # and condictions
- for c in where:
- op = c.get('op', None)
- value = c['value']
- field = c['field']
-
- if field == 'index' and self.index_kind == 'datetime64':
- val = lib.Timestamp(value).value
- self.conditions.append('(%s %s %s)' % (field, op, val))
- elif field == 'index' and isinstance(value, datetime):
- value = time.mktime(value.timetuple())
- self.conditions.append('(%s %s %s)' % (field, op, value))
- else:
- self.generate_multiple_conditions(op, value, field)
+ """ generate and return the terms """
+ if where is None: return None
- if len(self.conditions):
- self.the_condition = '(' + ' & '.join(self.conditions) + ')'
+ if not isinstance(where, (list,tuple)):
+ where = [ where ]
- def generate_multiple_conditions(self, op, value, field):
-
- if op and op == 'in' or isinstance(value, (list, np.ndarray)):
- if len(value) <= 61:
- l = '(' + ' | '.join([ "(%s == '%s')" % (field, v)
- for v in value]) + ')'
- self.conditions.append(l)
- else:
- self.column_filter = set(value)
- else:
- if op is None:
- op = '=='
- self.conditions.append('(%s %s "%s")' % (field, op, value))
+ return [ Term(c, index_kind = self.index_kind) for c in where ]
def select(self):
"""
generate the selection
"""
- if self.the_condition:
- self.values = self.table.readWhere(self.the_condition)
-
+ if self.condition is not None:
+ self.values = self.table.readWhere(self.condition)
else:
self.values = self.table.read()
@@ -1168,7 +1298,7 @@ def select_coords(self):
"""
generate the selection
"""
- self.values = self.table.getWhereList(self.the_condition)
+ self.values = self.table.getWhereList(self.condition)
def _get_index_factory(klass):
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 9442f274a7810..3ccab8616127a 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -10,7 +10,7 @@
from pandas import (Series, DataFrame, Panel, MultiIndex, bdate_range,
date_range, Index)
-from pandas.io.pytables import HDFStore, get_store
+from pandas.io.pytables import HDFStore, get_store, Term
import pandas.util.testing as tm
from pandas.tests.test_series import assert_series_equal
from pandas.tests.test_frame import assert_frame_equal
@@ -177,11 +177,7 @@ def test_remove(self):
self.assertEquals(len(self.store), 0)
def test_remove_where_not_exist(self):
- crit1 = {
- 'field' : 'index',
- 'op' : '>',
- 'value' : 'foo'
- }
+ crit1 = Term('index','>','foo')
self.store.remove('a', where=[crit1])
def test_remove_crit(self):
@@ -189,21 +185,55 @@ def test_remove_crit(self):
self.store.put('wp', wp, table=True)
date = wp.major_axis[len(wp.major_axis) // 2]
- crit1 = {
- 'field' : 'index',
- 'op' : '>',
- 'value' : date
- }
- crit2 = {
- 'field' : 'column',
- 'value' : ['A', 'D']
- }
+ crit1 = Term('index','>',date)
+ crit2 = Term('column',['A', 'D'])
self.store.remove('wp', where=[crit1])
self.store.remove('wp', where=[crit2])
result = self.store['wp']
expected = wp.truncate(after=date).reindex(minor=['B', 'C'])
tm.assert_panel_equal(result, expected)
+ # test non-consecutive row removal
+ wp = tm.makePanel()
+ self.store.put('wp2', wp, table=True)
+
+ date1 = wp.major_axis[1:3]
+ date2 = wp.major_axis[5]
+ date3 = [wp.major_axis[7],wp.major_axis[9]]
+
+ crit1 = Term('index',date1)
+ crit2 = Term('index',date2)
+ crit3 = Term('index',date3)
+
+ self.store.remove('wp2', where=[crit1])
+ self.store.remove('wp2', where=[crit2])
+ self.store.remove('wp2', where=[crit3])
+ result = self.store['wp2']
+
+ ma = list(wp.major_axis)
+ for d in date1:
+ ma.remove(d)
+ ma.remove(date2)
+ for d in date3:
+ ma.remove(d)
+ expected = wp.reindex(major = ma)
+ tm.assert_panel_equal(result, expected)
+
+ def test_terms(self):
+
+ Term(dict(field = 'index', op = '>', value = '20121114'))
+ Term('index', '20121114')
+ Term('index', '>', '20121114')
+ Term('index', ['20121114','20121114'])
+ Term('index', datetime(2012,11,14))
+ Term('index>20121114')
+
+ self.assertRaises(Exception, Term.__init__)
+ self.assertRaises(Exception, Term.__init__, 'blah')
+ self.assertRaises(Exception, Term.__init__, 'index')
+ self.assertRaises(Exception, Term.__init__, 'index', '==')
+ self.assertRaises(Exception, Term.__init__, 'index', '>', 5)
+
def test_series(self):
s = tm.makeStringSeries()
self._check_roundtrip(s, tm.assert_series_equal)
@@ -528,15 +558,8 @@ def test_panel_select(self):
self.store.put('wp', wp, table=True)
date = wp.major_axis[len(wp.major_axis) // 2]
- crit1 = {
- 'field' : 'index',
- 'op' : '>=',
- 'value' : date
- }
- crit2 = {
- 'field' : 'column',
- 'value' : ['A', 'D']
- }
+ crit1 = ('index','>=',date)
+ crit2 = ('column', ['A', 'D'])
result = self.store.select('wp', [crit1, crit2])
expected = wp.truncate(before=date).reindex(minor=['A', 'D'])
@@ -547,19 +570,9 @@ def test_frame_select(self):
self.store.put('frame', df, table=True)
date = df.index[len(df) // 2]
- crit1 = {
- 'field' : 'index',
- 'op' : '>=',
- 'value' : date
- }
- crit2 = {
- 'field' : 'column',
- 'value' : ['A', 'D']
- }
- crit3 = {
- 'field' : 'column',
- 'value' : 'A'
- }
+ crit1 = ('index','>=',date)
+ crit2 = ('column',['A', 'D'])
+ crit3 = ('column','A')
result = self.store.select('frame', [crit1, crit2])
expected = df.ix[date:, ['A', 'D']]
@@ -580,10 +593,7 @@ def test_select_filter_corner(self):
df.columns = ['%.3d' % c for c in df.columns]
self.store.put('frame', df, table=True)
- crit = {
- 'field' : 'column',
- 'value' : df.columns[:75]
- }
+ crit = Term('column', df.columns[:75])
result = self.store.select('frame', [crit])
tm.assert_frame_equal(result, df.ix[:, df.columns[:75]])
| in pandas/io/pytables.py
1. added `__str__` (to do `__repr__`)
2. row removal in tables is much faster if rows are consecutive
3. added Term class, refactored Selection (this is backwards compatible)
Term is a concise way of specifying conditions for queries, e.g.
```
Term(dict(field = 'index', op = '>', value = '20121114'))
Term('index', '20121114')
Term('index', '>', '20121114')
Term('index', ['20121114','20121114'])
Term('index', datetime(2012,11,14))
Term('index>20121114')
```
updated tests for same
this should close GH #1996
also potentially a starting point to do things like this:
df['df>0']...e.g. allowing simple string expressions in criteria (Term is independent; I could move it to a separate module... just a thought)
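The string-expression form (`Term('index>20121114')`) is split by the regex shown in the diff; a standalone sketch of that parsing step, using the same op list and pattern as `Term._search`:

```python
import re

# same ops and pattern as Term._search in the diff
ops = ['<', '<=', '>', '>=', '=', '!=']
search = re.compile(r"^(?P<field>\w+)(?P<op>%s)(?P<value>.+)$" % '|'.join(ops))

m = search.match('index>20121114')
print(m.group('field'), m.group('op'), m.group('value'))  # index > 20121114

# caveat: regex alternation is ordered, so '<' matches before '<=' is tried;
# 'index<=5' therefore parses as op '<' with the '=' folded into the value
m2 = search.match('index<=5')
print(m2.group('op'), m2.group('value'))  # < =5
```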
| https://api.github.com/repos/pandas-dev/pandas/pulls/2261 | 2012-11-15T19:09:12Z | 2012-11-15T20:35:39Z | null | 2014-06-19T00:02:49Z |
Fixes - Block.repr unicode and docstring tweak | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index dd96cd2b10ae4..c176bbdaa3c46 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -899,6 +899,7 @@ def from_records(cls, data, index=None, exclude=None, columns=None,
else:
arrays, columns = _to_arrays(data, columns,
coerce_float=coerce_float)
+ columns=list(columns) # _to_arrays returns index, but we might mutate
sdict = dict(zip(columns, arrays))
if exclude is None:
@@ -5211,7 +5212,7 @@ def _list_of_dict_to_arrays(data, columns, coerce_float=False):
def _convert_object_array(content, columns, coerce_float=False):
if columns is None:
- columns = range(len(content))
+ columns = _default_index(len(content))
else:
if len(columns) != len(content):
raise AssertionError('%d columns passed, passed data had %s '
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 9638da8f418cf..6098d0a379814 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -725,13 +725,12 @@ def get_indexer(self, target, method=None, limit=None):
Examples
--------
- >>> indexer, mask = index.get_indexer(new_index)
+ >>> indexer = index.get_indexer(new_index)
>>> new_values = cur_values.take(indexer)
- >>> new_values[-mask] = np.nan
Returns
-------
- (indexer, mask) : (ndarray, ndarray)
+ indexer : ndarray
"""
method = self._get_method(method)
target = _ensure_index(target)
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 7275a54a4faae..fdec82caa9736 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -8,6 +8,8 @@
import pandas.core.common as com
import pandas.lib as lib
+from pandas.util import py3compat
+
class Block(object):
"""
Canonical n-dimensional unit of homogeneous dtype contained in a pandas data
@@ -51,8 +53,11 @@ def set_ref_items(self, ref_items, maybe_rename=True):
def __repr__(self):
shape = ' x '.join([com.pprint_thing(s) for s in self.shape])
name = type(self).__name__
- result = '%s: %s, %s, dtype %s' % (name, self.items, shape, self.dtype)
- return com.console_encode(result) # repr must return byte-string
+ result = '%s: %s, %s, dtype %s' % (name, com.pprint_thing(self.items)
+ , shape, self.dtype)
+ if py3compat.PY3:
+ return unicode(result)
+ return com.console_encode(result)
def __contains__(self, item):
return item in self.items
| https://api.github.com/repos/pandas-dev/pandas/pulls/2256 | 2012-11-15T14:31:05Z | 2012-11-18T22:00:33Z | null | 2014-06-27T13:41:00Z | |
update 0.9.1 whatsnew to add where/mask commentary | diff --git a/doc/source/v0.9.1.txt b/doc/source/v0.9.1.txt
index 6cf6bb0ed7274..10599d2e99727 100644
--- a/doc/source/v0.9.1.txt
+++ b/doc/source/v0.9.1.txt
@@ -42,14 +42,44 @@ New features
- DataFrame has new `where` and `mask` methods to select values according to a
given boolean mask (GH2109_, GH2151_)
- .. ipython:: python
+ DataFrame currently supports slicing via a boolean vector the same length as the DataFrame (inside the `[]`).
+ The returned DataFrame has the same number of columns as the original, but is sliced on its index.
+
+ .. ipython:: python
+
+ df = DataFrame(np.random.randn(5, 3), columns = ['A','B','C'])
+
+ df
+
+ df[df['A'] > 0]
+
+ If a DataFrame is sliced with a DataFrame based boolean condition (with the same size as the original DataFrame),
+ then a DataFrame the same size (index and columns) as the original is returned, with
+ elements that do not meet the boolean condition as `NaN`. This is accomplished via
+ the new method `DataFrame.where`. In addition, `where` takes an optional `other` argument for replacement.
+
+ .. ipython:: python
+
+ df[df>0]
+
+ df.where(df>0)
+
+ df.where(df>0,-df)
+
+ Furthermore, `where` now aligns the input boolean condition (ndarray or DataFrame), such that partial selection
+ with setting is possible. This is analagous to partial setting via `.ix` (but on the contents rather than the axis labels)
+
+ .. ipython:: python
- df = DataFrame(np.random.randn(5, 3))
+ df2 = df.copy()
+ df2[ df2[1:4] > 0 ] = 3
+ df2
- df.where(df > 0, -df)
+ `DataFrame.mask` is the inverse boolean operation of `where`.
- df.mask(df < 0)
+ .. ipython:: python
+ df.mask(df<=0)
- Enable referencing of Excel columns by their column names (GH1936_)
| update whatsnew docs for where/mask (probably could be added to the main docs at some point)
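A quick sketch of the two methods the whatsnew entry documents (run against the current pandas API; the frame here is a toy example, not the one in the docs):

```python
import pandas as pd

df = pd.DataFrame({'A': [1.0, -2.0, 3.0]})

kept = df.where(df > 0, -df)   # where the condition is False, take the replacement
hidden = df.mask(df <= 0)      # inverse of where: entries meeting the condition become NaN

print(kept['A'].tolist())      # [1.0, 2.0, 3.0]
```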
| https://api.github.com/repos/pandas-dev/pandas/pulls/2254 | 2012-11-15T02:03:33Z | 2012-11-15T03:14:09Z | null | 2014-06-26T19:47:36Z |
CLN: df.iteritems() can use self.icol() | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index dd96cd2b10ae4..9ce9b303648f8 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -637,7 +637,7 @@ def keys(self):
def iteritems(self):
"""Iterator over (column, series) pairs"""
for i, k in enumerate(self.columns):
- yield (k,self.take([i],axis=1)[k])
+ yield (k,self.icol(i))
def iterrows(self):
"""
| now that all the issues have been fixed.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2253 | 2012-11-15T00:46:32Z | 2012-11-17T02:55:43Z | null | 2012-11-17T02:55:45Z |
BUG: icol() should propagate fill_value for sparse data frames #2249 | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index d91be2d1f36c1..4b506a23f9fb2 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1734,8 +1734,15 @@ def icol(self, i):
return self.ix[:, i]
values = self._data.iget(i)
- return self._col_klass.from_array(values, index=self.index,
- name=label)
+ if hasattr(self,'default_fill_value'):
+ s = self._col_klass.from_array(values, index=self.index,
+ name=label,
+ fill_value= self.default_fill_value)
+ else:
+ s = self._col_klass.from_array(values, index=self.index,
+ name=label)
+
+ return s
def _ixs(self, i, axis=0):
if axis == 0:
diff --git a/pandas/sparse/series.py b/pandas/sparse/series.py
index 9e4228d3ef25f..8be9e2b5c7d75 100644
--- a/pandas/sparse/series.py
+++ b/pandas/sparse/series.py
@@ -147,11 +147,11 @@ def __new__(cls, data, index=None, sparse_index=None, kind='block',
return output
@classmethod
- def from_array(cls, arr, index=None, name=None, copy=False):
+ def from_array(cls, arr, index=None, name=None, copy=False,fill_value=None):
"""
Simplified alternate constructor
"""
- return SparseSeries(arr, index=index, name=name, copy=copy)
+ return SparseSeries(arr, index=index, name=name, copy=copy,fill_value=fill_value)
def __init__(self, data, index=None, sparse_index=None, kind='block',
fill_value=None, name=None, copy=False):
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 7787a5c6e4fc7..2db2851bb6abc 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -1327,6 +1327,11 @@ def test_irow_icol_duplicates(self):
xp = df.ix[:, [0]]
assert_frame_equal(rs, xp)
+ def test_icol_sparse_propegate_fill_value(self):
+ from pandas.sparse.api import SparseDataFrame
+ df=SparseDataFrame({'A' : [999,1]},default_fill_value=999)
+ self.assertTrue( len(df['A'].sp_values) == len(df.icol(0).sp_values))
+
def test_iget_value(self):
for i, row in enumerate(self.frame.index):
for j, col in enumerate(self.frame.columns):
| #2249
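The bug is easiest to see through how sparse storage counts values: only entries different from fill_value are physically stored, which is exactly what the diff's test counts via sp_values. SparseDataFrame has since been removed from pandas, but pd.arrays.SparseArray shows the same accounting:

```python
import pandas as pd

# with fill_value=999 the 999 is treated as background and not stored,
# so only one value is physically kept
sa = pd.arrays.SparseArray([999, 1], fill_value=999)
print(len(sa.sp_values))  # 1
```

Dropping the fill_value on the way through icol() changes this count, which is the mismatch the new test asserts against.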
| https://api.github.com/repos/pandas-dev/pandas/pulls/2250 | 2012-11-14T20:46:34Z | 2012-11-15T00:38:09Z | null | 2014-07-17T12:26:16Z |
ENH: Import DataReader and Options into namespace | diff --git a/pandas/__init__.py b/pandas/__init__.py
index 3760e3fbc434b..49837caf81f58 100644
--- a/pandas/__init__.py
+++ b/pandas/__init__.py
@@ -30,6 +30,7 @@
from pandas.io.parsers import (read_csv, read_table, read_clipboard,
read_fwf, to_clipboard, ExcelFile,
ExcelWriter)
+from pandas.io.data import DataReader, Options
from pandas.io.pytables import HDFStore
from pandas.util.testing import debug
| Don't know if there's any reason not to import these. I thought maybe they'd be better off in the data namespace though. Thoughts?
| https://api.github.com/repos/pandas-dev/pandas/pulls/2246 | 2012-11-14T16:14:20Z | 2012-11-14T16:19:19Z | null | 2012-11-14T16:19:19Z |
BUG: ExcelWriter raises exception on PeriodIndex #2240 | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 31c1a09f409c3..deed613c2d992 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1174,8 +1174,12 @@ def _helper_csvexcel(self, writer, na_rep=None, cols=None,
encoded_cols = list(cols)
writer.writerow(encoded_cols)
- nlevels = getattr(self.index, 'nlevels', 1)
- for j, idx in enumerate(self.index):
+ data_index = self.index
+ if isinstance(self.index, PeriodIndex):
+ data_index = self.index.to_timestamp()
+
+ nlevels = getattr(data_index, 'nlevels', 1)
+ for j, idx in enumerate(data_index):
row_fields = []
if index:
if nlevels == 1:
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 0b36e8d39a00a..a8c76937b6b4e 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -3780,6 +3780,24 @@ def test_to_excel_from_excel(self):
assert_frame_equal(frame, recons)
os.remove(path)
+ def test_to_excel_periodindex(self):
+ try:
+ import xlwt
+ import xlrd
+ import openpyxl
+ except ImportError:
+ raise nose.SkipTest
+
+ for ext in ['xls', 'xlsx']:
+ path = '__tmp__.' + ext
+ frame = self.tsframe
+ xp = frame.resample('M', kind='period')
+ xp.to_excel(path, 'sht1')
+
+ reader = ExcelFile(path)
+ rs = reader.parse('sht1', index_col=0, parse_dates=True)
+ assert_frame_equal(xp, rs.to_period('M'))
+ os.remove(path)
def test_to_excel_multiindex(self):
try:
| convert with to_timestamp first;
otherwise round-tripping is hard and the output isn't all that useful for processing in Excel
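The fix amounts to one call, PeriodIndex.to_timestamp(); a standalone sketch of the conversion and the round-trip the new test relies on (current pandas API):

```python
import pandas as pd

pidx = pd.period_range('2012-01', periods=3, freq='M')
tidx = pidx.to_timestamp()   # period-start instants; these serialize cleanly to Excel

print(tidx[0])               # 2012-01-01 00:00:00

# the round-trip used by the test's comparison
assert tidx.to_period('M').equals(pidx)
```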
| https://api.github.com/repos/pandas-dev/pandas/pulls/2244 | 2012-11-14T05:37:28Z | 2012-11-14T16:55:39Z | null | 2014-07-15T12:47:03Z |
Panelnd | diff --git a/pandas/core/api.py b/pandas/core/api.py
old mode 100644
new mode 100755
index 8cf3b7f4cbda4..6cbdae430ba0b
--- a/pandas/core/api.py
+++ b/pandas/core/api.py
@@ -4,7 +4,6 @@
import numpy as np
from pandas.core.algorithms import factorize, match, unique, value_counts
-
from pandas.core.common import isnull, notnull, save, load
from pandas.core.categorical import Categorical, Factor
from pandas.core.format import (set_printoptions, reset_printoptions,
@@ -14,6 +13,7 @@
from pandas.core.series import Series, TimeSeries
from pandas.core.frame import DataFrame
from pandas.core.panel import Panel
+from pandas.core.panel4d import Panel4D
from pandas.core.groupby import groupby
from pandas.core.reshape import (pivot_simple as pivot, get_dummies,
lreshape)
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 0cfb4004708fa..fe44cfaa21107 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -240,11 +240,16 @@ def _multi_take_opportunity(self, tup):
def _multi_take(self, tup):
from pandas.core.frame import DataFrame
from pandas.core.panel import Panel
+ from pandas.core.panel4d import Panel4D
if isinstance(self.obj, DataFrame):
index = self._convert_for_reindex(tup[0], axis=0)
columns = self._convert_for_reindex(tup[1], axis=1)
return self.obj.reindex(index=index, columns=columns)
+ elif isinstance(self.obj, Panel4D):
+ conv = [self._convert_for_reindex(x, axis=i)
+ for i, x in enumerate(tup)]
+ return self.obj.reindex(labels=tup[0],items=tup[1], major=tup[2], minor=tup[3])
elif isinstance(self.obj, Panel):
conv = [self._convert_for_reindex(x, axis=i)
for i, x in enumerate(tup)]
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 42adf0420db0d..4f5203b103ce7 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -6,7 +6,6 @@
import operator
import sys
import numpy as np
-
from pandas.core.common import (PandasError, _mut_exclusive,
_try_sort, _default_index, _infer_dtype)
from pandas.core.categorical import Factor
@@ -147,31 +146,45 @@ def f(self, other, axis='items'):
class Panel(NDFrame):
- _AXIS_NUMBERS = {
- 'items': 0,
- 'major_axis': 1,
- 'minor_axis': 2
- }
-
- _AXIS_ALIASES = {
- 'major': 'major_axis',
- 'minor': 'minor_axis'
- }
-
- _AXIS_NAMES = {
- 0: 'items',
- 1: 'major_axis',
- 2: 'minor_axis'
+ _AXIS_ORDERS = ['items','major_axis','minor_axis']
+ _AXIS_NUMBERS = dict([ (a,i) for i, a in enumerate(_AXIS_ORDERS) ])
+ _AXIS_ALIASES = {
+ 'major' : 'major_axis',
+ 'minor' : 'minor_axis'
}
+ _AXIS_NAMES = dict([ (i,a) for i, a in enumerate(_AXIS_ORDERS) ])
+ _AXIS_SLICEMAP = {
+ 'major_axis' : 'index',
+ 'minor_axis' : 'columns'
+ }
+ _AXIS_LEN = len(_AXIS_ORDERS)
# major
_default_stat_axis = 1
- _het_axis = 0
+
+ # info axis
+ _het_axis = 0
+ _info_axis = _AXIS_ORDERS[_het_axis]
items = lib.AxisProperty(0)
major_axis = lib.AxisProperty(1)
minor_axis = lib.AxisProperty(2)
+ @property
+ def _constructor(self):
+ return type(self)
+
+ # return the type of the slice constructor
+ _constructor_sliced = DataFrame
+
+ def _construct_axes_dict(self, axes = None):
+ """ return an axes dictionary for myself """
+ return dict([ (a,getattr(self,a)) for a in (axes or self._AXIS_ORDERS) ])
+
+ def _construct_axes_dict_for_slice(self, axes = None):
+ """ return an axes dictionary for myself """
+ return dict([ (self._AXIS_SLICEMAP[a],getattr(self,a)) for a in (axes or self._AXIS_ORDERS) ])
+
__add__ = _arith_method(operator.add, '__add__')
__sub__ = _arith_method(operator.sub, '__sub__')
__truediv__ = _arith_method(operator.truediv, '__truediv__')
@@ -209,10 +222,15 @@ def __init__(self, data=None, items=None, major_axis=None, minor_axis=None,
copy : boolean, default False
Copy data from inputs. Only affects DataFrame / 2d ndarray input
"""
+ self._init_data( data=data, items=items, major_axis=major_axis, minor_axis=minor_axis,
+ copy=copy, dtype=dtype)
+
+ def _init_data(self, data, copy, dtype, **kwargs):
+ """ generate ND initialization; axes are passed as required objects to __init__ """
if data is None:
data = {}
- passed_axes = [items, major_axis, minor_axis]
+ passed_axes = [ kwargs.get(a) for a in self._AXIS_ORDERS ]
axes = None
if isinstance(data, BlockManager):
if any(x is not None for x in passed_axes):
@@ -238,48 +256,47 @@ def _from_axes(cls, data, axes):
if isinstance(data, BlockManager):
return cls(data)
else:
- items, major, minor = axes
- return cls(data, items=items, major_axis=major,
- minor_axis=minor, copy=False)
+ d = dict([ (i, a) for i, a in zip(cls._AXIS_ORDERS,axes) ])
+ d['copy'] = False
+ return cls(data, **d)
def _init_dict(self, data, axes, dtype=None):
- items, major, minor = axes
+ haxis = axes.pop(self._het_axis)
- # prefilter if items passed
- if items is not None:
- items = _ensure_index(items)
- data = dict((k, v) for k, v in data.iteritems() if k in items)
+ # prefilter if haxis passed
+ if haxis is not None:
+ haxis = _ensure_index(haxis)
+ data = dict((k, v) for k, v in data.iteritems() if k in haxis)
else:
- items = Index(_try_sort(data.keys()))
+ haxis = Index(_try_sort(data.keys()))
for k, v in data.iteritems():
if isinstance(v, dict):
- data[k] = DataFrame(v)
+ data[k] = self._constructor_sliced(v)
- if major is None:
- major = _extract_axis(data, axis=0)
+ # extract axis for remaining axes & create the slicemap
+ raxes = [ self._extract_axis(self, data, axis=i) if a is None else a for i, a in enumerate(axes) ]
+ raxes_sm = self._extract_axes_for_slice(self, raxes)
- if minor is None:
- minor = _extract_axis(data, axis=1)
-
- axes = [items, major, minor]
+ # shallow copy
arrays = []
-
- item_shape = len(major), len(minor)
- for item in items:
- v = values = data.get(item)
+ reshaped_data = data.copy()
+ haxis_shape = [ len(a) for a in raxes ]
+ for h in haxis:
+ v = values = data.get(h)
if v is None:
- values = np.empty(item_shape, dtype=dtype)
+ values = np.empty(haxis_shape, dtype=dtype)
values.fill(np.nan)
- elif isinstance(v, DataFrame):
- v = v.reindex(index=major, columns=minor, copy=False)
+ elif isinstance(v, self._constructor_sliced):
+ d = raxes_sm.copy()
+ d['copy'] = False
+ v = v.reindex(**d)
if dtype is not None:
v = v.astype(dtype)
values = v.values
-
arrays.append(values)
- return self._init_arrays(arrays, items, axes)
+ return self._init_arrays(arrays, haxis, [ haxis ] + raxes)
def _init_arrays(self, arrays, arr_names, axes):
# segregates dtypes and forms blocks matching to columns
@@ -289,7 +306,7 @@ def _init_arrays(self, arrays, arr_names, axes):
@property
def shape(self):
- return len(self.items), len(self.major_axis), len(self.minor_axis)
+ return [ len(getattr(self,a)) for a in self._AXIS_ORDERS ]
@classmethod
def from_dict(cls, data, intersect=False, orient='items', dtype=None):
@@ -326,32 +343,33 @@ def from_dict(cls, data, intersect=False, orient='items', dtype=None):
elif orient != 'items': # pragma: no cover
raise ValueError('only recognize items or minor for orientation')
- data, index, columns = _homogenize_dict(data, intersect=intersect,
- dtype=dtype)
- items = Index(sorted(data.keys()))
- return cls(data, items, index, columns)
+ d = cls._homogenize_dict(cls, data, intersect=intersect, dtype=dtype)
+ d[cls._info_axis] = Index(sorted(d['data'].keys()))
+ return cls(**d)
def __getitem__(self, key):
- if isinstance(self.items, MultiIndex):
+ if isinstance(getattr(self,self._info_axis), MultiIndex):
return self._getitem_multilevel(key)
return super(Panel, self).__getitem__(key)
def _getitem_multilevel(self, key):
- loc = self.items.get_loc(key)
+ info = getattr(self,self._info_axis)
+ loc = info.get_loc(key)
if isinstance(loc, (slice, np.ndarray)):
- new_index = self.items[loc]
+ new_index = info[loc]
result_index = _maybe_droplevels(new_index, key)
- new_values = self.values[loc, :, :]
- result = Panel(new_values,
- items=result_index,
- major_axis=self.major_axis,
- minor_axis=self.minor_axis)
+ slices = [loc] + [slice(None) for x in range(self._AXIS_LEN-1)]
+ new_values = self.values[slices]
+
+ d = self._construct_axes_dict(self._AXIS_ORDERS[1:])
+ d[self._info_axis] = result_index
+ result = self._constructor(new_values, **d)
return result
else:
return self._get_item_cache(key)
def _init_matrix(self, data, axes, dtype=None, copy=False):
- values = _prep_ndarray(data, copy=copy)
+ values = self._prep_ndarray(self, data, copy=copy)
if dtype is not None:
try:
@@ -379,9 +397,9 @@ def __array__(self, dtype=None):
return self.values
def __array_wrap__(self, result):
- return self._constructor(result, items=self.items,
- major_axis=self.major_axis,
- minor_axis=self.minor_axis, copy=False)
+ d = self._construct_axes_dict(self._AXIS_ORDERS)
+ d['copy'] = False
+ return self._constructor(result, **d)
#----------------------------------------------------------------------
# Magic methods
@@ -389,37 +407,26 @@ def __array_wrap__(self, result):
def __repr__(self):
class_name = str(self.__class__)
- I, N, K = len(self.items), len(self.major_axis), len(self.minor_axis)
+ shape = self.shape
+ dims = 'Dimensions: %s' % ' x '.join([ "%d (%s)" % (s, a) for a,s in zip(self._AXIS_ORDERS,shape) ])
- dims = 'Dimensions: %d (items) x %d (major) x %d (minor)' % (I, N, K)
-
- if len(self.major_axis) > 0:
- major = 'Major axis: %s to %s' % (self.major_axis[0],
- self.major_axis[-1])
- else:
- major = 'Major axis: None'
-
- if len(self.minor_axis) > 0:
- minor = 'Minor axis: %s to %s' % (self.minor_axis[0],
- self.minor_axis[-1])
- else:
- minor = 'Minor axis: None'
-
- if len(self.items) > 0:
- items = 'Items: %s to %s' % (self.items[0], self.items[-1])
- else:
- items = 'Items: None'
-
- output = '%s\n%s\n%s\n%s\n%s' % (class_name, dims, items, major, minor)
+ def axis_pretty(a):
+ v = getattr(self,a)
+ if len(v) > 0:
+ return '%s axis: %s to %s' % (a.capitalize(),v[0],v[-1])
+ else:
+ return '%s axis: None' % a.capitalize()
+
+ output = '\n'.join([class_name, dims] + [axis_pretty(a) for a in self._AXIS_ORDERS])
return output
def __iter__(self):
- return iter(self.items)
+ return iter(getattr(self,self._info_axis))
def iteritems(self):
- for item in self.items:
- yield item, self[item]
+ for h in getattr(self,self._info_axis):
+ yield h, self[h]
# Name that won't get automatically converted to items by 2to3. items is
# already in use for the first axis.
@@ -443,10 +450,6 @@ def _get_plane_axes(self, axis):
return index, columns
- @property
- def _constructor(self):
- return type(self)
-
# Fancy indexing
_ix = None
@@ -516,7 +519,7 @@ def _get_values(self):
#----------------------------------------------------------------------
# Getting and setting elements
- def get_value(self, item, major, minor):
+ def get_value(self, *args):
"""
Quickly retrieve single value at (item, major, minor) location
@@ -530,11 +533,14 @@ def get_value(self, item, major, minor):
-------
value : scalar value
"""
+ # require an arg for each axis
+ assert(len(args) == self._AXIS_LEN)
+
# hm, two layers to the onion
- frame = self._get_item_cache(item)
- return frame.get_value(major, minor)
+ frame = self._get_item_cache(args[0])
+ return frame.get_value(*args[1:])
- def set_value(self, item, major, minor, value):
+ def set_value(self, *args):
"""
Quickly set single value at (item, major, minor) location
@@ -551,30 +557,35 @@ def set_value(self, item, major, minor, value):
If label combo is contained, will be reference to calling Panel,
otherwise a new object
"""
+ # require an arg for each axis and the value
+ assert(len(args) == self._AXIS_LEN+1)
+
try:
- frame = self._get_item_cache(item)
- frame.set_value(major, minor, value)
+ frame = self._get_item_cache(args[0])
+ frame.set_value(*args[1:])
return self
except KeyError:
- ax1, ax2, ax3 = self._expand_axes((item, major, minor))
- result = self.reindex(items=ax1, major=ax2, minor=ax3, copy=False)
+ axes = self._expand_axes(args)
+ d = dict([ (a,ax) for a,ax in zip(self._AXIS_ORDERS,axes) ])
+ d['copy'] = False
+ result = self.reindex(**d)
- likely_dtype = com._infer_dtype(value)
- made_bigger = not np.array_equal(ax1, self.items)
+ likely_dtype = com._infer_dtype(args[-1])
+ made_bigger = not np.array_equal(axes[0], getattr(self,self._info_axis))
# how to make this logic simpler?
if made_bigger:
- com._possibly_cast_item(result, item, likely_dtype)
+ com._possibly_cast_item(result, args[0], likely_dtype)
- return result.set_value(item, major, minor, value)
+ return result.set_value(*args)
def _box_item_values(self, key, values):
- return DataFrame(values, index=self.major_axis,
- columns=self.minor_axis)
+ d = self._construct_axes_dict_for_slice(self._AXIS_ORDERS[1:])
+ return self._constructor_sliced(values, **d)
def __getattr__(self, name):
"""After regular attribute access, try looking up the name of an item.
This allows simpler access to items for interactive use."""
- if name in self.items:
+ if name in getattr(self,self._info_axis):
return self[name]
raise AttributeError("'%s' object has no attribute '%s'" %
(type(self).__name__, name))
@@ -584,22 +595,21 @@ def _slice(self, slobj, axis=0):
return self._constructor(new_data)
def __setitem__(self, key, value):
- _, N, K = self.shape
- if isinstance(value, DataFrame):
- value = value.reindex(index=self.major_axis,
- columns=self.minor_axis)
+ shape = tuple(self.shape)
+ if isinstance(value, self._constructor_sliced):
+ value = value.reindex(**self._construct_axes_dict_for_slice(self._AXIS_ORDERS[1:]))
mat = value.values
elif isinstance(value, np.ndarray):
- assert(value.shape == (N, K))
+ assert(value.shape == shape[1:])
mat = np.asarray(value)
elif np.isscalar(value):
dtype = _infer_dtype(value)
- mat = np.empty((N, K), dtype=dtype)
+ mat = np.empty(shape[1:], dtype=dtype)
mat.fill(value)
else:
raise TypeError('Cannot set item of type: %s' % str(type(value)))
- mat = mat.reshape((1, N, K))
+ mat = mat.reshape(tuple([1]) + shape[1:])
NDFrame._set_item(self, key, mat)
def pop(self, item):
@@ -660,11 +670,11 @@ def conform(self, frame, axis='items'):
-------
DataFrame
"""
- index, columns = self._get_plane_axes(axis)
- return frame.reindex(index=index, columns=columns)
+ axes = self._get_plane_axes(axis)
+ return frame.reindex(**self._extract_axes_for_slice(self, axes))
- def reindex(self, major=None, items=None, minor=None, method=None,
- major_axis=None, minor_axis=None, copy=True):
+ def reindex(self, major=None, minor=None, method=None,
+ major_axis=None, minor_axis=None, copy=True, **kwargs):
"""
Conform panel to new axis or axes
@@ -691,19 +701,24 @@ def reindex(self, major=None, items=None, minor=None, method=None,
major = _mut_exclusive(major, major_axis)
minor = _mut_exclusive(minor, minor_axis)
+ al = self._AXIS_LEN
- if (method is None and not self._is_mixed_type and
- com._count_not_none(items, major, minor) == 3):
- return self._reindex_multi(items, major, minor)
+ # _reindex_multi is only supported on Panel (and not higher dims)
+ if (method is None and not self._is_mixed_type and al <= 3):
+ items = kwargs.get('items')
+ if com._count_not_none(items, major, minor) == 3:
+ return self._reindex_multi(items, major, minor)
if major is not None:
- result = result._reindex_axis(major, method, 1, copy)
+ result = result._reindex_axis(major, method, al-2, copy)
if minor is not None:
- result = result._reindex_axis(minor, method, 2, copy)
+ result = result._reindex_axis(minor, method, al-1, copy)
- if items is not None:
- result = result._reindex_axis(items, method, 0, copy)
+ for i, a in enumerate(self._AXIS_ORDERS[0:al-2]):
+ a = kwargs.get(a)
+ if a is not None:
+ result = result._reindex_axis(a, method, i, copy)
if result is self and copy:
raise ValueError('Must specify at least one axis')
@@ -768,8 +783,7 @@ def reindex_axis(self, labels, axis=0, method=None, level=None, copy=True):
return self._reindex_axis(labels, method, axis, copy)
def reindex_like(self, other, method=None):
- """
- Reindex Panel to match indices of another Panel
+ """ return an object with matching indices to myself
Parameters
----------
@@ -780,9 +794,9 @@ def reindex_like(self, other, method=None):
-------
reindexed : Panel
"""
- # todo: object columns
- return self.reindex(major=other.major_axis, items=other.items,
- minor=other.minor_axis, method=method)
+ d = other._construct_axes_dict()
+ d['method'] = method
+ return self.reindex(**d)
def dropna(self, axis=0, how='any'):
"""
@@ -826,8 +840,8 @@ def _combine(self, other, func, axis=0):
return self._combine_frame(other, func, axis=axis)
elif np.isscalar(other):
new_values = func(self.values, other)
- return self._constructor(new_values, self.items, self.major_axis,
- self.minor_axis)
+ d = self._construct_axes_dict()
+ return self._constructor(new_values, **d)
def __neg__(self):
return -1 * self
@@ -924,7 +938,7 @@ def major_xs(self, key, copy=True):
y : DataFrame
index -> minor axis, columns -> items
"""
- return self.xs(key, axis=1, copy=copy)
+ return self.xs(key, axis=self._AXIS_LEN-2, copy=copy)
def minor_xs(self, key, copy=True):
"""
@@ -942,7 +956,7 @@ def minor_xs(self, key, copy=True):
y : DataFrame
index -> major axis, columns -> items
"""
- return self.xs(key, axis=2, copy=copy)
+ return self.xs(key, axis=self._AXIS_LEN-1, copy=copy)
def xs(self, key, axis=1, copy=True):
"""
@@ -956,7 +970,7 @@ def xs(self, key, axis=1, copy=True):
Returns
-------
- y : DataFrame
+ y : object with dimension ndim(self)-1
"""
if axis == 0:
data = self[key]
@@ -967,7 +981,7 @@ def xs(self, key, axis=1, copy=True):
self._consolidate_inplace()
axis_number = self._get_axis_number(axis)
new_data = self._data.xs(key, axis=axis_number, copy=copy)
- return DataFrame(new_data)
+ return self._constructor_sliced(new_data)
def _ixs(self, i, axis=0):
# for compatibility with .ix indexing
@@ -1010,15 +1024,14 @@ def swapaxes(self, axis1='major', axis2='minor', copy=True):
mapping = {i: j, j: i}
new_axes = (self._get_axis(mapping.get(k, k))
- for k in range(3))
+ for k in range(self._AXIS_LEN))
new_values = self.values.swapaxes(i, j)
if copy:
new_values = new_values.copy()
return self._constructor(new_values, *new_axes)
- def transpose(self, items='items', major='major', minor='minor',
- copy=False):
+ def transpose(self, *args, **kwargs):
"""
Permute the dimensions of the Panel
@@ -1040,16 +1053,27 @@ def transpose(self, items='items', major='major', minor='minor',
-------
y : Panel (new object)
"""
- i, j, k = [self._get_axis_number(x) for x in [items, major, minor]]
- if i == j or i == k or j == k:
- raise ValueError('Must specify 3 unique axes')
-
- new_axes = [self._get_axis(x) for x in [i, j, k]]
- new_values = self.values.transpose((i, j, k))
- if copy:
+ # construct the args
+ args = list(args)
+ for a in self._AXIS_ORDERS:
+ if not a in kwargs:
+ try:
+ kwargs[a] = args.pop(0)
+ except (IndexError):
+ raise ValueError("not enough arguments specified to transpose!")
+
+ axes = [self._get_axis_number(kwargs[a]) for a in self._AXIS_ORDERS]
+
+ # we must have unique axes
+ if len(axes) != len(set(axes)):
+ raise ValueError('Must specify %s unique axes' % self._AXIS_LEN)
+
+ new_axes = dict([ (a,self._get_axis(x)) for a, x in zip(self._AXIS_ORDERS,axes)])
+ new_values = self.values.transpose(tuple(axes))
+ if kwargs.get('copy') or (len(args) and args[-1]):
new_values = new_values.copy()
- return self._constructor(new_values, *new_axes)
+ return self._constructor(new_values, **new_axes)
def to_frame(self, filter_observations=True):
"""
@@ -1140,20 +1164,24 @@ def _reduce(self, op, axis=0, skipna=True):
result = f(self.values)
- index, columns = self._get_plane_axes(axis_name)
- if axis_name != 'items':
+ axes = self._get_plane_axes(axis_name)
+ if result.ndim == 2 and axis_name != self._info_axis:
result = result.T
- return DataFrame(result, index=index, columns=columns)
+ return self._constructor_sliced(result, **self._extract_axes_for_slice(self, axes))
def _wrap_result(self, result, axis):
axis = self._get_axis_name(axis)
- index, columns = self._get_plane_axes(axis)
-
- if axis != 'items':
+ axes = self._get_plane_axes(axis)
+ if result.ndim == 2 and axis != self._info_axis:
result = result.T
- return DataFrame(result, index=index, columns=columns)
+ # do we have reduced dimensionality?
+ if self.ndim == result.ndim:
+ return self._constructor(result, **self._construct_axes_dict())
+ elif self.ndim == result.ndim+1:
+ return self._constructor_sliced(result, **self._extract_axes_for_slice(self, axes))
+ raise PandasError("invalid _wrap_result [self->%s] [result->%s]" % (self.ndim,result.ndim))
def count(self, axis='major'):
"""
@@ -1381,71 +1409,83 @@ def _get_join_index(self, other, how):
join_minor = self.minor_axis.union(other.minor_axis)
return join_major, join_minor
-WidePanel = Panel
-LongPanel = DataFrame
-
-
-def _prep_ndarray(values, copy=True):
- if not isinstance(values, np.ndarray):
- values = np.asarray(values)
- # NumPy strings are a pain, convert to object
- if issubclass(values.dtype.type, basestring):
- values = np.array(values, dtype=object, copy=True)
- else:
- if copy:
- values = values.copy()
- assert(values.ndim == 3)
- return values
-
-
-def _homogenize_dict(frames, intersect=True, dtype=None):
- """
- Conform set of DataFrame-like objects to either an intersection
- of indices / columns or a union.
-
- Parameters
- ----------
- frames : dict
- intersect : boolean, default True
+ # miscellaneous data creation
+ @staticmethod
+ def _extract_axes(self, data, axes, **kwargs):
+ """ return a list of the axis indices """
+ return [ self._extract_axis(self, data, axis=i, **kwargs) for i, a in enumerate(axes) ]
+
+ @staticmethod
+ def _extract_axes_for_slice(self, axes):
+ """ return the slice dictionary for these axes """
+ return dict([ (self._AXIS_SLICEMAP[i], a) for i, a in zip(self._AXIS_ORDERS[self._AXIS_LEN-len(axes):],axes) ])
+
+ @staticmethod
+ def _prep_ndarray(self, values, copy=True):
+ if not isinstance(values, np.ndarray):
+ values = np.asarray(values)
+ # NumPy strings are a pain, convert to object
+ if issubclass(values.dtype.type, basestring):
+ values = np.array(values, dtype=object, copy=True)
+ else:
+ if copy:
+ values = values.copy()
+ assert(values.ndim == self._AXIS_LEN)
+ return values
- Returns
- -------
- dict of aligned frames, index, columns
- """
- result = {}
+ @staticmethod
+ def _homogenize_dict(self, frames, intersect=True, dtype=None):
+ """
+ Conform set of _constructor_sliced-like objects to either an intersection
+ of indices / columns or a union.
+
+ Parameters
+ ----------
+ frames : dict
+ intersect : boolean, default True
+
+ Returns
+ -------
+ dict of aligned results & indices
+ """
+ result = {}
- adj_frames = {}
- for k, v in frames.iteritems():
- if isinstance(v, dict):
- adj_frames[k] = DataFrame(v)
- else:
- adj_frames[k] = v
+ adj_frames = {}
+ for k, v in frames.iteritems():
+ if isinstance(v, dict):
+ adj_frames[k] = self._constructor_sliced(v)
+ else:
+ adj_frames[k] = v
- index = _extract_axis(adj_frames, axis=0, intersect=intersect)
- columns = _extract_axis(adj_frames, axis=1, intersect=intersect)
+ axes = self._AXIS_ORDERS[1:]
+ axes_dict = dict([ (a,ax) for a,ax in zip(axes,self._extract_axes(self, adj_frames, axes, intersect=intersect)) ])
- for key, frame in adj_frames.iteritems():
- if frame is not None:
- result[key] = frame.reindex(index=index, columns=columns,
- copy=False)
- else:
- result[key] = None
+ reindex_dict = dict([ (self._AXIS_SLICEMAP[a],axes_dict[a]) for a in axes ])
+ reindex_dict['copy'] = False
+ for key, frame in adj_frames.iteritems():
+ if frame is not None:
+ result[key] = frame.reindex(**reindex_dict)
+ else:
+ result[key] = None
- return result, index, columns
+ axes_dict['data'] = result
+ return axes_dict
+ @staticmethod
+ def _extract_axis(self, data, axis=0, intersect=False):
-def _extract_axis(data, axis=0, intersect=False):
- if len(data) == 0:
- index = Index([])
- elif len(data) > 0:
- raw_lengths = []
- indexes = []
index = None
+ if len(data) == 0:
+ index = Index([])
+ elif len(data) > 0:
+ raw_lengths = []
+ indexes = []
+
have_raw_arrays = False
have_frames = False
for v in data.values():
- if isinstance(v, DataFrame):
+ if isinstance(v, self._constructor_sliced):
have_frames = True
indexes.append(v._get_axis(axis))
elif v is not None:
@@ -1459,7 +1499,7 @@ def _extract_axis(data, axis=0, intersect=False):
lengths = list(set(raw_lengths))
if len(lengths) > 1:
raise ValueError('ndarrays must match shape on axis %d' % axis)
-
+
if have_frames:
assert(lengths[0] == len(index))
else:
@@ -1468,7 +1508,10 @@ def _extract_axis(data, axis=0, intersect=False):
if index is None:
index = Index([])
- return _ensure_index(index)
+ return _ensure_index(index)
+
+WidePanel = Panel
+LongPanel = DataFrame
def _monotonic(arr):
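The generalized `transpose` in the panel.py hunk above merges positional and keyword axis arguments before validating uniqueness. A minimal standalone sketch of that resolution step (the function name and driver call are illustrative, not part of the patch):

```python
def resolve_transpose_args(axis_orders, *args, **kwargs):
    """Fill axes not given as keywords from the positional args, in
    declared axis order, then require that all axes are unique."""
    args = list(args)
    resolved = {}
    for a in axis_orders:
        if a in kwargs:
            resolved[a] = kwargs[a]
        else:
            try:
                resolved[a] = args.pop(0)
            except IndexError:
                raise ValueError("not enough arguments specified to transpose!")
    # we must have unique axes, mirroring the check in the patch
    if len(set(resolved.values())) != len(axis_orders):
        raise ValueError("Must specify %d unique axes" % len(axis_orders))
    return resolved

# mixing positional and keyword forms, as the generalized transpose allows
print(resolve_transpose_args(['items', 'major_axis', 'minor_axis'],
                             'minor_axis', items='items', major_axis='major_axis'))
```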
diff --git a/pandas/core/panel4d.py b/pandas/core/panel4d.py
new file mode 100755
index 0000000000000..504111bef5414
--- /dev/null
+++ b/pandas/core/panel4d.py
@@ -0,0 +1,112 @@
+""" Panel4D: a 4-d dict-like collection of panels """
+
+from pandas.core.panel import Panel
+import pandas.lib as lib
+
+
+class Panel4D(Panel):
+ _AXIS_ORDERS = ['labels','items','major_axis','minor_axis']
+ _AXIS_NUMBERS = dict([ (a,i) for i, a in enumerate(_AXIS_ORDERS) ])
+ _AXIS_ALIASES = {
+ 'major' : 'major_axis',
+ 'minor' : 'minor_axis'
+ }
+ _AXIS_NAMES = dict([ (i,a) for i, a in enumerate(_AXIS_ORDERS) ])
+ _AXIS_SLICEMAP = {
+ 'items' : 'items',
+ 'major_axis' : 'major_axis',
+ 'minor_axis' : 'minor_axis'
+ }
+ _AXIS_LEN = len(_AXIS_ORDERS)
+
+ # major
+ _default_stat_axis = 2
+
+ # info axis
+ _het_axis = 0
+ _info_axis = _AXIS_ORDERS[_het_axis]
+
+ labels = lib.AxisProperty(0)
+ items = lib.AxisProperty(1)
+ major_axis = lib.AxisProperty(2)
+ minor_axis = lib.AxisProperty(3)
+
+ _constructor_sliced = Panel
+
+ def __init__(self, data=None, labels=None, items=None, major_axis=None, minor_axis=None, copy=False, dtype=None):
+ """
+ Represents a 4-dimensional structured data set
+
+ Parameters
+ ----------
+ data : ndarray (labels x items x major x minor), or dict of Panels
+
+ labels : Index or array-like : axis=0
+ items : Index or array-like : axis=1
+ major_axis : Index or array-like: axis=2
+ minor_axis : Index or array-like: axis=3
+
+ dtype : dtype, default None
+ Data type to force, otherwise infer
+ copy : boolean, default False
+ Copy data from inputs. Only affects DataFrame / 2d ndarray input
+ """
+ self._init_data( data=data, labels=labels, items=items, major_axis=major_axis, minor_axis=minor_axis,
+ copy=copy, dtype=dtype)
+
+ def _get_plane_axes(self, axis):
+ axis = self._get_axis_name(axis)
+
+ if axis == 'major_axis':
+ items = self.labels
+ major = self.items
+ minor = self.minor_axis
+ elif axis == 'minor_axis':
+ items = self.labels
+ major = self.items
+ minor = self.major_axis
+ elif axis == 'items':
+ items = self.labels
+ major = self.major_axis
+ minor = self.minor_axis
+ elif axis == 'labels':
+ items = self.items
+ major = self.major_axis
+ minor = self.minor_axis
+
+ return items, major, minor
+
+ def _combine(self, other, func, axis=0):
+ if isinstance(other, Panel4D):
+ return self._combine_panel4d(other, func)
+ return super(Panel4D, self)._combine(other, func, axis=axis)
+
+ def _combine_panel4d(self, other, func):
+ labels = self.labels + other.labels
+ items = self.items + other.items
+ major = self.major_axis + other.major_axis
+ minor = self.minor_axis + other.minor_axis
+
+ # could check that everything's the same size, but forget it
+ this = self.reindex(labels=labels, items=items, major=major, minor=minor)
+ other = other.reindex(labels=labels, items=items, major=major, minor=minor)
+
+ result_values = func(this.values, other.values)
+
+ return self._constructor(result_values, labels, items, major, minor)
+
+ def join(self, other, how='left', lsuffix='', rsuffix=''):
+ if isinstance(other, Panel4D):
+ join_major, join_minor = self._get_join_index(other, how)
+ this = self.reindex(major=join_major, minor=join_minor)
+ other = other.reindex(major=join_major, minor=join_minor)
+ merged_data = this._data.merge(other._data, lsuffix, rsuffix)
+ return self._constructor(merged_data)
+ return super(Panel4D, self).join(other=other,how=how,lsuffix=lsuffix,rsuffix=rsuffix)
+
+ ### remove operations ####
+ def to_frame(self, *args, **kwargs):
+ raise NotImplementedError
+ def to_excel(self, *args, **kwargs):
+ raise NotImplementedError
+
diff --git a/pandas/core/panelnd.py b/pandas/core/panelnd.py
new file mode 100644
index 0000000000000..e4638750aa1b2
--- /dev/null
+++ b/pandas/core/panelnd.py
@@ -0,0 +1,122 @@
+""" Factory methods to create N-D panels """
+
+import pandas
+from pandas.core.panel import Panel
+import pandas.lib as lib
+
+def create_nd_panel_factory(klass_name, axis_orders, axis_slices, slicer, axis_aliases = None, stat_axis = 2):
+ """ manufacture an n-d class:
+
+ parameters
+ ----------
+ klass_name : the klass name
+ axis_orders : the names of the axes in order (highest to lowest)
+ axis_slices : a dictionary that defines how the axes map to the sliced axis
+ slicer : the class representing a slice of this panel
+ axis_aliases: a dictionary defining aliases for various axes
+ default = { major : major_axis, minor : minor_axis }
+ stat_axis : the default statistic axis
+ default = 2
+ het_axis : the info axis
+
+
+ returns
+ -------
+ a class object representing this panel
+
+
+ """
+
+ # build the klass
+ klass = type(klass_name, (slicer,),{})
+
+ # add the class variables
+ klass._AXIS_ORDERS = axis_orders
+ klass._AXIS_NUMBERS = dict([ (a,i) for i, a in enumerate(axis_orders) ])
+ klass._AXIS_ALIASES = axis_aliases or dict()
+ klass._AXIS_NAMES = dict([ (i,a) for i, a in enumerate(axis_orders) ])
+ klass._AXIS_SLICEMAP = axis_slices
+ klass._AXIS_LEN = len(axis_orders)
+ klass._default_stat_axis = stat_axis
+ klass._het_axis = 0
+ klass._info_axis = axis_orders[klass._het_axis]
+ klass._constructor_sliced = slicer
+
+ # add the axes
+ for i, a in enumerate(axis_orders):
+ setattr(klass,a,lib.AxisProperty(i))
+
+ # define the __init__
+ def __init__(self, *args, **kwargs):
+ if not (kwargs.get('data') or len(args)):
+ raise Exception("must supply at least a data argument to [%s]" % klass_name)
+ if 'copy' not in kwargs:
+ kwargs['copy'] = False
+ if 'dtype' not in kwargs:
+ kwargs['dtype'] = None
+ self._init_data( *args, **kwargs)
+ klass.__init__ = __init__
+
+ # define _get_plane_axes
+ def _get_plane_axes(self, axis):
+ axis = self._get_axis_name(axis)
+ index = self._AXIS_ORDERS.index(axis)
+
+ planes = []
+ if index:
+ planes.extend(self._AXIS_ORDERS[0:index])
+ if index != self._AXIS_LEN:
+ planes.extend(self._AXIS_ORDERS[index:])
+
+ return planes
+ klass._get_plane_axes = _get_plane_axes
+
+ # remove these operations
+ def to_frame(self, *args, **kwargs):
+ raise NotImplementedError
+ klass.to_frame = to_frame
+ def to_excel(self, *args, **kwargs):
+ raise NotImplementedError
+ klass.to_excel = to_excel
+
+ return klass
+
+
+if __name__ == '__main__':
+
+ # create a sample
+ from pandas.util import testing
+ print pandas.__version__
+
+ # create a 4D
+ Panel4DNew = create_nd_panel_factory(
+ klass_name = 'Panel4DNew',
+ axis_orders = ['labels1','items1','major_axis','minor_axis'],
+ axis_slices = { 'items1' : 'items', 'major_axis' : 'major_axis', 'minor_axis' : 'minor_axis' },
+ slicer = Panel,
+ axis_aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
+ stat_axis = 2)
+
+ p4dn = Panel4DNew(dict(L1 = testing.makePanel(), L2 = testing.makePanel()))
+ print "creating a 4-D Panel"
+ print p4dn, "\n"
+
+ # create a 5D
+ Panel5DNew = create_nd_panel_factory(
+ klass_name = 'Panel5DNew',
+ axis_orders = [ 'cool1', 'labels1','items1','major_axis','minor_axis'],
+ axis_slices = { 'labels1' : 'labels1', 'items1' : 'items', 'major_axis' : 'major_axis', 'minor_axis' : 'minor_axis' },
+ slicer = Panel4DNew,
+ axis_aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
+ stat_axis = 2)
+
+ p5dn = Panel5DNew(dict(C1 = p4dn))
+
+ print "creating a 5-D Panel"
+ print p5dn, "\n"
+
+ print "Slicing p5dn"
+ print p5dn.ix['C1',:,:,0:3,:], "\n"
+
+ print "Transposing p5dn"
+ print p5dn.transpose(1,2,3,4,0), "\n"
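`create_nd_panel_factory` manufactures each class with the three-argument form of `type()` and then attaches class attributes and methods one by one. A pandas-free sketch of that manufacturing pattern (all names below are illustrative, not part of the patch):

```python
def make_axis_class(name, base, axis_orders):
    # build the class dynamically, as the factory above does
    klass = type(name, (base,), {})
    klass._AXIS_ORDERS = axis_orders
    klass._AXIS_NUMBERS = dict((a, i) for i, a in enumerate(axis_orders))
    klass._AXIS_LEN = len(axis_orders)

    # inject a method onto the freshly built class
    def _get_axis_number(self, axis):
        return self._AXIS_NUMBERS[axis]
    klass._get_axis_number = _get_axis_number
    return klass

Panel3 = make_axis_class('Panel3', object, ['items', 'major_axis', 'minor_axis'])
p = Panel3()
print(Panel3.__name__, p._get_axis_number('minor_axis'))  # Panel3 2
```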
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
old mode 100644
new mode 100755
diff --git a/pandas/tests/test_panel4d.py b/pandas/tests/test_panel4d.py
new file mode 100755
index 0000000000000..40bd84044dbdd
--- /dev/null
+++ b/pandas/tests/test_panel4d.py
@@ -0,0 +1,1049 @@
+from datetime import datetime
+import os
+import operator
+import unittest
+import nose
+
+import numpy as np
+
+from pandas import DataFrame, Index, isnull, notnull, pivot, MultiIndex
+from pandas.core.datetools import bday
+from pandas.core.frame import group_agg
+from pandas.core.panel import Panel
+from pandas.core.panel4d import Panel4D
+from pandas.core.series import remove_na
+import pandas.core.common as com
+import pandas.core.panel as panelmod
+from pandas.util import py3compat
+from pandas.io.parsers import (ExcelFile, ExcelWriter)
+
+from pandas.util.testing import (assert_panel_equal,
+ assert_panel4d_equal,
+ assert_frame_equal,
+ assert_series_equal,
+ assert_almost_equal)
+import pandas.util.testing as tm
+
+def add_nans(panel4d):
+ for l, label in enumerate(panel4d.labels):
+ panel = panel4d[label]
+ tm.add_nans(panel)
+
+class SafeForLongAndSparse(object):
+
+ def test_repr(self):
+ foo = repr(self.panel4d)
+
+ def test_iter(self):
+ tm.equalContents(list(self.panel4d), self.panel4d.labels)
+
+ def test_count(self):
+ f = lambda s: notnull(s).sum()
+ self._check_stat_op('count', f, obj=self.panel4d, has_skipna=False)
+
+ def test_sum(self):
+ self._check_stat_op('sum', np.sum)
+
+ def test_mean(self):
+ self._check_stat_op('mean', np.mean)
+
+ def test_prod(self):
+ self._check_stat_op('prod', np.prod)
+
+ def test_median(self):
+ def wrapper(x):
+ if isnull(x).any():
+ return np.nan
+ return np.median(x)
+
+ self._check_stat_op('median', wrapper)
+
+ def test_min(self):
+ self._check_stat_op('min', np.min)
+
+ def test_max(self):
+ self._check_stat_op('max', np.max)
+
+ def test_skew(self):
+ from scipy.stats import skew
+ def this_skew(x):
+ if len(x) < 3:
+ return np.nan
+ return skew(x, bias=False)
+ self._check_stat_op('skew', this_skew)
+
+ # def test_mad(self):
+ # f = lambda x: np.abs(x - x.mean()).mean()
+ # self._check_stat_op('mad', f)
+
+ def test_var(self):
+ def alt(x):
+ if len(x) < 2:
+ return np.nan
+ return np.var(x, ddof=1)
+ self._check_stat_op('var', alt)
+
+ def test_std(self):
+ def alt(x):
+ if len(x) < 2:
+ return np.nan
+ return np.std(x, ddof=1)
+ self._check_stat_op('std', alt)
+
+ # def test_skew(self):
+ # from scipy.stats import skew
+
+ # def alt(x):
+ # if len(x) < 3:
+ # return np.nan
+ # return skew(x, bias=False)
+
+ # self._check_stat_op('skew', alt)
+
+ def _check_stat_op(self, name, alternative, obj=None, has_skipna=True):
+ if obj is None:
+ obj = self.panel4d
+
+ # # set some NAs
+ # obj.ix[5:10] = np.nan
+ # obj.ix[15:20, -2:] = np.nan
+
+ f = getattr(obj, name)
+
+ if has_skipna:
+ def skipna_wrapper(x):
+ nona = remove_na(x)
+ if len(nona) == 0:
+ return np.nan
+ return alternative(nona)
+
+ def wrapper(x):
+ return alternative(np.asarray(x))
+
+ for i in range(obj.ndim):
+ result = f(axis=i, skipna=False)
+ assert_panel_equal(result, obj.apply(wrapper, axis=i))
+ else:
+ skipna_wrapper = alternative
+ wrapper = alternative
+
+ for i in range(obj.ndim):
+ result = f(axis=i)
+ assert_panel_equal(result, obj.apply(skipna_wrapper, axis=i))
+
+ self.assertRaises(Exception, f, axis=obj.ndim)
+
+class SafeForSparse(object):
+
+ @classmethod
+ def assert_panel_equal(cls, x, y):
+ assert_panel_equal(x, y)
+
+ @classmethod
+ def assert_panel4d_equal(cls, x, y):
+ assert_panel4d_equal(x, y)
+
+ def test_get_axis(self):
+ assert(self.panel4d._get_axis(0) is self.panel4d.labels)
+ assert(self.panel4d._get_axis(1) is self.panel4d.items)
+ assert(self.panel4d._get_axis(2) is self.panel4d.major_axis)
+ assert(self.panel4d._get_axis(3) is self.panel4d.minor_axis)
+
+ def test_set_axis(self):
+ new_labels = Index(np.arange(len(self.panel4d.labels)))
+ new_items = Index(np.arange(len(self.panel4d.items)))
+ new_major = Index(np.arange(len(self.panel4d.major_axis)))
+ new_minor = Index(np.arange(len(self.panel4d.minor_axis)))
+
+ # ensure propagate to potentially prior-cached items too
+ label = self.panel4d['l1']
+ self.panel4d.labels = new_labels
+
+ if hasattr(self.panel4d, '_item_cache'):
+ self.assert_('l1' not in self.panel4d._item_cache)
+ self.assert_(self.panel4d.labels is new_labels)
+
+ self.panel4d.major_axis = new_major
+ self.assert_(self.panel4d[0].major_axis is new_major)
+ self.assert_(self.panel4d.major_axis is new_major)
+
+ self.panel4d.minor_axis = new_minor
+ self.assert_(self.panel4d[0].minor_axis is new_minor)
+ self.assert_(self.panel4d.minor_axis is new_minor)
+
+ def test_get_axis_number(self):
+ self.assertEqual(self.panel4d._get_axis_number('labels'), 0)
+ self.assertEqual(self.panel4d._get_axis_number('items'), 1)
+ self.assertEqual(self.panel4d._get_axis_number('major'), 2)
+ self.assertEqual(self.panel4d._get_axis_number('minor'), 3)
+
+ def test_get_axis_name(self):
+ self.assertEqual(self.panel4d._get_axis_name(0), 'labels')
+ self.assertEqual(self.panel4d._get_axis_name(1), 'items')
+ self.assertEqual(self.panel4d._get_axis_name(2), 'major_axis')
+ self.assertEqual(self.panel4d._get_axis_name(3), 'minor_axis')
+
+ #def test_get_plane_axes(self):
+ # # what to do here?
+
+ # index, columns = self.panel._get_plane_axes('items')
+ # index, columns = self.panel._get_plane_axes('major_axis')
+ # index, columns = self.panel._get_plane_axes('minor_axis')
+ # index, columns = self.panel._get_plane_axes(0)
+
+ def test_truncate(self):
+ raise nose.SkipTest
+
+ #dates = self.panel.major_axis
+ #start, end = dates[1], dates[5]
+
+ #trunced = self.panel.truncate(start, end, axis='major')
+ #expected = self.panel['ItemA'].truncate(start, end)
+
+ #assert_frame_equal(trunced['ItemA'], expected)
+
+ #trunced = self.panel.truncate(before=start, axis='major')
+ #expected = self.panel['ItemA'].truncate(before=start)
+
+ #assert_frame_equal(trunced['ItemA'], expected)
+
+ #trunced = self.panel.truncate(after=end, axis='major')
+ #expected = self.panel['ItemA'].truncate(after=end)
+
+ #assert_frame_equal(trunced['ItemA'], expected)
+
+ # XXX test other axes
+
+ def test_arith(self):
+ self._test_op(self.panel4d, operator.add)
+ self._test_op(self.panel4d, operator.sub)
+ self._test_op(self.panel4d, operator.mul)
+ self._test_op(self.panel4d, operator.truediv)
+ self._test_op(self.panel4d, operator.floordiv)
+ self._test_op(self.panel4d, operator.pow)
+
+ self._test_op(self.panel4d, lambda x, y: y + x)
+ self._test_op(self.panel4d, lambda x, y: y - x)
+ self._test_op(self.panel4d, lambda x, y: y * x)
+ self._test_op(self.panel4d, lambda x, y: y / x)
+ self._test_op(self.panel4d, lambda x, y: y ** x)
+
+ self.assertRaises(Exception, self.panel4d.__add__, self.panel4d['l1'])
+
+ @staticmethod
+ def _test_op(panel4d, op):
+ result = op(panel4d, 1)
+ assert_panel_equal(result['l1'], op(panel4d['l1'], 1))
+
+ def test_keys(self):
+ tm.equalContents(self.panel4d.keys(), self.panel4d.labels)
+
+ def test_iteritems(self):
+ """Test panel4d.iteritems(), aka panel4d.iterkv()"""
+ # just test that it works
+ for k, v in self.panel4d.iterkv():
+ pass
+
+ self.assertEqual(len(list(self.panel4d.iterkv())),
+ len(self.panel4d.labels))
+
+ def test_combinePanel4d(self):
+ result = self.panel4d.add(self.panel4d)
+ self.assert_panel4d_equal(result, self.panel4d * 2)
+
+ def test_neg(self):
+ self.assert_panel4d_equal(-self.panel4d, self.panel4d * -1)
+
+ def test_select(self):
+ p = self.panel4d
+
+ # select labels
+ result = p.select(lambda x: x in ('l1', 'l3'), axis='labels')
+ expected = p.reindex(labels=['l1','l3'])
+ self.assert_panel4d_equal(result, expected)
+
+ # select items
+ result = p.select(lambda x: x in ('ItemA', 'ItemC'), axis='items')
+ expected = p.reindex(items=['ItemA', 'ItemC'])
+ self.assert_panel4d_equal(result, expected)
+
+ # select major_axis
+ result = p.select(lambda x: x >= datetime(2000, 1, 15), axis='major')
+ new_major = p.major_axis[p.major_axis >= datetime(2000, 1, 15)]
+ expected = p.reindex(major=new_major)
+ self.assert_panel4d_equal(result, expected)
+
+ # select minor_axis
+ result = p.select(lambda x: x in ('D', 'A'), axis=3)
+ expected = p.reindex(minor=['A', 'D'])
+ self.assert_panel4d_equal(result, expected)
+
+ # corner case, empty thing
+ result = p.select(lambda x: x in ('foo',), axis='items')
+ self.assert_panel4d_equal(result, p.reindex(items=[]))
+
+ def test_get_value(self):
+ for item in self.panel.items:
+ for mjr in self.panel.major_axis[::2]:
+ for mnr in self.panel.minor_axis:
+ result = self.panel.get_value(item, mjr, mnr)
+ expected = self.panel[item][mnr][mjr]
+ assert_almost_equal(result, expected)
+
+ def test_abs(self):
+ result = self.panel4d.abs()
+ expected = np.abs(self.panel4d)
+ self.assert_panel4d_equal(result, expected)
+
+ p = self.panel4d['l1']
+ result = p.abs()
+ expected = np.abs(p)
+ assert_panel_equal(result, expected)
+
+ df = p['ItemA']
+ result = df.abs()
+ expected = np.abs(df)
+ assert_frame_equal(result, expected)
+
+class CheckIndexing(object):
+
+
+ def test_getitem(self):
+ self.assertRaises(Exception, self.panel4d.__getitem__, 'ItemQ')
+
+ def test_delitem_and_pop(self):
+ expected = self.panel4d['l2']
+ result = self.panel4d.pop('l2')
+ assert_panel_equal(expected, result)
+ self.assert_('l2' not in self.panel4d.labels)
+
+ del self.panel4d['l3']
+ self.assert_('l3' not in self.panel4d.labels)
+ self.assertRaises(Exception, self.panel4d.__delitem__, 'l3')
+
+ values = np.empty((4, 4, 4, 4))
+ values[0] = 0
+ values[1] = 1
+ values[2] = 2
+ values[3] = 3
+
+ panel4d = Panel4D(values, range(4), range(4), range(4), range(4))
+
+ # did we delete the right row?
+
+ panel4dc = panel4d.copy()
+ del panel4dc[0]
+ assert_panel_equal(panel4dc[1], panel4d[1])
+ assert_panel_equal(panel4dc[2], panel4d[2])
+ assert_panel_equal(panel4dc[3], panel4d[3])
+
+ panel4dc = panel4d.copy()
+ del panel4dc[1]
+ assert_panel_equal(panel4dc[0], panel4d[0])
+ assert_panel_equal(panel4dc[2], panel4d[2])
+ assert_panel_equal(panel4dc[3], panel4d[3])
+
+ panel4dc = panel4d.copy()
+ del panel4dc[2]
+ assert_panel_equal(panel4dc[1], panel4d[1])
+ assert_panel_equal(panel4dc[0], panel4d[0])
+ assert_panel_equal(panel4dc[3], panel4d[3])
+
+ panel4dc = panel4d.copy()
+ del panel4dc[3]
+ assert_panel_equal(panel4dc[1], panel4d[1])
+ assert_panel_equal(panel4dc[2], panel4d[2])
+ assert_panel_equal(panel4dc[0], panel4d[0])
+
+ def test_setitem(self):
+ ## LongPanel with one item
+ #lp = self.panel.filter(['ItemA', 'ItemB']).to_frame()
+ #self.assertRaises(Exception, self.panel.__setitem__,
+ # 'ItemE', lp)
+
+ # Panel
+ p = Panel(dict(ItemA = self.panel4d['l1']['ItemA'][2:].filter(items=['A', 'B'])))
+ self.panel4d['l4'] = p
+ self.panel4d['l5'] = p
+
+ p2 = self.panel4d['l4']
+
+ assert_panel_equal(p, p2.reindex(items = p.items,
+ major_axis = p.major_axis,
+ minor_axis = p.minor_axis))
+
+ # scalar
+ self.panel4d['lG'] = 1
+ self.panel4d['lE'] = True
+ self.assert_(self.panel4d['lG'].values.dtype == np.int64)
+ self.assert_(self.panel4d['lE'].values.dtype == np.bool_)
+
+ # object dtype
+ self.panel4d['lQ'] = 'foo'
+ self.assert_(self.panel4d['lQ'].values.dtype == np.object_)
+
+ # boolean dtype
+ self.panel4d['lP'] = self.panel4d['l1'] > 0
+ self.assert_(self.panel4d['lP'].values.dtype == np.bool_)
+
+ def test_setitem_ndarray(self):
+ raise nose.SkipTest
+ # from pandas import DateRange, datetools
+
+ # timeidx = DateRange(start=datetime(2009,1,1),
+ # end=datetime(2009,12,31),
+ # offset=datetools.MonthEnd())
+ # lons_coarse = np.linspace(-177.5, 177.5, 72)
+ # lats_coarse = np.linspace(-87.5, 87.5, 36)
+ # P = Panel(items=timeidx, major_axis=lons_coarse, minor_axis=lats_coarse)
+ # data = np.random.randn(72*36).reshape((72,36))
+ # key = datetime(2009,2,28)
+ # P[key] = data#
+
+ # assert_almost_equal(P[key].values, data)
+
+ def test_major_xs(self):
+ ref = self.panel4d['l1']['ItemA']
+
+ idx = self.panel4d.major_axis[5]
+ xs = self.panel4d.major_xs(idx)
+
+ assert_series_equal(xs['l1'].T['ItemA'], ref.xs(idx))
+
+ # not contained
+ idx = self.panel4d.major_axis[0] - bday
+ self.assertRaises(Exception, self.panel4d.major_xs, idx)
+
+ def test_major_xs_mixed(self):
+ self.panel4d['l4'] = 'foo'
+ xs = self.panel4d.major_xs(self.panel4d.major_axis[0])
+ self.assert_(xs['l1']['A'].dtype == np.float64)
+ self.assert_(xs['l4']['A'].dtype == np.object_)
+
+ def test_minor_xs(self):
+ ref = self.panel4d['l1']['ItemA']
+
+ idx = self.panel4d.minor_axis[1]
+ xs = self.panel4d.minor_xs(idx)
+
+ assert_series_equal(xs['l1'].T['ItemA'], ref[idx])
+
+ # not contained
+ self.assertRaises(Exception, self.panel4d.minor_xs, 'E')
+
+ def test_minor_xs_mixed(self):
+ self.panel4d['l4'] = 'foo'
+
+ xs = self.panel4d.minor_xs('D')
+ self.assert_(xs['l1'].T['ItemA'].dtype == np.float64)
+ self.assert_(xs['l4'].T['ItemA'].dtype == np.object_)
+
+ def test_xs(self):
+ l1 = self.panel4d.xs('l1', axis=0)
+ expected = self.panel4d['l1']
+ assert_panel_equal(l1, expected)
+
+ # not view by default
+ l1.values[:] = np.nan
+ self.assert_(not np.isnan(self.panel4d['l1'].values).all())
+
+ # but can get view
+ l1_view = self.panel4d.xs('l1', axis=0, copy=False)
+ l1_view.values[:] = np.nan
+ self.assert_(np.isnan(self.panel4d['l1'].values).all())
+
+ # mixed-type
+ self.panel4d['strings'] = 'foo'
+ self.assertRaises(Exception, self.panel4d.xs, 'D', axis=2,
+ copy=False)
+
+ def test_getitem_fancy_labels(self):
+ panel4d = self.panel4d
+
+ labels = panel4d.labels[[1, 0]]
+ items = panel4d.items[[1, 0]]
+ dates = panel4d.major_axis[::2]
+ cols = ['D', 'C', 'F']
+
+ # all 4 specified
+ assert_panel4d_equal(panel4d.ix[labels, items, dates, cols],
+ panel4d.reindex(labels=labels, items=items, major=dates, minor=cols))
+
+ # 3 specified
+ assert_panel4d_equal(panel4d.ix[:, items, dates, cols],
+ panel4d.reindex(items=items, major=dates, minor=cols))
+
+ # 2 specified
+ assert_panel4d_equal(panel4d.ix[:, :, dates, cols],
+ panel4d.reindex(major=dates, minor=cols))
+
+ assert_panel4d_equal(panel4d.ix[:, items, :, cols],
+ panel4d.reindex(items=items, minor=cols))
+
+ assert_panel4d_equal(panel4d.ix[:, items, dates, :],
+ panel4d.reindex(items=items, major=dates))
+
+ # only 1
+ assert_panel4d_equal(panel4d.ix[:, items, :, :],
+ panel4d.reindex(items=items))
+
+ assert_panel4d_equal(panel4d.ix[:, :, dates, :],
+ panel4d.reindex(major=dates))
+
+ assert_panel4d_equal(panel4d.ix[:, :, :, cols],
+ panel4d.reindex(minor=cols))
+
+ def test_getitem_fancy_slice(self):
+ pass
+
+ def test_getitem_fancy_ints(self):
+ pass
+
+ def test_getitem_fancy_xs(self):
+ raise nose.SkipTest
+ #self.assertRaises(NotImplementedError, self.panel4d.major_xs)
+ #self.assertRaises(NotImplementedError, self.panel4d.minor_xs)
+
+ def test_getitem_fancy_xs_check_view(self):
+ raise nose.SkipTest
+ # item = 'ItemB'
+ # date = self.panel.major_axis[5]
+ # col = 'C'
+
+ # # make sure it's always a view
+ # NS = slice(None, None)
+
+ # # DataFrames
+ # comp = assert_frame_equal
+ # self._check_view(item, comp)
+ # self._check_view((item, NS), comp)
+ # self._check_view((item, NS, NS), comp)
+ # self._check_view((NS, date), comp)
+ # self._check_view((NS, date, NS), comp)
+ # self._check_view((NS, NS, 'C'), comp)
+
+ # # Series
+ # comp = assert_series_equal
+ # self._check_view((item, date), comp)
+ # self._check_view((item, date, NS), comp)
+ # self._check_view((item, NS, 'C'), comp)
+ # self._check_view((NS, date, 'C'), comp)#
+
+ #def _check_view(self, indexer, comp):
+ # cp = self.panel.copy()
+ # obj = cp.ix[indexer]
+ # obj.values[:] = 0
+ # self.assert_((obj.values == 0).all())
+ # comp(cp.ix[indexer].reindex_like(obj), obj)
+
+ def test_get_value(self):
+ for label in self.panel4d.labels:
+ for item in self.panel4d.items:
+ for mjr in self.panel4d.major_axis[::2]:
+ for mnr in self.panel4d.minor_axis:
+ result = self.panel4d.get_value(label, item, mjr, mnr)
+ expected = self.panel4d[label][item][mnr][mjr]
+ assert_almost_equal(result, expected)
+
+ def test_set_value(self):
+ for label in self.panel4d.labels:
+ for item in self.panel4d.items:
+ for mjr in self.panel4d.major_axis[::2]:
+ for mnr in self.panel4d.minor_axis:
+ self.panel4d.set_value(label, item, mjr, mnr, 1.)
+ assert_almost_equal(self.panel4d[label][item][mnr][mjr], 1.)
+
+ # resize
+ res = self.panel4d.set_value('l4', 'ItemE', 'foo', 'bar', 1.5)
+ self.assert_(isinstance(res, Panel4D))
+ self.assert_(res is not self.panel4d)
+ self.assertEqual(res.get_value('l4', 'ItemE', 'foo', 'bar'), 1.5)
+
+ res3 = self.panel4d.set_value('l4', 'ItemE', 'foobar', 'baz', 5)
+ self.assert_(com.is_float_dtype(res3['l4'].values))
+
+class TestPanel4d(unittest.TestCase, CheckIndexing, SafeForSparse, SafeForLongAndSparse):
+
+ @classmethod
+ def assert_panel4d_equal(cls,x, y):
+ assert_panel4d_equal(x, y)
+
+ def setUp(self):
+ self.panel4d = tm.makePanel4D()
+ add_nans(self.panel4d)
+
+ def test_constructor(self):
+ # with BlockManager
+ panel4d = Panel4D(self.panel4d._data)
+ self.assert_(panel4d._data is self.panel4d._data)
+
+ panel4d = Panel4D(self.panel4d._data, copy=True)
+ self.assert_(panel4d._data is not self.panel4d._data)
+ assert_panel4d_equal(panel4d, self.panel4d)
+
+ # strings handled prop
+ #panel4d = Panel4D([[['foo', 'foo', 'foo',],
+ # ['foo', 'foo', 'foo']]])
+ #self.assert_(wp.values.dtype == np.object_)
+
+ vals = self.panel4d.values
+
+ # no copy
+ panel4d = Panel4D(vals)
+ self.assert_(panel4d.values is vals)
+
+ # copy
+ panel4d = Panel4D(vals, copy=True)
+ self.assert_(panel4d.values is not vals)
+
+ def test_constructor_cast(self):
+ zero_filled = self.panel4d.fillna(0)
+
+ casted = Panel4D(zero_filled._data, dtype=int)
+ casted2 = Panel4D(zero_filled.values, dtype=int)
+
+ exp_values = zero_filled.values.astype(int)
+ assert_almost_equal(casted.values, exp_values)
+ assert_almost_equal(casted2.values, exp_values)
+
+ # can't cast
+ data = [[['foo', 'bar', 'baz']]]
+ self.assertRaises(ValueError, Panel, data, dtype=float)
+
+ def test_constructor_empty_panel(self):
+ empty = Panel()
+ self.assert_(len(empty.items) == 0)
+ self.assert_(len(empty.major_axis) == 0)
+ self.assert_(len(empty.minor_axis) == 0)
+
+ def test_constructor_observe_dtype(self):
+ # GH #411
+ panel = Panel(items=range(3), major_axis=range(3),
+ minor_axis=range(3), dtype='O')
+ self.assert_(panel.values.dtype == np.object_)
+
+ def test_consolidate(self):
+ self.assert_(self.panel4d._data.is_consolidated())
+
+ self.panel4d['foo'] = 1.
+ self.assert_(not self.panel4d._data.is_consolidated())
+
+ panel4d = self.panel4d.consolidate()
+ self.assert_(panel4d._data.is_consolidated())
+
+ def test_ctor_dict(self):
+ l1 = self.panel4d['l1']
+ l2 = self.panel4d['l2']
+
+ d = {'A' : l1, 'B' : l2.ix[['ItemB'],:,:] }
+ #d2 = {'A' : itema._series, 'B' : itemb[5:]._series}
+ #d3 = {'A' : DataFrame(itema._series),
+ # 'B' : DataFrame(itemb[5:]._series)}
+
+ panel4d = Panel4D(d)
+ #wp2 = Panel.from_dict(d2) # nested Dict
+ #wp3 = Panel.from_dict(d3)
+ #self.assert_(wp.major_axis.equals(self.panel.major_axis))
+ assert_panel_equal(panel4d['A'], self.panel4d['l1'])
+ assert_frame_equal(panel4d.ix['B','ItemB',:,:], self.panel4d.ix['l2',['ItemB'],:,:]['ItemB'])
+
+ # intersect
+ #wp = Panel.from_dict(d, intersect=True)
+ #self.assert_(wp.major_axis.equals(itemb.index[5:]))
+
+ # use constructor
+ #assert_panel_equal(Panel(d), Panel.from_dict(d))
+ #assert_panel_equal(Panel(d2), Panel.from_dict(d2))
+ #assert_panel_equal(Panel(d3), Panel.from_dict(d3))
+
+ # cast
+ #dcasted = dict((k, v.reindex(wp.major_axis).fillna(0))
+ # for k, v in d.iteritems())
+ #result = Panel(dcasted, dtype=int)
+ #expected = Panel(dict((k, v.astype(int))
+ # for k, v in dcasted.iteritems()))
+ #assert_panel_equal(result, expected)
+
+ def test_constructor_dict_mixed(self):
+ data = dict((k, v.values) for k, v in self.panel4d.iterkv())
+ result = Panel4D(data)
+ exp_major = Index(np.arange(len(self.panel4d.major_axis)))
+ self.assert_(result.major_axis.equals(exp_major))
+
+ result = Panel4D(data,
+ labels = self.panel4d.labels,
+ items = self.panel4d.items,
+ major_axis = self.panel4d.major_axis,
+ minor_axis = self.panel4d.minor_axis)
+ assert_panel4d_equal(result, self.panel4d)
+
+ data['l2'] = self.panel4d['l2']
+ result = Panel4D(data)
+ assert_panel4d_equal(result, self.panel4d)
+
+ # corner, blow up
+ data['l2'] = data['l2']['ItemB']
+ self.assertRaises(Exception, Panel4D, data)
+
+ data['l2'] = self.panel4d['l2'].values[:, :, :-1]
+ self.assertRaises(Exception, Panel4D, data)
+
+ def test_constructor_resize(self):
+ data = self.panel4d._data
+ labels= self.panel4d.labels[:-1]
+ items = self.panel4d.items[:-1]
+ major = self.panel4d.major_axis[:-1]
+ minor = self.panel4d.minor_axis[:-1]
+
+ result = Panel4D(data, labels=labels, items=items, major_axis=major, minor_axis=minor)
+ expected = self.panel4d.reindex(labels=labels, items=items, major=major, minor=minor)
+ assert_panel4d_equal(result, expected)
+
+ result = Panel4D(data, items=items, major_axis=major)
+ expected = self.panel4d.reindex(items=items, major=major)
+ assert_panel4d_equal(result, expected)
+
+ result = Panel4D(data, items=items)
+ expected = self.panel4d.reindex(items=items)
+ assert_panel4d_equal(result, expected)
+
+ result = Panel4D(data, minor_axis=minor)
+ expected = self.panel4d.reindex(minor=minor)
+ assert_panel4d_equal(result, expected)
+
+ def test_from_dict_mixed_orient(self):
+ raise nose.SkipTest
+ # df = tm.makeDataFrame()
+ # df['foo'] = 'bar'
+
+ # data = {'k1' : df,
+ # 'k2' : df}
+
+ # panel = Panel.from_dict(data, orient='minor')
+
+ # self.assert_(panel['foo'].values.dtype == np.object_)
+ # self.assert_(panel['A'].values.dtype == np.float64)
+
+ def test_values(self):
+ self.assertRaises(Exception, Panel, np.random.randn(5, 5, 5),
+ range(5), range(5), range(4))
+
+ def test_conform(self):
+ p = self.panel4d['l1'].filter(items=['ItemA', 'ItemB'])
+ conformed = self.panel4d.conform(p)
+
+ assert(conformed.items.equals(self.panel4d.labels))
+ assert(conformed.major_axis.equals(self.panel4d.major_axis))
+ assert(conformed.minor_axis.equals(self.panel4d.minor_axis))
+
+ def test_reindex(self):
+ ref = self.panel4d['l2']
+
+ # labels
+ result = self.panel4d.reindex(labels=['l1','l2'])
+ assert_panel_equal(result['l2'], ref)
+
+ # items
+ result = self.panel4d.reindex(items=['ItemA', 'ItemB'])
+ assert_frame_equal(result['l2']['ItemB'], ref['ItemB'])
+
+ # major
+ new_major = list(self.panel4d.major_axis[:10])
+ result = self.panel4d.reindex(major=new_major)
+ assert_frame_equal(result['l2']['ItemB'], ref['ItemB'].reindex(index=new_major))
+
+ # raise exception put both major and major_axis
+ self.assertRaises(Exception, self.panel4d.reindex,
+ major_axis=new_major, major=new_major)
+
+ # minor
+ new_minor = list(self.panel4d.minor_axis[:2])
+ result = self.panel4d.reindex(minor=new_minor)
+ assert_frame_equal(result['l2']['ItemB'], ref['ItemB'].reindex(columns=new_minor))
+
+ result = self.panel4d.reindex(labels=self.panel4d.labels,
+ items =self.panel4d.items,
+ major =self.panel4d.major_axis,
+ minor =self.panel4d.minor_axis)
+
+ assert(result.labels is self.panel4d.labels)
+ assert(result.items is self.panel4d.items)
+ assert(result.major_axis is self.panel4d.major_axis)
+ assert(result.minor_axis is self.panel4d.minor_axis)
+
+ self.assertRaises(Exception, self.panel4d.reindex)
+
+ # with filling
+ smaller_major = self.panel4d.major_axis[::5]
+ smaller = self.panel4d.reindex(major=smaller_major)
+
+ larger = smaller.reindex(major=self.panel4d.major_axis,
+ method='pad')
+
+ assert_panel_equal(larger.ix[:,:,self.panel4d.major_axis[1],:],
+ smaller.ix[:,:,smaller_major[0],:])
+
+ # don't necessarily copy
+ result = self.panel4d.reindex(major=self.panel4d.major_axis, copy=False)
+ self.assert_(result is self.panel4d)
+
+ def test_reindex_like(self):
+ # reindex_like
+ smaller = self.panel4d.reindex(labels=self.panel4d.labels[:-1],
+ items =self.panel4d.items[:-1],
+ major =self.panel4d.major_axis[:-1],
+ minor =self.panel4d.minor_axis[:-1])
+ smaller_like = self.panel4d.reindex_like(smaller)
+ assert_panel4d_equal(smaller, smaller_like)
+
+ def test_take(self):
+ raise nose.SkipTest
+
+ # # axis == 0
+ # result = self.panel.take([2, 0, 1], axis=0)
+ # expected = self.panel.reindex(items=['ItemC', 'ItemA', 'ItemB'])
+ # assert_panel_equal(result, expected)#
+
+ # # axis >= 1
+ # result = self.panel.take([3, 0, 1, 2], axis=2)
+ # expected = self.panel.reindex(minor=['D', 'A', 'B', 'C'])
+ # assert_panel_equal(result, expected)
+
+ # self.assertRaises(Exception, self.panel.take, [3, -1, 1, 2], axis=2)
+ # self.assertRaises(Exception, self.panel.take, [4, 0, 1, 2], axis=2)
+
+ def test_sort_index(self):
+ import random
+
+ rlabels= list(self.panel4d.labels)
+ ritems = list(self.panel4d.items)
+ rmajor = list(self.panel4d.major_axis)
+ rminor = list(self.panel4d.minor_axis)
+ random.shuffle(rlabels)
+ random.shuffle(ritems)
+ random.shuffle(rmajor)
+ random.shuffle(rminor)
+
+ random_order = self.panel4d.reindex(labels=rlabels)
+ sorted_panel4d = random_order.sort_index(axis=0)
+ assert_panel4d_equal(sorted_panel4d, self.panel4d)
+
+ # descending
+ #random_order = self.panel.reindex(items=ritems)
+ #sorted_panel = random_order.sort_index(axis=0, ascending=False)
+ #assert_panel_equal(sorted_panel,
+ # self.panel.reindex(items=self.panel.items[::-1]))
+
+ #random_order = self.panel.reindex(major=rmajor)
+ #sorted_panel = random_order.sort_index(axis=1)
+ #assert_panel_equal(sorted_panel, self.panel)
+
+ #random_order = self.panel.reindex(minor=rminor)
+ #sorted_panel = random_order.sort_index(axis=2)
+ #assert_panel_equal(sorted_panel, self.panel)
+
+ def test_fillna(self):
+ filled = self.panel4d.fillna(0)
+ self.assert_(np.isfinite(filled.values).all())
+
+ filled = self.panel4d.fillna(method='backfill')
+ assert_panel_equal(filled['l1'],
+ self.panel4d['l1'].fillna(method='backfill'))
+
+ panel4d = self.panel4d.copy()
+ panel4d['str'] = 'foo'
+
+ filled = panel4d.fillna(method='backfill')
+ assert_panel_equal(filled['l1'],
+ panel4d['l1'].fillna(method='backfill'))
+
+ empty = self.panel4d.reindex(labels=[])
+ filled = empty.fillna(0)
+ assert_panel4d_equal(filled, empty)
+
+ def test_swapaxes(self):
+ result = self.panel4d.swapaxes('labels','items')
+ self.assert_(result.items is self.panel4d.labels)
+
+ result = self.panel4d.swapaxes('labels','minor')
+ self.assert_(result.labels is self.panel4d.minor_axis)
+
+ result = self.panel4d.swapaxes('items', 'minor')
+ self.assert_(result.items is self.panel4d.minor_axis)
+
+ result = self.panel4d.swapaxes('items', 'major')
+ self.assert_(result.items is self.panel4d.major_axis)
+
+ result = self.panel4d.swapaxes('major', 'minor')
+ self.assert_(result.major_axis is self.panel4d.minor_axis)
+
+ # this should also work
+ result = self.panel4d.swapaxes(0, 1)
+ self.assert_(result.labels is self.panel4d.items)
+
+ # this should also work
+ self.assertRaises(Exception, self.panel4d.swapaxes, 'items', 'items')
+
+ def test_to_frame(self):
+ raise nose.SkipTest
+ # # filtered
+ # filtered = self.panel.to_frame()
+ # expected = self.panel.to_frame().dropna(how='any')
+ # assert_frame_equal(filtered, expected)
+
+ # # unfiltered
+ # unfiltered = self.panel.to_frame(filter_observations=False)
+ # assert_panel_equal(unfiltered.to_panel(), self.panel)
+
+ # # names
+ # self.assertEqual(unfiltered.index.names, ['major', 'minor'])
+
+ def test_to_frame_mixed(self):
+ raise nose.SkipTest
+ # panel = self.panel.fillna(0)
+ # panel['str'] = 'foo'
+ # panel['bool'] = panel['ItemA'] > 0
+
+ # lp = panel.to_frame()
+ # wp = lp.to_panel()
+ # self.assertEqual(wp['bool'].values.dtype, np.bool_)
+ # assert_frame_equal(wp['bool'], panel['bool'])
+
+ def test_filter(self):
+ pass
+
+ def test_apply(self):
+ pass
+
+ def test_compound(self):
+ raise nose.SkipTest
+ # compounded = self.panel.compound()
+
+ # assert_series_equal(compounded['ItemA'],
+ # (1 + self.panel['ItemA']).product(0) - 1)
+
+ def test_shift(self):
+ raise nose.SkipTest
+ # # major
+ # idx = self.panel.major_axis[0]
+ # idx_lag = self.panel.major_axis[1]
+
+ # shifted = self.panel.shift(1)
+
+ # assert_frame_equal(self.panel.major_xs(idx),
+ # shifted.major_xs(idx_lag))
+
+ # # minor
+ # idx = self.panel.minor_axis[0]
+ # idx_lag = self.panel.minor_axis[1]
+
+ # shifted = self.panel.shift(1, axis='minor')
+
+ # assert_frame_equal(self.panel.minor_xs(idx),
+ # shifted.minor_xs(idx_lag))
+
+ # self.assertRaises(Exception, self.panel.shift, 1, axis='items')
+
+ def test_multiindex_get(self):
+ raise nose.SkipTest
+ # ind = MultiIndex.from_tuples([('a', 1), ('a', 2), ('b', 1), ('b',2)],
+ # names=['first', 'second'])
+ # wp = Panel(np.random.random((4,5,5)),
+ # items=ind,
+ # major_axis=np.arange(5),
+ # minor_axis=np.arange(5))
+ # f1 = wp['a']
+ # f2 = wp.ix['a']
+ # assert_panel_equal(f1, f2)
+
+ # self.assert_((f1.items == [1, 2]).all())
+ # self.assert_((f2.items == [1, 2]).all())
+
+ # ind = MultiIndex.from_tuples([('a', 1), ('a', 2), ('b', 1)],
+ # names=['first', 'second'])
+
+ def test_multiindex_blocks(self):
+ raise nose.SkipTest
+ # ind = MultiIndex.from_tuples([('a', 1), ('a', 2), ('b', 1)],
+ # names=['first', 'second'])
+ # wp = Panel(self.panel._data)
+ # wp.items = ind
+ # f1 = wp['a']
+ # self.assert_((f1.items == [1, 2]).all())
+
+ # f1 = wp[('b',1)]
+ # self.assert_((f1.columns == ['A', 'B', 'C', 'D']).all())
+
+ def test_repr_empty(self):
+ empty = Panel4D()
+ repr(empty)
+
+ def test_rename(self):
+ mapper = {
+ 'l1' : 'foo',
+ 'l2' : 'bar',
+ 'l3' : 'baz'
+ }
+
+ renamed = self.panel4d.rename_axis(mapper, axis=0)
+ exp = Index(['foo', 'bar', 'baz'])
+ self.assert_(renamed.labels.equals(exp))
+
+ renamed = self.panel4d.rename_axis(str.lower, axis=3)
+ exp = Index(['a', 'b', 'c', 'd'])
+ self.assert_(renamed.minor_axis.equals(exp))
+
+ # don't copy
+ renamed_nocopy = self.panel4d.rename_axis(mapper, axis=0, copy=False)
+ renamed_nocopy['foo'] = 3.
+ self.assert_((self.panel4d['l1'].values == 3).all())
+
+ def test_get_attr(self):
+ assert_panel_equal(self.panel4d['l1'], self.panel4d.l1)
+
+ def test_group_agg(self):
+ values = np.ones((10, 2)) * np.arange(10).reshape((10, 1))
+ bounds = np.arange(5) * 2
+ f = lambda x: x.mean(axis=0)
+
+ agged = group_agg(values, bounds, f)
+
+ assert(agged[1][0] == 2.5)
+ assert(agged[2][0] == 4.5)
+
+ # test a function that doesn't aggregate
+ f2 = lambda x: np.zeros((2,2))
+ self.assertRaises(Exception, group_agg, values, bounds, f2)
+
+ def test_from_frame_level1_unsorted(self):
+ raise nose.SkipTest
+ # tuples = [('MSFT', 3), ('MSFT', 2), ('AAPL', 2),
+ # ('AAPL', 1), ('MSFT', 1)]
+ # midx = MultiIndex.from_tuples(tuples)
+ # df = DataFrame(np.random.rand(5,4), index=midx)
+ # p = df.to_panel()
+ # assert_frame_equal(p.minor_xs(2), df.ix[:,2].sort_index())
+
+ def test_to_excel(self):
+ raise nose.SkipTest
+ # try:
+ # import xlwt
+ # import xlrd
+ # import openpyxl
+ # except ImportError:
+ # raise nose.SkipTest
+
+ # for ext in ['xls', 'xlsx']:
+ # path = '__tmp__.' + ext
+ # self.panel.to_excel(path)
+ # reader = ExcelFile(path)
+ # for item, df in self.panel.iteritems():
+ # recdf = reader.parse(str(item),index_col=0)
+ # assert_frame_equal(df, recdf)
+ # os.remove(path)
+
+
+if __name__ == '__main__':
+ import nose
+ nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ exit=False)
diff --git a/pandas/tests/test_panelnd.py b/pandas/tests/test_panelnd.py
new file mode 100644
index 0000000000000..0d8a8c2023014
--- /dev/null
+++ b/pandas/tests/test_panelnd.py
@@ -0,0 +1,75 @@
+from datetime import datetime
+import os
+import operator
+import unittest
+import nose
+
+import numpy as np
+
+from pandas.core import panelnd
+from pandas.core.panel import Panel
+import pandas.core.common as com
+from pandas.util import py3compat
+
+from pandas.util.testing import (assert_panel_equal,
+ assert_panel4d_equal,
+ assert_frame_equal,
+ assert_series_equal,
+ assert_almost_equal)
+import pandas.util.testing as tm
+
+class TestPanelnd(unittest.TestCase):
+
+ def setUp(self):
+ pass
+
+ def test_4d_construction(self):
+
+ # create a 4D
+ Panel4D = panelnd.create_nd_panel_factory(
+ klass_name = 'Panel4D',
+ axis_orders = ['labels','items','major_axis','minor_axis'],
+ axis_slices = { 'items' : 'items', 'major_axis' : 'major_axis', 'minor_axis' : 'minor_axis' },
+ slicer = Panel,
+ axis_aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
+ stat_axis = 2)
+
+ p4d = Panel4D(dict(L1 = tm.makePanel(), L2 = tm.makePanel()))
+
+ def test_5d_construction(self):
+
+ # create a 4D
+ Panel4D = panelnd.create_nd_panel_factory(
+ klass_name = 'Panel4D',
+ axis_orders = ['labels1','items','major_axis','minor_axis'],
+ axis_slices = { 'items' : 'items', 'major_axis' : 'major_axis', 'minor_axis' : 'minor_axis' },
+ slicer = Panel,
+ axis_aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
+ stat_axis = 2)
+
+ p4d = Panel4D(dict(L1 = tm.makePanel(), L2 = tm.makePanel()))
+
+ # create a 5D
+ Panel5D = panelnd.create_nd_panel_factory(
+ klass_name = 'Panel5D',
+ axis_orders = [ 'cool1', 'labels1','items','major_axis','minor_axis'],
+ axis_slices = { 'labels1' : 'labels1', 'items' : 'items', 'major_axis' : 'major_axis', 'minor_axis' : 'minor_axis' },
+ slicer = Panel4D,
+ axis_aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
+ stat_axis = 2)
+
+ p5d = Panel5D(dict(C1 = p4d))
+
+ # slice back to 4d
+ results = p5d.ix['C1',:,:,0:3,:]
+ expected = p4d.ix[:,:,0:3,:]
+ assert_panel_equal(results['L1'], expected['L1'])
+
+ # test a transpose
+ #results = p5d.transpose(1,2,3,4,0)
+ #expected =
+
+if __name__ == '__main__':
+ import nose
+ nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+ exit=False)
diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py
old mode 100644
new mode 100755
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 3ee53a8c1b5da..aa692f4844c49 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -19,6 +19,7 @@
import pandas.core.series as series
import pandas.core.frame as frame
import pandas.core.panel as panel
+import pandas.core.panel4d as panel4d
from pandas import bdate_range
from pandas.tseries.index import DatetimeIndex
@@ -29,6 +30,7 @@
Series = series.Series
DataFrame = frame.DataFrame
Panel = panel.Panel
+Panel4D = panel4d.Panel4D
N = 30
K = 4
@@ -198,6 +200,18 @@ def assert_panel_equal(left, right, check_panel_type=False):
for col in right:
assert(col in left)
+def assert_panel4d_equal(left, right):
+ assert(left.labels.equals(right.labels))
+ assert(left.items.equals(right.items))
+ assert(left.major_axis.equals(right.major_axis))
+ assert(left.minor_axis.equals(right.minor_axis))
+
+ for col, series in left.iterkv():
+ assert(col in right)
+ assert_panel_equal(series, right[col])
+
+ for col in right:
+ assert(col in left)
def assert_contains_all(iterable, dic):
for k in iterable:
@@ -316,6 +330,8 @@ def makePanel():
data = dict((c, makeTimeDataFrame()) for c in cols)
return Panel.fromDict(data)
+def makePanel4D():
+ return Panel4D(dict(l1 = makePanel(), l2 = makePanel(), l3 = makePanel()))
def add_nans(panel):
I, J, N = panel.shape
@@ -324,6 +340,10 @@ def add_nans(panel):
for j, col in enumerate(dm.columns):
dm[col][:i + j] = np.NaN
+def add_nans_panel4d(panel4d):
+ for l, label in enumerate(panel4d.labels):
+ panel = panel4d[label]
+ add_nans(panel)
class TestSubDict(dict):
def __init__(self, *args, **kwargs):
| ## Panel4D
(note: this supersedes the prior branch, FDPanel, and is 0.9rc1 compatible)
Panel4D is like a Panel object, but provides 4 dimensions
labels, items, major_axis, minor_axis
instead of using a dict of Panels to hold data, Panel4D provides a convenient representation in pandas space with named dimensions to allow easy axis swapping and slicing
# testing
tests/test_panel4d.py provides a similar methodology to test_panel.py
Panel4D required an overhaul of many methods in panel.py and one change in core/index.py (regarding multi-indexing)
almost all Panel methods are extended to Panel4D (with the exception that Panel
now allows a multi-axis on axis 0)
docstrings need to be refreshed a bit and made a bit more general
all tests that are not skipped pass (tested with 0.9rc1)
join is a work in progress
# further
panelnd.py provides a factory function for creation of generic panel-like ND structures with custom named dimensions
(this works, but not fully tested - examples are in the docstring)
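A minimal sketch of the named-dimension idea behind Panel4D, using plain numpy. The `AXES` list and `swap_axes` helper below are illustrative only, not the pandas API; they just show how fixed axis names map to positions for swapping:

```python
import numpy as np

# Panel4D's four named dimensions, in order.
AXES = ['labels', 'items', 'major_axis', 'minor_axis']

def swap_axes(values, name_a, name_b):
    """Swap two named axes of a 4-D block, the way Panel4D.swapaxes does by name."""
    i, j = AXES.index(name_a), AXES.index(name_b)
    return np.swapaxes(values, i, j)

data = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)
swapped = swap_axes(data, 'labels', 'minor_axis')
print(swapped.shape)  # (5, 3, 4, 2)
```

The named lookup is the whole convenience: callers say `'labels'`/`'minor_axis'` rather than remembering positional axis numbers.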
| https://api.github.com/repos/pandas-dev/pandas/pulls/2242 | 2012-11-14T04:29:16Z | 2012-12-02T23:03:23Z | 2012-12-02T23:03:23Z | 2014-06-12T10:20:37Z |
Ignore terminal width when not running in an interactive shell | diff --git a/pandas/core/common.py b/pandas/core/common.py
index f9af37872d1b7..936f1d6357e4e 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -1073,6 +1073,14 @@ def _concat_compat(to_concat, axis=0):
else:
return np.concatenate(to_concat, axis=axis)
+def in_interactive_session():
+ """ check if we're running in an interactive shell
+
+ returns True if running under python/ipython interactive shell
+ """
+ import __main__ as main
+ return not hasattr(main, '__file__')
+
# Unicode consolidation
# ---------------------
#
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 4b826b98f4d18..2d15050b33906 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -581,13 +581,15 @@ def _need_info_repr_(self):
else:
# save us
if (len(self.index) > max_rows or
- len(self.columns) > terminal_width // 2):
+ (com.in_interactive_session() and
+ len(self.columns) > terminal_width // 2)):
return True
else:
buf = StringIO()
self.to_string(buf=buf)
value = buf.getvalue()
- if max([len(l) for l in value.split('\n')]) > terminal_width:
+ if (max([len(l) for l in value.split('\n')]) > terminal_width and
+ com.in_interactive_session()):
return True
else:
return False
Shell use with python/ipython is unaffected, but redirecting to a file
gives you all the columns unless print_config.max_columns is set.
#1610.
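The check itself is tiny; the `in_interactive_session` helper added in core/common.py is equivalent to:

```python
import __main__

def in_interactive_session():
    # Interactive python/ipython shells have no __file__ on __main__;
    # a script being run (whose output might be redirected) does.
    return not hasattr(__main__, '__file__')

print(in_interactive_session())
```

So the width heuristics in `_need_info_repr_` only kick in when a human is actually looking at a terminal.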
| https://api.github.com/repos/pandas-dev/pandas/pulls/2241 | 2012-11-14T00:29:59Z | 2012-11-14T17:29:58Z | null | 2014-07-04T07:51:38Z |
BUG: Incorrect error message due to zero based levels. #2226 | diff --git a/pandas/core/index.py b/pandas/core/index.py
index 1e4d6347aaeec..9638da8f418cf 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -1364,9 +1364,10 @@ def _get_level_number(self, level):
raise Exception('Level %s not found' % str(level))
elif level < 0:
level += self.nlevels
+ # Note: levels are zero-based
elif level >= self.nlevels:
raise ValueError('Index has only %d levels, not %d'
- % (self.nlevels, level))
+ % (self.nlevels, level + 1))
return level
_tuples = None
Simple fix to an error message that assumed levels start at 1 instead of 0.
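For illustration, MultiIndex levels are zero-based, so a two-level index has levels 0 and 1 and rejects level 2. A quick sketch using the current public pandas API (whose exact error wording may differ from the 0.9-era message fixed here):

```python
import pandas as pd

# A two-level MultiIndex: valid level numbers are 0 and 1.
midx = pd.MultiIndex.from_tuples([('a', 1), ('a', 2), ('b', 1)])
print(midx.nlevels)                    # 2
print(list(midx.get_level_values(1)))  # [1, 2, 1]

# Asking for level 2 on a two-level index raises.
raised = False
try:
    midx.get_level_values(2)
except Exception:
    raised = True
print(raised)  # True
```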
| https://api.github.com/repos/pandas-dev/pandas/pulls/2231 | 2012-11-12T00:44:52Z | 2012-11-12T04:37:37Z | 2012-11-12T04:37:37Z | 2012-11-12T04:37:41Z |
use boolean indexing via getitem to trigger masking; add inplace keyword to where | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
old mode 100644
new mode 100755
index 31c1a09f409c3..c9184f148e5a9
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1775,9 +1775,8 @@ def __getitem__(self, key):
elif isinstance(self.columns, MultiIndex):
return self._getitem_multilevel(key)
elif isinstance(key, DataFrame):
- values = key.values
- if values.dtype == bool:
- return self.values[values]
+ if key.values.dtype == bool:
+ return self.where(key)
else:
raise ValueError('Cannot index using non-boolean DataFrame')
else:
@@ -1871,11 +1870,6 @@ def __setitem__(self, key, value):
# support boolean setting with DataFrame input, e.g.
# df[df > df2] = 0
if isinstance(key, DataFrame):
- if not (key.index.equals(self.index) and
- key.columns.equals(self.columns)):
- raise PandasError('Can only index with like-indexed '
- 'DataFrame objects')
-
self._boolean_set(key, value)
elif isinstance(key, (np.ndarray, list)):
return self._set_item_multiple(key, value)
@@ -1884,18 +1878,13 @@ def __setitem__(self, key, value):
self._set_item(key, value)
def _boolean_set(self, key, value):
- mask = key.values
- if mask.dtype != np.bool_:
+ if key.values.dtype != np.bool_:
raise ValueError('Must pass DataFrame with boolean values only')
if self._is_mixed_type:
raise ValueError('Cannot do boolean setting on mixed-type frame')
- if isinstance(value, DataFrame):
- assert(value._indexed_same(self))
- np.putmask(self.values, mask, value.values)
- else:
- self.values[mask] = value
+ self.where(key, value, inplace=True)
def _set_item_multiple(self, keys, value):
if isinstance(value, DataFrame):
@@ -4878,7 +4867,7 @@ def combineMult(self, other):
"""
return self.mul(other, fill_value=1.)
- def where(self, cond, other):
+ def where(self, cond, other=NA, inplace=False):
"""
Return a DataFrame with the same shape as self and whose corresponding
entries are from self where cond is True and otherwise are from other.
@@ -4893,6 +4882,9 @@ def where(self, cond, other):
-------
wh: DataFrame
"""
+ if not hasattr(cond,'shape'):
+ raise ValueError('where requires an ndarray like object for its condition')
+
if isinstance(cond, np.ndarray):
if cond.shape != self.shape:
raise ValueError('Array onditional must be same shape as self')
@@ -4905,13 +4897,17 @@ def where(self, cond, other):
if isinstance(other, DataFrame):
_, other = self.align(other, join='left', fill_value=NA)
+ if inplace:
+ np.putmask(self.values, cond, other)
+ return self
+
rs = np.where(cond, self, other)
return self._constructor(rs, self.index, self.columns)
-
+
def mask(self, cond):
"""
Returns copy of self whose values are replaced with nan if the
- corresponding entry in cond is False
+ inverted condition is True
Parameters
----------
@@ -4921,7 +4917,7 @@ def mask(self, cond):
-------
wh: DataFrame
"""
- return self.where(cond, NA)
+ return self.where(~cond, NA)
_EMPTY_SERIES = Series([])
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
old mode 100644
new mode 100755
index 0b36e8d39a00a..dcc7bcb909cd4
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -141,6 +141,12 @@ def test_getitem_boolean(self):
self.assertRaises(ValueError, self.tsframe.__getitem__, self.tsframe)
+ # test df[df >0] works
+ bif = self.tsframe[self.tsframe > 0]
+ bifw = DataFrame(np.where(self.tsframe>0,self.tsframe,np.nan),index=self.tsframe.index,columns=self.tsframe.columns)
+ self.assert_(isinstance(bif,DataFrame))
+ self.assert_(bif.shape == self.tsframe.shape)
+ assert_frame_equal(bif,bifw)
def test_getitem_boolean_list(self):
df = DataFrame(np.arange(12).reshape(3,4))
@@ -278,7 +284,11 @@ def test_setitem_boolean(self):
values[values == 5] = 0
assert_almost_equal(df.values, values)
- self.assertRaises(Exception, df.__setitem__, df[:-1] > 0, 2)
+ # a df that needs alignment first
+ df[df[:-1]<0] = 2
+ np.putmask(values[:-1],values[:-1]<0,2)
+ assert_almost_equal(df.values, values)
+
self.assertRaises(Exception, df.__setitem__, df * 0, 2)
# index with DataFrame
@@ -5204,14 +5214,24 @@ def test_where(self):
for k, v in rs.iteritems():
assert_series_equal(v, np.where(cond[k], df[k], other5))
- assert_frame_equal(rs, df.mask(cond))
-
err1 = (df + 1).values[0:2, :]
self.assertRaises(ValueError, df.where, cond, err1)
err2 = cond.ix[:2, :].values
self.assertRaises(ValueError, df.where, err2, other1)
+ # invalid conditions
+ self.assertRaises(ValueError, df.mask, True)
+ self.assertRaises(ValueError, df.mask, 0)
+
+ def test_mask(self):
+ df = DataFrame(np.random.randn(5, 3))
+ cond = df > 0
+
+ rs = df.where(cond, np.nan)
+ assert_frame_equal(rs, df.mask(df <= 0))
+ assert_frame_equal(rs, df.mask(~cond))
+
#----------------------------------------------------------------------
# Transposing
| in core/frame.py
changed method _getitem_ to use _mask_ directly (e.g. df.mask(df > 0) is equivalent semantically to df[df>0])
this would be a small API change as before df[df >0] returned a boolean np array
added inplace keyword to _where_ method (to update the dataframe in place, default is NOT to use inplace, and return a new dataframe)
changed method _boolean_set_ to use where and inplace=True (this allows alignment of the passed values and is slightly less strict than the current method)
all tests pass (as well as an added test in boolean frame indexing)
if included in 0.9.1 would be great (sorry for the late addition)
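A quick illustrative sketch (not taken from the PR's test suite) of the equivalence described above, using the post-change semantics in a modern pandas spelling:

```python
import numpy as np
import pandas as pd

# after this change, boolean-frame indexing routes through where/mask,
# so all three spellings below should produce the same NaN-masked frame
df = pd.DataFrame({"A": [1.1, -3.3], "B": [2.5, -3.9]})
cond = df > 0

masked = df[cond]            # boolean frame indexing
via_where = df.where(cond)   # keep values where cond is True, else NaN
via_mask = df.mask(~cond)    # mask() hides entries where its argument is True

assert masked.equals(via_where)
assert masked.equals(via_mask)
```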
| https://api.github.com/repos/pandas-dev/pandas/pulls/2230 | 2012-11-11T21:19:09Z | 2012-11-13T22:52:23Z | 2012-11-13T22:52:23Z | 2014-06-12T19:22:25Z |
Unicode : change df.to_string() and friends to always return unicode objects | diff --git a/pandas/core/format.py b/pandas/core/format.py
index 13e504a8e1f88..f2999c63db38e 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -36,7 +36,7 @@
string representation of NAN to use, default 'NaN'
formatters : list or dict of one-parameter functions, optional
formatter functions to apply to columns' elements by position or name,
- default None
+ default None, if the result is a string , it must be a unicode string.
float_format : one-parameter function, optional
formatter function to apply to columns' elements if they are floats
default None
@@ -62,7 +62,7 @@ class SeriesFormatter(object):
def __init__(self, series, buf=None, header=True, length=True,
na_rep='NaN', name=False, float_format=None):
self.series = series
- self.buf = buf if buf is not None else StringIO()
+ self.buf = buf if buf is not None else StringIO(u"")
self.name = name
self.na_rep = na_rep
self.length = length
@@ -112,7 +112,7 @@ def to_string(self):
series = self.series
if len(series) == 0:
- return ''
+ return u''
fmt_index, have_header = self._get_formatted_index()
fmt_values = self._get_formatted_values()
@@ -135,9 +135,7 @@ def to_string(self):
if footer:
result.append(footer)
- if py3compat.PY3:
- return unicode(u'\n'.join(result))
- return com.console_encode(u'\n'.join(result))
+ return unicode(u'\n'.join(result))
if py3compat.PY3: # pragma: no cover
_encode_diff = lambda x: 0
@@ -200,10 +198,15 @@ def __init__(self, frame, buf=None, columns=None, col_space=None,
else:
self.columns = frame.columns
- def _to_str_columns(self, force_unicode=False):
+ def _to_str_columns(self, force_unicode=None):
"""
Render a DataFrame to a list of columns (as lists of strings).
"""
+ import warnings
+ if force_unicode is not None: # pragma: no cover
+ warnings.warn("force_unicode is deprecated, it will have no effect",
+ FutureWarning)
+
# may include levels names also
str_index = self._get_formatted_index()
str_columns = self._get_formatted_column_labels()
@@ -237,32 +240,17 @@ def _to_str_columns(self, force_unicode=False):
if self.index:
strcols.insert(0, str_index)
- if not py3compat.PY3:
- if force_unicode:
- def make_unicode(x):
- if isinstance(x, unicode):
- return x
- return x.decode('utf-8')
- strcols = map(lambda col: map(make_unicode, col), strcols)
- else:
- # Generally everything is plain strings, which has ascii
- # encoding. Problem is when there is a char with value over
- # 127. Everything then gets converted to unicode.
- try:
- map(lambda col: map(str, col), strcols)
- except UnicodeError:
- def make_unicode(x):
- if isinstance(x, unicode):
- return x
- return x.decode('utf-8')
- strcols = map(lambda col: map(make_unicode, col), strcols)
-
return strcols
- def to_string(self, force_unicode=False):
+ def to_string(self, force_unicode=None):
"""
Render a DataFrame to a console-friendly tabular output.
"""
+ import warnings
+ if force_unicode is not None: # pragma: no cover
+ warnings.warn("force_unicode is deprecated, it will have no effect",
+ FutureWarning)
+
frame = self.frame
if len(frame.columns) == 0 or len(frame.index) == 0:
@@ -272,15 +260,20 @@ def to_string(self, force_unicode=False):
com.pprint_thing(frame.index)))
text = info_line
else:
- strcols = self._to_str_columns(force_unicode)
+ strcols = self._to_str_columns()
text = adjoin(1, *strcols)
self.buf.writelines(text)
- def to_latex(self, force_unicode=False, column_format=None):
+ def to_latex(self, force_unicode=None, column_format=None):
"""
Render a DataFrame to a LaTeX tabular environment output.
"""
+ import warnings
+ if force_unicode is not None: # pragma: no cover
+ warnings.warn("force_unicode is deprecated, it will have no effect",
+ FutureWarning)
+
frame = self.frame
if len(frame.columns) == 0 or len(frame.index) == 0:
@@ -289,7 +282,7 @@ def to_latex(self, force_unicode=False, column_format=None):
frame.columns, frame.index))
strcols = [[info_line]]
else:
- strcols = self._to_str_columns(force_unicode)
+ strcols = self._to_str_columns()
if column_format is None:
column_format = '|l|%s|' % '|'.join('c' for _ in strcols)
@@ -726,18 +719,10 @@ def __init__(self, values, digits=7, formatter=None, na_rep='NaN',
self.justify = justify
def get_result(self):
- if self._have_unicode():
- fmt_values = self._format_strings(use_unicode=True)
- else:
- fmt_values = self._format_strings(use_unicode=False)
-
+ fmt_values = self._format_strings()
return _make_fixed_width(fmt_values, self.justify)
- def _have_unicode(self):
- mask = lib.map_infer(self.values, lambda x: isinstance(x, unicode))
- return mask.any()
-
- def _format_strings(self, use_unicode=False):
+ def _format_strings(self):
if self.float_format is None:
float_format = print_config.float_format
if float_format is None:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index f7f296e822e15..a160c994e94a9 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -612,20 +612,51 @@ def _need_info_repr_(self):
else:
return False
- def __repr__(self):
+ def __str__(self):
+ """
+ Return a string representation for a particular DataFrame
+
+ Invoked by str(df) in both py2/py3.
+ Yields Bytestring in Py2, Unicode String in py3.
+ """
+
+ if py3compat.PY3:
+ return self.__unicode__()
+ return self.__bytes__()
+
+ def __bytes__(self):
"""
Return a string representation for a particular DataFrame
+
+ Invoked by bytes(df) in py3 only.
+ Yields a bytestring in both py2/py3.
+ """
+ return com.console_encode(self.__unicode__())
+
+ def __unicode__(self):
+ """
+ Return a string representation for a particular DataFrame
+
+ Invoked by unicode(df) in py2 only. Yields a Unicode String in both py2/py3.
"""
- buf = StringIO()
+ buf = StringIO(u"")
if self._need_info_repr_():
self.info(buf=buf, verbose=self._verbose_info)
else:
self.to_string(buf=buf)
+
value = buf.getvalue()
+ assert type(value) == unicode
- if py3compat.PY3:
- return unicode(value)
- return com.console_encode(value)
+ return value
+
+ def __repr__(self):
+ """
+ Return a string representation for a particular DataFrame
+
+ Yields Bytestring in Py2, Unicode String in py3.
+ """
+ return str(self)
def _repr_html_(self):
"""
@@ -1379,19 +1410,21 @@ def to_excel(self, excel_writer, sheet_name='sheet1', na_rep='',
def to_string(self, buf=None, columns=None, col_space=None, colSpace=None,
header=True, index=True, na_rep='NaN', formatters=None,
float_format=None, sparsify=None, nanRep=None,
- index_names=True, justify=None, force_unicode=False):
+ index_names=True, justify=None, force_unicode=None):
"""
Render a DataFrame to a console-friendly tabular output.
"""
+ import warnings
+ if force_unicode is not None: # pragma: no cover
+ warnings.warn("force_unicode is deprecated, it will have no effect",
+ FutureWarning)
if nanRep is not None: # pragma: no cover
- import warnings
warnings.warn("nanRep is deprecated, use na_rep",
FutureWarning)
na_rep = nanRep
if colSpace is not None: # pragma: no cover
- import warnings
warnings.warn("colSpace is deprecated, use col_space",
FutureWarning)
col_space = colSpace
@@ -1404,15 +1437,10 @@ def to_string(self, buf=None, columns=None, col_space=None, colSpace=None,
justify=justify,
index_names=index_names,
header=header, index=index)
- formatter.to_string(force_unicode=force_unicode)
+ formatter.to_string()
if buf is None:
result = formatter.buf.getvalue()
- if not force_unicode:
- try:
- result = str(result)
- except ValueError:
- pass
return result
@Appender(fmt.docstring_to_string, indents=1)
diff --git a/pandas/core/index.py b/pandas/core/index.py
index b7792309f66ff..133449d79d521 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -132,12 +132,48 @@ def __array_finalize__(self, obj):
def _shallow_copy(self):
return self.view()
- def __repr__(self):
+ def __str__(self):
+ """
+ Return a string representation for a particular Index
+
+ Invoked by str(df) in both py2/py3.
+ Yields Bytestring in Py2, Unicode String in py3.
+ """
+
if py3compat.PY3:
- prepr = com.pprint_thing(self)
+ return self.__unicode__()
+ return self.__bytes__()
+
+ def __bytes__(self):
+ """
+ Return a string representation for a particular Index
+
+ Invoked by bytes(df) in py3 only.
+ Yields a bytestring in both py2/py3.
+ """
+ return com.console_encode(self.__unicode__())
+
+ def __unicode__(self):
+ """
+ Return a string representation for a particular Index
+
+ Invoked by unicode(df) in py2 only. Yields a Unicode String in both py2/py3.
+ """
+ if len(self) > 6 and len(self) > np.get_printoptions()['threshold']:
+ data = self[:3].tolist() + ["..."] + self[-3:].tolist()
else:
- prepr = com.pprint_thing_encoded(self)
- return 'Index(%s, dtype=%s)' % (prepr, self.dtype)
+ data = self
+
+ prepr = com.pprint_thing(data)
+ return '%s(%s, dtype=%s)' % (type(self).__name__, prepr, self.dtype)
+
+ def __repr__(self):
+ """
+ Return a string representation for a particular Index
+
+ Yields Bytestring in Py2, Unicode String in py3.
+ """
+ return str(self)
def astype(self, dtype):
return Index(self.values.astype(dtype), name=self.name,
@@ -207,15 +243,6 @@ def summary(self, name=None):
name = type(self).__name__
return '%s: %s entries%s' % (name, len(self), index_summary)
- def __str__(self):
- try:
- return np.array_repr(self.values)
- except UnicodeError:
- converted = u','.join(com.pprint_thing(x) for x in self.values)
- result = u'%s([%s], dtype=''%s'')' % (type(self).__name__, converted,
- str(self.values.dtype))
- return com.console_encode(result)
-
def _mpl_repr(self):
# how to represent ourselves to matplotlib
return self.values
@@ -394,8 +421,8 @@ def format(self, name=False):
result = []
for dt in self:
if dt.time() != zero_time or dt.tzinfo is not None:
- return header + ['%s' % x for x in self]
- result.append('%d-%.2d-%.2d' % (dt.year, dt.month, dt.day))
+ return header + [u'%s' % x for x in self]
+ result.append(u'%d-%.2d-%.2d' % (dt.year, dt.month, dt.day))
return header + result
values = self.values
@@ -1319,7 +1346,33 @@ def _array_values(self):
def dtype(self):
return np.dtype('O')
- def __repr__(self):
+ def __str__(self):
+ """
+ Return a string representation for a particular Index
+
+ Invoked by str(df) in both py2/py3.
+ Yields Bytestring in Py2, Unicode String in py3.
+ """
+
+ if py3compat.PY3:
+ return self.__unicode__()
+ return self.__bytes__()
+
+ def __bytes__(self):
+ """
+ Return a string representation for a particular Index
+
+ Invoked by bytes(df) in py3 only.
+ Yields a bytestring in both py2/py3.
+ """
+ return com.console_encode(self.__unicode__())
+
+ def __unicode__(self):
+ """
+ Return a string representation for a particular Index
+
+ Invoked by unicode(df) in py2 only. Yields a Unicode String in both py2/py3.
+ """
output = 'MultiIndex\n%s'
options = np.get_printoptions()
@@ -1335,10 +1388,15 @@ def __repr__(self):
np.set_printoptions(threshold=options['threshold'])
- if py3compat.PY3:
- return output % summary
- else:
- return com.console_encode(output % summary)
+ return output % summary
+
+ def __repr__(self):
+ """
+ Return a string representation for a particular Index
+
+ Yields Bytestring in Py2, Unicode String in py3.
+ """
+ return str(self)
def __len__(self):
return len(self.labels[0])
@@ -1496,7 +1554,7 @@ def format(self, space=2, sparsify=None, adjoin=True, names=False,
formatted = lev.take(lab).format()
else:
# weird all NA case
- formatted = [str(x) for x in com.take_1d(lev.values, lab)]
+ formatted = [com.pprint_thing(x) for x in com.take_1d(lev.values, lab)]
stringified_levels.append(formatted)
result_levels = []
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 2dca8a2aef801..ae4a5d868b139 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -386,34 +386,70 @@ def __array_wrap__(self, result):
#----------------------------------------------------------------------
# Magic methods
- def __repr__(self):
+ def __str__(self):
+ """
+ Return a string representation for a particular Panel
+
+ Invoked by str(df) in both py2/py3.
+ Yields Bytestring in Py2, Unicode String in py3.
+ """
+
+ if py3compat.PY3:
+ return self.__unicode__()
+ return self.__bytes__()
+
+ def __bytes__(self):
+ """
+ Return a string representation for a particular Panel
+
+ Invoked by bytes(df) in py3 only.
+ Yields a bytestring in both py2/py3.
+ """
+ return com.console_encode(self.__unicode__())
+
+ def __unicode__(self):
+ """
+ Return a string representation for a particular Panel
+
+ Invoked by unicode(df) in py2 only. Yields a Unicode String in both py2/py3.
+ """
+
class_name = str(self.__class__)
I, N, K = len(self.items), len(self.major_axis), len(self.minor_axis)
- dims = 'Dimensions: %d (items) x %d (major) x %d (minor)' % (I, N, K)
+ dims = u'Dimensions: %d (items) x %d (major) x %d (minor)' % (I, N, K)
if len(self.major_axis) > 0:
- major = 'Major axis: %s to %s' % (self.major_axis[0],
+ major = u'Major axis: %s to %s' % (self.major_axis[0],
self.major_axis[-1])
else:
- major = 'Major axis: None'
+ major = u'Major axis: None'
if len(self.minor_axis) > 0:
- minor = 'Minor axis: %s to %s' % (self.minor_axis[0],
- self.minor_axis[-1])
+ minor = u'Minor axis: %s to %s' % (com.pprint_thing(self.minor_axis[0]),
+ com.pprint_thing(self.minor_axis[-1]))
else:
- minor = 'Minor axis: None'
+ minor = u'Minor axis: None'
if len(self.items) > 0:
- items = 'Items: %s to %s' % (self.items[0], self.items[-1])
+ items = u'Items: %s to %s' % (com.pprint_thing(self.items[0]),
+ com.pprint_thing(self.items[-1]))
else:
- items = 'Items: None'
+ items = u'Items: None'
- output = '%s\n%s\n%s\n%s\n%s' % (class_name, dims, items, major, minor)
+ output = u'%s\n%s\n%s\n%s\n%s' % (class_name, dims, items, major, minor)
return output
+ def __repr__(self):
+ """
+ Return a string representation for a particular Panel
+
+ Yields Bytestring in Py2, Unicode String in py3.
+ """
+ return str(self)
+
def __iter__(self):
return iter(self.items)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 3241044a63c68..dc7588847775b 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -858,8 +858,34 @@ def reset_index(self, level=None, drop=False, name=None, inplace=False):
return df.reset_index(level=level, drop=drop)
- def __repr__(self):
- """Clean string representation of a Series"""
+
+ def __str__(self):
+ """
+ Return a string representation for a particular DataFrame
+
+ Invoked by str(df) in both py2/py3.
+ Yields Bytestring in Py2, Unicode String in py3.
+ """
+
+ if py3compat.PY3:
+ return self.__unicode__()
+ return self.__bytes__()
+
+ def __bytes__(self):
+ """
+ Return a string representation for a particular DataFrame
+
+ Invoked by bytes(df) in py3 only.
+ Yields a bytestring in both py2/py3.
+ """
+ return com.console_encode(self.__unicode__())
+
+ def __unicode__(self):
+ """
+ Return a string representation for a particular DataFrame
+
+ Invoked by unicode(df) in py2 only. Yields a Unicode String in both py2/py3.
+ """
width, height = get_terminal_size()
max_rows = (height if fmt.print_config.max_rows == 0
else fmt.print_config.max_rows)
@@ -870,13 +896,24 @@ def __repr__(self):
length=len(self) > 50,
name=True)
else:
- result = '%s' % ndarray.__repr__(self)
+ result = com.pprint_thing(self)
- if py3compat.PY3:
- return unicode(result)
- return com.console_encode(result)
+ assert type(result) == unicode
+ return result
+
+ def __repr__(self):
+ """
+ Return a string representation for a particular Series
+
+ Yields Bytestring in Py2, Unicode String in py3.
+ """
+ return str(self)
def _tidy_repr(self, max_vals=20):
+ """
+
+ Internal function, should always return unicode string
+ """
num = max_vals // 2
head = self[:num]._get_repr(print_header=True, length=False,
name=False)
@@ -884,11 +921,13 @@ def _tidy_repr(self, max_vals=20):
length=False,
name=False)
result = head + '\n...\n' + tail
- return '%s\n%s' % (result, self._repr_footer())
+ result = '%s\n%s' % (result, self._repr_footer())
+
+ return unicode(result)
def _repr_footer(self):
- namestr = "Name: %s, " % com.pprint_thing(self.name) if self.name is not None else ""
- return '%sLength: %d' % (namestr, len(self))
+ namestr = u"Name: %s, " % com.pprint_thing(self.name) if self.name is not None else ""
+ return u'%sLength: %d' % (namestr, len(self))
def to_string(self, buf=None, na_rep='NaN', float_format=None,
nanRep=None, length=False, name=False):
@@ -921,6 +960,9 @@ def to_string(self, buf=None, na_rep='NaN', float_format=None,
the_repr = self._get_repr(float_format=float_format, na_rep=na_rep,
length=length, name=name)
+
+ assert type(the_repr) == unicode
+
if buf is None:
return the_repr
else:
@@ -928,13 +970,17 @@ def to_string(self, buf=None, na_rep='NaN', float_format=None,
def _get_repr(self, name=False, print_header=False, length=True,
na_rep='NaN', float_format=None):
+ """
+
+ Internal function, should always return unicode string
+ """
+
formatter = fmt.SeriesFormatter(self, name=name, header=print_header,
length=length, na_rep=na_rep,
float_format=float_format)
- return formatter.to_string()
-
- def __str__(self):
- return repr(self)
+ result = formatter.to_string()
+ assert type(result) == unicode
+ return result
def __iter__(self):
if np.issubdtype(self.dtype, np.datetime64):
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 10bb75bfbb5b6..0b5182acb7f72 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -135,7 +135,7 @@ def test_to_string_unicode_columns(self):
df.info(buf=buf)
buf.getvalue()
- result = self.frame.to_string(force_unicode=True)
+ result = self.frame.to_string()
self.assert_(isinstance(result, unicode))
def test_to_string_unicode_two(self):
@@ -495,7 +495,6 @@ def test_to_string_int_formatting(self):
self.assert_(issubclass(df['x'].dtype.type, np.integer))
output = df.to_string()
- self.assert_(isinstance(output, str))
expected = (' x\n'
'0 -15\n'
'1 20\n'
@@ -841,16 +840,16 @@ def test_to_string(self):
def test_to_string_mixed(self):
s = Series(['foo', np.nan, -1.23, 4.56])
result = s.to_string()
- expected = ('0 foo\n'
- '1 NaN\n'
- '2 -1.23\n'
- '3 4.56')
+ expected = (u'0 foo\n'
+ u'1 NaN\n'
+ u'2 -1.23\n'
+ u'3 4.56')
self.assertEqual(result, expected)
# but don't count NAs as floats
s = Series(['foo', np.nan, 'bar', 'baz'])
result = s.to_string()
- expected = ('0 foo\n'
+ expected = (u'0 foo\n'
'1 NaN\n'
'2 bar\n'
'3 baz')
@@ -858,7 +857,7 @@ def test_to_string_mixed(self):
s = Series(['foo', 5, 'bar', 'baz'])
result = s.to_string()
- expected = ('0 foo\n'
+ expected = (u'0 foo\n'
'1 5\n'
'2 bar\n'
'3 baz')
@@ -869,7 +868,7 @@ def test_to_string_float_na_spacing(self):
s[::2] = np.nan
result = s.to_string()
- expected = ('0 NaN\n'
+ expected = (u'0 NaN\n'
'1 1.5678\n'
'2 NaN\n'
'3 -3.0000\n'
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index fea84f5a86e36..4eb1be94e0846 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -27,6 +27,7 @@
from pandas.util.testing import (assert_almost_equal,
assert_series_equal,
assert_frame_equal)
+from pandas.util import py3compat
import pandas.util.testing as tm
import pandas.lib as lib
@@ -2916,6 +2917,21 @@ def test_repr_unicode(self):
result = repr(df)
self.assertEqual(result.split('\n')[0].rstrip(), ex_top)
+ def test_unicode_string_with_unicode(self):
+ df = DataFrame({'A': [u"\u05d0"]})
+
+ if py3compat.PY3:
+ str(df)
+ else:
+ unicode(df)
+
+ def test_bytestring_with_unicode(self):
+ df = DataFrame({'A': [u"\u05d0"]})
+ if py3compat.PY3:
+ bytes(df)
+ else:
+ str(df)
+
def test_very_wide_info_repr(self):
df = DataFrame(np.random.randn(10, 20),
columns=[tm.rands(10) for _ in xrange(20)])
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index b94840d0dfd85..4a86db9d67196 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -851,6 +851,21 @@ def test_print_unicode_columns(self):
df=pd.DataFrame({u"\u05d0":[1,2,3],"\u05d1":[4,5,6],"c":[7,8,9]})
print(df.columns) # should not raise UnicodeDecodeError
+ def test_unicode_string_with_unicode(self):
+ idx = Index(range(1000))
+
+ if py3compat.PY3:
+ str(idx)
+ else:
+ unicode(idx)
+
+ def test_bytestring_with_unicode(self):
+ idx = Index(range(1000))
+ if py3compat.PY3:
+ bytes(idx)
+ else:
+ str(idx)
+
class TestMultiIndex(unittest.TestCase):
def setUp(self):
@@ -1680,6 +1695,24 @@ def test_repr_with_unicode_data(self):
index=pd.DataFrame(d).set_index(["a","b"]).index
self.assertFalse("\\u" in repr(index)) # we don't want unicode-escaped
+ def test_unicode_string_with_unicode(self):
+ d={"a":[u"\u05d0",2,3],"b":[4,5,6],"c":[7,8,9]}
+ idx=pd.DataFrame(d).set_index(["a","b"]).index
+
+ if py3compat.PY3:
+ str(idx)
+ else:
+ unicode(idx)
+
+ def test_bytestring_with_unicode(self):
+ d={"a":[u"\u05d0",2,3],"b":[4,5,6],"c":[7,8,9]}
+ idx=pd.DataFrame(d).set_index(["a","b"]).index
+
+ if py3compat.PY3:
+ bytes(idx)
+ else:
+ str(idx)
+
def test_get_combined_index():
from pandas.core.index import _get_combined_index
result = _get_combined_index([])
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index a906489e67b57..96de4784fdc99 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -1043,6 +1043,11 @@ def test_repr(self):
rep_str = repr(ser)
self.assert_("Name: 0" in rep_str)
+ def test_tidy_repr(self):
+ a=Series([u"\u05d0"]*1000)
+ a.name= 'title1'
+ repr(a) # should not raise exception
+
def test_repr_bool_fails(self):
s = Series([DataFrame(np.random.randn(2,2)) for i in range(5)])
@@ -1078,6 +1083,22 @@ def test_repr_should_return_str (self):
df=Series(data,index=index1)
self.assertTrue(type(df.__repr__() == str)) # both py2 / 3
+
+ def test_unicode_string_with_unicode(self):
+ df = Series([u"\u05d0"],name=u"\u05d1")
+ if py3compat.PY3:
+ str(df)
+ else:
+ unicode(df)
+
+ def test_bytestring_with_unicode(self):
+ df = Series([u"\u05d0"],name=u"\u05d1")
+ if py3compat.PY3:
+ bytes(df)
+ else:
+ str(df)
+
+
def test_timeseries_repr_object_dtype(self):
index = Index([datetime(2000, 1, 1) + timedelta(i)
for i in range(1000)], dtype=object)
| closes #2225
**Note**: Although all the tests pass with minor fixes, this PR has an above-average chance of
breaking things for people who have relied on broken behaviour thus far.
`df.tidy_repr` combines several strings to produce a result. When one component is unicode
and the other is a non-ascii bytestring, it tries to convert the latter back to a unicode string
using the 'ascii' codec and fails.
I suggest that `_get_repr` -> `to_string` should always return unicode, as implemented by this PR,
and that the `force_unicode` argument be deprecated everywhere.
The `force_unicode` argument in `to_string` conflates two things:
- which codec to use to decode the string (which can only be a hopeful guess)
- whether to return a unicode() object or a str() object.
The first is now no longer necessary since `pprint_thing` already resorts to the same hack
of using utf-8 (with errors='replace') as a fallback.
I believe making the latter optional is wrong, precisely because it brings about situations
like the test case above.
`to_string`, like all internal functions, should use unicode objects whenever feasible.
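The str/bytes/unicode dispatch this PR installs on DataFrame, Series, Index and Panel follows one pattern; here is a minimal standalone sketch of that pattern (class name and repr text are hypothetical, not pandas code):

```python
import sys

PY3 = sys.version_info[0] >= 3

class Frame(object):
    def __unicode__(self):
        # the canonical text representation: always a unicode object
        return u"Frame(\u05d0)"

    def __bytes__(self):
        # bytes(obj) in py3 / str(obj) in py2: encode the unicode repr
        return self.__unicode__().encode("utf-8")

    def __str__(self):
        # str() yields unicode on py3, a bytestring on py2
        return self.__unicode__() if PY3 else self.__bytes__()

    __repr__ = __str__

f = Frame()
assert isinstance(str(f), str)
assert isinstance(bytes(f), bytes)
```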
| https://api.github.com/repos/pandas-dev/pandas/pulls/2224 | 2012-11-11T18:25:14Z | 2012-11-27T02:46:35Z | 2012-11-27T02:46:35Z | 2014-06-13T01:01:53Z |
CLN: use com._is_sequence instead of duplicating code | diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 1ab2c3b7f8460..0cfb4004708fa 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -10,20 +10,9 @@
# "null slice"
_NS = slice(None, None)
-
-def _is_sequence(x):
- try:
- iter(x)
- assert(not isinstance(x, basestring))
- return True
- except Exception:
- return False
-
-
class IndexingError(Exception):
pass
-
class _NDFrameIndexer(object):
def __init__(self, obj):
@@ -149,7 +138,7 @@ def _align_series(self, indexer, ser):
if isinstance(indexer, tuple):
for i, idx in enumerate(indexer):
ax = self.obj.axes[i]
- if _is_sequence(idx) or isinstance(idx, slice):
+ if com._is_sequence(idx) or isinstance(idx, slice):
new_ix = ax[idx]
if ser.index.equals(new_ix):
return ser.values.copy()
@@ -174,7 +163,7 @@ def _align_frame(self, indexer, df):
idx, cols = None, None
for i, ix in enumerate(indexer):
ax = self.obj.axes[i]
- if _is_sequence(ix) or isinstance(ix, slice):
+ if com._is_sequence(ix) or isinstance(ix, slice):
if idx is None:
idx = ax[ix]
elif cols is None:
| https://api.github.com/repos/pandas-dev/pandas/pulls/2223 | 2012-11-11T12:46:01Z | 2012-11-12T16:23:10Z | 2012-11-12T16:23:10Z | 2012-11-12T16:23:20Z | |
beef-up tests of arith operators on panel and df | diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 0b36e8d39a00a..8ef2df02d6447 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -3114,6 +3114,39 @@ def test_arith_mixed(self):
'B': [2, 4, 6]})
assert_frame_equal(result, expected)
+
+ def test_arith_getitem_commute(self):
+ df = DataFrame({'A' : [1.1,3.3],'B' : [2.5,-3.9]})
+
+ self._test_op(df, operator.add)
+ self._test_op(df, operator.sub)
+ self._test_op(df, operator.mul)
+ self._test_op(df, operator.truediv)
+ self._test_op(df, operator.floordiv)
+ self._test_op(df, operator.pow)
+
+ self._test_op(df, lambda x, y: y + x)
+ self._test_op(df, lambda x, y: y - x)
+ self._test_op(df, lambda x, y: y * x)
+ self._test_op(df, lambda x, y: y / x)
+ self._test_op(df, lambda x, y: y ** x)
+
+ self._test_op(df, lambda x, y: x + y)
+ self._test_op(df, lambda x, y: x - y)
+ self._test_op(df, lambda x, y: x * y)
+ self._test_op(df, lambda x, y: x / y)
+ self._test_op(df, lambda x, y: x ** y)
+
+ @staticmethod
+ def _test_op(df, op):
+ result = op(df, 1)
+
+ if not df.columns.is_unique:
+ raise ValueError("Only unique columns supported by this test")
+
+ for col in result.columns:
+ assert_series_equal(result[col], op(df[col], 1))
+
def test_bool_flex_frame(self):
data = np.random.randn(5, 3)
other_data = np.random.randn(5, 3)
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index 202ed0ed3adb3..82c6ea65d133a 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -229,6 +229,12 @@ def test_arith(self):
self._test_op(self.panel, lambda x, y: y / x)
self._test_op(self.panel, lambda x, y: y ** x)
+ self._test_op(self.panel, lambda x, y: x + y) # panel + 1
+ self._test_op(self.panel, lambda x, y: x - y) # panel - 1
+ self._test_op(self.panel, lambda x, y: x * y) # panel * 1
+ self._test_op(self.panel, lambda x, y: x / y) # panel / 1
+ self._test_op(self.panel, lambda x, y: x ** y) # panel ** 1
+
self.assertRaises(Exception, self.panel.__add__, self.panel['ItemA'])
@staticmethod
| https://api.github.com/repos/pandas-dev/pandas/pulls/2222 | 2012-11-11T12:41:17Z | 2012-11-12T16:41:42Z | null | 2014-06-16T09:15:24Z | |
fixes for #2218, #2219 | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 31c1a09f409c3..05d3713375481 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -633,7 +633,8 @@ def keys(self):
def iteritems(self):
"""Iterator over (column, series) pairs"""
- return ((k, self[k]) for k in self.columns)
+ for i, k in enumerate(self.columns):
+ yield (k,self.take([i],axis=1)[k])
def iterrows(self):
"""
@@ -836,6 +837,10 @@ def to_dict(self, outtype='dict'):
-------
result : dict like {column -> {index -> value}}
"""
+ import warnings
+ if not self.columns.is_unique:
+ warnings.warn("DataFrame columns are not unique, some "
+ "columns will be omitted.",UserWarning)
if outtype.lower().startswith('d'):
return dict((k, v.to_dict()) for k, v in self.iteritems())
elif outtype.lower().startswith('l'):
@@ -1796,13 +1801,18 @@ def _getitem_array(self, key):
indexer = self.columns.get_indexer(key)
mask = indexer == -1
if mask.any():
- raise KeyError("No column(s) named: %s" % str(key[mask]))
+ raise KeyError("No column(s) named: %s" %
+ com.pprint_thing(key[mask]))
result = self.reindex(columns=key)
if result.columns.name is None:
result.columns.name = self.columns.name
return result
else:
mask = self.columns.isin(key)
+ for k in key:
+ if k not in self.columns:
+ raise KeyError("No column(s) named: %s" %
+ com.pprint_thing(k))
return self.take(mask.nonzero()[0], axis=1)
def _slice(self, slobj, axis=0):
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 0b36e8d39a00a..01b5d6ae46fd4 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -62,6 +62,15 @@ def test_getitem(self):
self.assert_('random' not in self.frame)
self.assertRaises(Exception, self.frame.__getitem__, 'random')
+ def test_getitem_dupe_cols(self):
+ df=DataFrame([[1,2,3],[4,5,6]],columns=['a','a','b'])
+ try:
+ df[['baf']]
+ except KeyError:
+ pass
+ else:
+ self.fail("Dataframe failed to raise KeyError")
+
def test_get(self):
b = self.frame.get('B')
assert_series_equal(b, self.frame['B'])
@@ -1136,6 +1145,11 @@ def test_get_value(self):
expected = self.frame[col][idx]
assert_almost_equal(result, expected)
+ def test_iteritems(self):
+ df=DataFrame([[1,2,3],[4,5,6]],columns=['a','a','b'])
+ for k,v in df.iteritems():
+ self.assertEqual(type(v),Series)
+
def test_lookup(self):
def alt(df, rows, cols):
result = []
@@ -7449,6 +7463,7 @@ def __nonzero__(self):
self.assert_(r0.all())
self.assert_(r1.all())
+
if __name__ == '__main__':
# unittest.main()
import nose
| 4a5b75b (the fix for #2219) triggers the issue in #2220 which makes a test fail,
afaict, that's a genuine issue.
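For illustration (in a modern pandas spelling, not the PR's own `take([i], axis=1)` code): with duplicated column names, label-based access returns a frame, so iterating columns positionally is what guarantees a Series per column:

```python
import pandas as pd

df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], columns=["a", "a", "b"])

# label-based access on a duplicated name returns a DataFrame...
assert isinstance(df["a"], pd.DataFrame)

# ...so iterating one position at a time keeps each yielded value a
# Series, which is what the iteritems fix above achieves
for i, name in enumerate(df.columns):
    assert isinstance(df.iloc[:, i], pd.Series)
```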
| https://api.github.com/repos/pandas-dev/pandas/pulls/2221 | 2012-11-11T12:30:20Z | 2012-11-13T23:27:47Z | 2012-11-13T23:27:47Z | 2014-07-07T19:30:39Z |
Small tweaks to vb_suite | diff --git a/vb_suite/run_suite.py b/vb_suite/run_suite.py
old mode 100644
new mode 100755
index febd9d1fa6cad..0c03d17607f4e
--- a/vb_suite/run_suite.py
+++ b/vb_suite/run_suite.py
@@ -1,3 +1,4 @@
+#!/usr/bin/env python
from vbench.api import BenchmarkRunner
from suite import *
diff --git a/vb_suite/suite.py b/vb_suite/suite.py
index 0a7c4eb9945f7..4d38388548984 100644
--- a/vb_suite/suite.py
+++ b/vb_suite/suite.py
@@ -56,9 +56,9 @@
DB_PATH = config.get('setup', 'db_path')
TMP_DIR = config.get('setup', 'tmp_dir')
except:
- REPO_PATH = os.path.join(HOME, 'code/pandas')
+ REPO_PATH = os.path.abspath(os.path.join(os.path.dirname(__file__),"../"))
REPO_URL = 'git@github.com:pydata/pandas.git'
- DB_PATH = os.path.join(HOME, 'code/pandas/vb_suite/benchmarks.db')
+ DB_PATH = os.path.join(REPO_PATH, 'vb_suite/benchmarks.db')
TMP_DIR = os.path.join(HOME, 'tmp/vb_pandas')
PREPARE = """
| I had a couple of false starts with vbench in the past; hopefully these trivial changes
will help make it work out of the box for more people.
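The path change can be illustrated with a minimal sketch (assuming a POSIX layout; `repo_paths` is a hypothetical helper, not part of vb_suite):

```python
import os

def repo_paths(this_file):
    # Resolve the repo root relative to the calling module, as the patched
    # suite.py fallback does, instead of hardcoding ~/code/pandas.
    repo_path = os.path.abspath(os.path.join(os.path.dirname(this_file), ".."))
    db_path = os.path.join(repo_path, "vb_suite", "benchmarks.db")
    return repo_path, db_path
```

For a checkout at `/home/user/pandas` this yields the `vb_suite/benchmarks.db` path inside that checkout, regardless of where the tree actually lives.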
| https://api.github.com/repos/pandas-dev/pandas/pulls/2216 | 2012-11-10T14:23:31Z | 2012-11-13T23:51:46Z | null | 2013-12-04T00:57:53Z |
BUG: coerce ndarray dtype to object when comparing series | diff --git a/pandas/core/series.py b/pandas/core/series.py
index 4194cbd4e4156..b6e1448514112 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -110,7 +110,10 @@ def na_op(x, y):
y = lib.list_to_object_array(y)
if isinstance(y, np.ndarray):
- result = lib.vec_compare(x, y, op)
+ if y.dtype != np.object_:
+ result = lib.vec_compare(x, y.astype(np.object_), op)
+ else:
+ result = lib.vec_compare(x, y, op)
else:
result = lib.scalar_compare(x, y, op)
else:
Fixes #1926 (partially, at least).
| https://api.github.com/repos/pandas-dev/pandas/pulls/2214 | 2012-11-09T23:34:41Z | 2012-11-14T00:02:28Z | null | 2014-07-02T07:41:20Z |
ENH: eliminate _str() in favor of pprint_thing | diff --git a/pandas/core/format.py b/pandas/core/format.py
index aae911ba807ef..58353a823e7e5 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -405,13 +405,6 @@ def _get_column_name_list(self):
names.append('' if columns.name is None else columns.name)
return names
-
-def _str(x):
- if not isinstance(x, basestring):
- return str(x)
- return x
-
-
class HTMLFormatter(object):
indent_delta = 2
@@ -436,7 +429,7 @@ def _maybe_bold_row(x):
self._maybe_bold_row = _maybe_bold_row
def write(self, s, indent=0):
- self.elements.append(' ' * indent + _str(s))
+ self.elements.append(' ' * indent + com.pprint_thing(s))
def write_th(self, s, indent=0, tags=None):
return self._write_cell(s, kind='th', indent=indent, tags=tags)
@@ -449,7 +442,7 @@ def _write_cell(self, s, kind='td', indent=0, tags=None):
start_tag = '<%s %s>' % (kind, tags)
else:
start_tag = '<%s>' % kind
- self.write('%s%s</%s>' % (start_tag, _str(s), kind), indent)
+ self.write('%s%s</%s>' % (start_tag, com.pprint_thing(s), kind), indent)
def write_tr(self, line, indent=0, indent_delta=4, header=False,
align=None, tags=None):
| https://api.github.com/repos/pandas-dev/pandas/pulls/2206 | 2012-11-09T17:59:02Z | 2012-11-09T19:12:37Z | null | 2012-11-09T19:12:37Z | |
set link target for faq site | diff --git a/doc/source/faq.rst b/doc/source/faq.rst
index 0f676ba6066de..2a3620f8ae50c 100644
--- a/doc/source/faq.rst
+++ b/doc/source/faq.rst
@@ -21,6 +21,7 @@ Frequently Asked Questions (FAQ)
import matplotlib.pyplot as plt
plt.close('all')
+.. _ref-scikits-migration:
Migrating from scikits.timeseries to pandas >= 0.8.0
----------------------------------------------------
| https://api.github.com/repos/pandas-dev/pandas/pulls/2204 | 2012-11-09T12:39:28Z | 2012-11-09T16:44:15Z | null | 2012-11-09T16:44:15Z | |
link the info on scikits.timeseries | diff --git a/doc/source/related.rst b/doc/source/related.rst
index e613d34c2a29f..33dad8115e5b1 100644
--- a/doc/source/related.rst
+++ b/doc/source/related.rst
@@ -45,3 +45,13 @@ customizable by the user (so 5-minutely data is easier to do with pandas for
example).
We are aiming to merge these libraries together in the near future.
+
+Progress:
+
+ - It has a collection of moving window statistics implemented in
+ `Bottleneck <http://pandas.pydata.org/developers.html#development-roadmap>`__
+ - `Outstanding issues <https://github.com/pydata/pandas/issues?labels=timeseries&milestone=&page=1&state=open>`__
+
+Summarising, Pandas offers superior functionality due to its combination with the :py:class:`pandas.DataFrame`.
+
+An introduction for former users of :mod:`scikits.timeseries` is provided in the :ref:`migration guide <ref-scikits-migration>`.
\ No newline at end of file
| Links to the FAQ, developer pages, and the issue list.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2203 | 2012-11-09T12:37:26Z | 2012-11-09T17:19:06Z | null | 2013-12-04T00:57:52Z |
PR: unicode, mostly | diff --git a/pandas/core/common.py b/pandas/core/common.py
index 7bbbaab49e864..46c28e8af52ac 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -1140,6 +1140,9 @@ def pprint_thing(thing, _nest_lvl=0):
from pandas.core.format import print_config
if thing is None:
result = ''
+ elif (py3compat.PY3 and hasattr(thing,'__next__')) or \
+ hasattr(thing,'next'):
+ return unicode(thing)
elif (isinstance(thing, dict) and
_nest_lvl < print_config.pprint_nest_depth):
result = _pprint_dict(thing, _nest_lvl)
diff --git a/pandas/core/format.py b/pandas/core/format.py
index aae911ba807ef..4505e6153a9a3 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -268,7 +268,8 @@ def to_string(self, force_unicode=False):
if len(frame.columns) == 0 or len(frame.index) == 0:
info_line = (u'Empty %s\nColumns: %s\nIndex: %s'
% (type(self.frame).__name__,
- frame.columns, frame.index))
+ com.pprint_thing(frame.columns),
+ com.pprint_thing(frame.index)))
text = info_line
else:
strcols = self._to_str_columns(force_unicode)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 5a000485d85a4..2c3bc9a31c9b6 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3915,11 +3915,12 @@ def _apply_standard(self, func, axis, ignore_failures=False):
try:
if hasattr(e, 'args'):
k = res_index[i]
- e.args = e.args + ('occurred at index %s' % str(k),)
+ e.args = e.args + ('occurred at index %s' %
+ com.pprint_thing(k),)
except (NameError, UnboundLocalError): # pragma: no cover
# no k defined yet
pass
- raise
+ raise e
if len(results) > 0 and _is_sequence(results[0]):
if not isinstance(results[0], Series):
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 1ba78c698a1b5..291502c406018 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -209,9 +209,10 @@ def __str__(self):
try:
return np.array_repr(self.values)
except UnicodeError:
- converted = u','.join(unicode(x) for x in self.values)
- return u'%s([%s], dtype=''%s'')' % (type(self).__name__, converted,
+ converted = u','.join(com.pprint_thing(x) for x in self.values)
+ result = u'%s([%s], dtype=''%s'')' % (type(self).__name__, converted,
str(self.values.dtype))
+ return com.console_encode(result)
def _mpl_repr(self):
# how to represent ourselves to matplotlib
@@ -1320,11 +1321,15 @@ def __repr__(self):
self[-50:].values])
else:
values = self.values
- summary = np.array2string(values, max_line_width=70)
+
+ summary = com.pprint_thing(values)
np.set_printoptions(threshold=options['threshold'])
- return output % summary
+ if py3compat.PY3:
+ return output % summary
+ else:
+ return com.console_encode(output % summary)
def __len__(self):
return len(self.labels[0])
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 10a85c5592514..cd1ca8838d65d 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -49,9 +49,10 @@ def set_ref_items(self, ref_items, maybe_rename=True):
self.ref_items = ref_items
def __repr__(self):
- shape = ' x '.join([str(s) for s in self.shape])
+ shape = ' x '.join([com.pprint_thing(s) for s in self.shape])
name = type(self).__name__
- return '%s: %s, %s, dtype %s' % (name, self.items, shape, self.dtype)
+ result = '%s: %s, %s, dtype %s' % (name, self.items, shape, self.dtype)
+ return com.console_encode(result) # repr must return byte-string
def __contains__(self, item):
return item in self.items
@@ -935,7 +936,7 @@ def _find_block(self, item):
def _check_have(self, item):
if item not in self.items:
- raise KeyError('no item named %s' % str(item))
+ raise KeyError('no item named %s' % com.pprint_thing(item))
def reindex_axis(self, new_axis, method=None, axis=0, copy=True):
new_axis = _ensure_index(new_axis)
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 5c5fd1902c4cc..57799c6455fee 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -1295,6 +1295,21 @@ def test_iget_value(self):
expected = self.frame.get_value(row, col)
assert_almost_equal(result, expected)
+ def test_nested_exception(self):
+ # Ignore the strange way of triggering the problem
+ # (which may get fixed), it's just a way to trigger
+ # the issue or reraising an outer exception without
+ # a named argument
+ df=DataFrame({"a":[1,2,3],"b":[4,5,6],"c":[7,8,9]}).set_index(["a","b"])
+ l=list(df.index)
+ l[0]=["a","b"]
+ df.index=l
+
+ try:
+ print df
+ except Exception,e:
+ self.assertNotEqual(type(e),UnboundLocalError)
+
_seriesd = tm.getSeriesData()
_tsd = tm.getTimeSeriesData()
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index c1d0894f9bfef..b94840d0dfd85 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -847,6 +847,10 @@ def test_int_name_format(self):
repr(s)
repr(df)
+ def test_print_unicode_columns(self):
+ df=pd.DataFrame({u"\u05d0":[1,2,3],"\u05d1":[4,5,6],"c":[7,8,9]})
+ print(df.columns) # should not raise UnicodeDecodeError
+
class TestMultiIndex(unittest.TestCase):
def setUp(self):
@@ -1671,6 +1675,10 @@ def test_tolist(self):
exp = list(self.index.values)
self.assertEqual(result, exp)
+ def test_repr_with_unicode_data(self):
+ d={"a":[u"\u05d0",2,3],"b":[4,5,6],"c":[7,8,9]}
+ index=pd.DataFrame(d).set_index(["a","b"]).index
+ self.assertFalse("\\u" in repr(index)) # we don't want unicode-escaped
def test_get_combined_index():
from pandas.core.index import _get_combined_index
diff --git a/pandas/tests/test_internals.py b/pandas/tests/test_internals.py
index 57ccfff23e5de..e9c0b2ae980d6 100644
--- a/pandas/tests/test_internals.py
+++ b/pandas/tests/test_internals.py
@@ -408,6 +408,13 @@ def test_get_numeric_data(self):
self.assertEqual(rs.ix[0, 'bool'], not df.ix[0, 'bool'])
+ def test_missing_unicode_key(self):
+ df=DataFrame({"a":[1]})
+ try:
+ df.ix[:,u"\u05d0"] # should not raise UnicodeEncodeError
+ except KeyError:
+ pass # this is the expected exception
+
if __name__ == '__main__':
# unittest.main()
import nose
| f2db4c1 fixes the `UnboundLocalError` mentioned in #2200.
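The fallback chain in `pprint_thing` can be sketched in modern Python 3 terms (hypothetical `pprint_fallback`; the original is Python 2 and retries `unicode()` with utf-8):

```python
def pprint_fallback(thing):
    # Try the normal text conversion first; for raw byte strings assume
    # utf-8 and substitute U+FFFD for undecodable bytes instead of raising,
    # mirroring the "utf-8 with replacing errors" comment in common.py.
    if isinstance(thing, bytes):
        return thing.decode("utf-8", errors="replace")
    return str(thing)
```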
| https://api.github.com/repos/pandas-dev/pandas/pulls/2201 | 2012-11-08T23:41:48Z | 2012-11-09T17:29:02Z | 2012-11-09T17:29:02Z | 2014-06-16T22:34:21Z |
BUG: join_non_unique doesn't sort properly for DatetimeIndex #2196 | diff --git a/pandas/core/common.py b/pandas/core/common.py
index 7bbbaab49e864..60a0c30a49d78 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -852,6 +852,13 @@ def is_integer_dtype(arr_or_dtype):
(issubclass(tipo, np.datetime64) or
issubclass(tipo, np.timedelta64)))
+def _is_int_or_datetime_dtype(arr_or_dtype):
+ # also timedelta64
+ if isinstance(arr_or_dtype, np.dtype):
+ tipo = arr_or_dtype.type
+ else:
+ tipo = arr_or_dtype.dtype.type
+ return issubclass(tipo, np.integer)
def is_datetime64_dtype(arr_or_dtype):
if isinstance(arr_or_dtype, np.dtype):
diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py
index d92ed1cb01c42..62529201b287c 100644
--- a/pandas/tools/merge.py
+++ b/pandas/tools/merge.py
@@ -555,7 +555,7 @@ def _right_outer_join(x, y, max_groups):
def _factorize_keys(lk, rk, sort=True):
- if com.is_integer_dtype(lk) and com.is_integer_dtype(rk):
+ if com._is_int_or_datetime_dtype(lk) and com._is_int_or_datetime_dtype(rk):
klass = lib.Int64Factorizer
lk = com._ensure_int64(lk)
rk = com._ensure_int64(rk)
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index eabacc2222ebf..daaa86f681ee1 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -1645,6 +1645,14 @@ def _check_join(left, right, how='inner'):
_check_join(index[:15], obj_index[5:], how='right')
_check_join(index[:15], obj_index[5:], how='left')
+ def test_join_nonunique(self):
+ idx1 = to_datetime(['2012-11-06 16:00:11.477563',
+ '2012-11-06 16:00:11.477563'])
+ idx2 = to_datetime(['2012-11-06 15:11:09.006507',
+ '2012-11-06 15:11:09.006507'])
+ rs = idx1.join(idx2, how='outer')
+ self.assert_(rs.is_monotonic)
+
def test_unpickle_daterange(self):
pth, _ = os.path.split(os.path.abspath(__file__))
filepath = os.path.join(pth, 'data', 'daterange_073.pickle')
Uses the int64 factorizer.
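The trick can be sketched with plain numpy (hypothetical `factorize_int64`; pandas uses a Cython `Int64Factorizer` rather than `np.unique`): datetime64 values are int64 ticks under the hood, so a single integer code path can label both dtypes once the view is taken.

```python
import numpy as np

def factorize_int64(values):
    # View datetime64/timedelta64 data as raw int64 so the same integer
    # labelling routine handles it (mirrors _is_int_or_datetime_dtype).
    if values.dtype.kind in ("M", "m"):
        values = values.view("i8")
    uniques, labels = np.unique(values, return_inverse=True)
    return labels, uniques
```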
| https://api.github.com/repos/pandas-dev/pandas/pulls/2197 | 2012-11-08T15:48:53Z | 2012-11-09T16:43:07Z | null | 2012-11-09T16:43:07Z |
Updating help text for plot_series | diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index 00724a2dc35a0..98cf676c60a4d 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -1301,7 +1301,9 @@ def plot_series(series, label=None, kind='line', use_index=True, rot=None,
Parameters
----------
label : label argument to provide to plot
- kind : {'line', 'bar'}
+ kind : {'line', 'bar', 'barh'}
+ bar : vertical bar plot
+ barh : horizontal bar plot
rot : int, default 30
Rotation for tick labels
use_index : boolean, default True
@@ -1312,9 +1314,6 @@ def plot_series(series, label=None, kind='line', use_index=True, rot=None,
matplotlib line style to use
ax : matplotlib axis object
If not passed, uses gca()
- kind : {'line', 'bar', 'barh'}
- bar : vertical bar plot
- barh : horizontal bar plot
logy : boolean, default False
For line plots, use log scaling on y axis
xticks : sequence
The description for the 'kind' parameter was given twice.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2195 | 2012-11-08T04:30:52Z | 2012-11-08T14:23:06Z | 2012-11-08T14:23:06Z | 2012-11-09T23:53:16Z |
corrected version of ts_hourly | diff --git a/pandas/tseries/pivot.py b/pandas/tseries/pivot.py
new file mode 100644
index 0000000000000..632e6bdab4324
--- /dev/null
+++ b/pandas/tseries/pivot.py
@@ -0,0 +1,206 @@
+import numpy as np
+
+from pandas.core.frame import DataFrame
+import pandas.core.nanops as nanops
+from pandas.tseries.util import isleapyear
+from pandas.tseries.index import date_range
+
+def pivot_annual_h(series, freq=None, dt_index=False):
+ """
+ Group a series by years, taking leap years into account.
+
+ The output has as many rows as distinct years in the original series,
+ and as many columns as the length of a leap year in the units corresponding
+ to the original frequency (366 for daily frequency, 366*24 for hourly...).
+ The fist column of the output corresponds to Jan. 1st, 00:00:00,
+ while the last column corresponds to Dec, 31st, 23:59:59.
+ Entries corresponding to Feb. 29th are masked for non-leap years.
+
+ For example, if the initial series has a daily frequency, the 59th column
+ of the output always corresponds to Feb. 28th, the 61st column to Mar. 1st,
+ and the 60th column is masked for non-leap years.
+ With a hourly initial frequency, the (59*24)th column of the output always
+ correspond to Feb. 28th 23:00, the (61*24)th column to Mar. 1st, 00:00, and
+ the 24 columns between (59*24) and (61*24) are masked.
+
+ If the original frequency is less than daily, the output is equivalent to
+ ``series.convert('A', func=None)``.
+
+ Parameters
+ ----------
+ series : TimeSeries
+ freq : string or None, default None
+
+ Returns
+ -------
+ annual : DataFrame
+
+
+ """
+ #TODO: test like original pandas and the position of first and last value in arrays
+ #TODO: reduce number of hardcoded values scattered all around.
+ index = series.index
+ year = index.year
+ years = nanops.unique1d(year)
+
+ if freq is not None:
+ freq = freq.upper()
+ else:
+ freq = series.index.freq
+
+ if freq == 'H':
+
+ ##basics
+
+ #integer value of sum of all hours in a leap hear
+ total_hoy_leap = (year_length(series.index.freqstr))
+
+ #list of all hours in a leap year
+ hoy_leap_list = range(1, (total_hoy_leap + 1 ))
+
+
+ #create a array template
+ values = np.empty((total_hoy_leap, len(years)), dtype=series.dtype)
+ values.fill(np.nan)
+ #create a df to receive the resulting data
+ dummy_df = DataFrame(values, index=hoy_leap_list,
+ columns=years)
+
+ ##prepare the index for inserting the values into the result dataframe
+ #get offset for leap hours
+ #see:
+ #http://stackoverflow.com/questions/2004364/increment-numpy-array-with-repeated-indices
+ #1994-02-28 23:00:00 -> index 1415
+ index_nonleap = np.array(range(0, 8760))
+ index_leapshift = np.array(range(1416,8760 ))
+
+ index_incl_leap = index_nonleap.copy()
+ #shift index by 24 (hours) for leap
+ index_incl_leap[index_leapshift]+=24
+
+ # select data for the respective year
+ for year in years:
+
+ #select the data for the respective year
+ series_year = series[ series.index.year == year]
+ #create a array with the values for the respecive year
+ values = (series_year).values
+
+ if isleapyear(year):
+ dummy_df[year] = values
+ else:
+ #dummy array to be filled with non-leap values
+ dummy_array = np.empty((total_hoy_leap), dtype=series.dtype)
+ dummy_array.fill(np.nan)
+
+ #fill dummy array with values leaving the leap day
+ dummy_array.put(index_incl_leap, values)
+
+ dummy_df[year] = dummy_array
+
+ res_df = dummy_df
+
+ #assign a pseudo datetime index , CAUTION: the year is definitely wrong!
+ if dt_index:
+ rng = default_rng(freq='H', leap=True)
+ res_df = DataFrame(res_df.values, index=rng,
+ columns=res_df.columns)
+
+ return res_df
+
+#TDOO: use pivot_annual for D & M and minute in the same fashion
+ if freq == 'D':
+ raise NotImplementedError(freq), "use pandas.tseries.util.pivot_annual"
+
+ if freq == 'M':
+ raise NotImplementedError(freq), "use pandas.tseries.util.pivot_annual"
+
+ else:
+ raise NotImplementedError(freq)
+
+
+ return res_df
+
+
+### timeseries pivoting helper
+
+def last_col2front(df, col_no=1):
+ """shifts the last column of a data frame to the front
+
+ increase col_no to shift more cols
+ """
+ cols = cols = df.columns.tolist()
+ #increase index value to 2+ if more columns are to be shifted
+ cols = cols[-col_no:] + cols[:-col_no]
+ df = df[cols]
+
+ return df
+
+
+def extended_info(df, time_cols=True, aggreg=True, aggreg_func=None,
+ datetime_index=False):
+ """add extended information to a timeseries pivot
+ """
+
+ df_extended = df.copy()
+ #perform the following only on the data columns
+ cols = df_extended.columns
+ #TODO: add standard aggregation
+ #TODO: make function be set by argument
+ #TODO: is there no a SM describe function?
+ #TODO: Maybe use http://pandas.pydata.org/pandas-docs/dev/basics.html#summarizing-data-describe
+ if aggreg:
+
+ df_extended['mean'] = df_extended[cols].mean(1)
+ df_extended['sum'] = df_extended[cols].sum(1)
+ df_extended['min'] = df_extended[cols].min(1)
+ df_extended['max'] = df_extended[cols].max(1)
+ df_extended['std'] = df_extended[cols].std(1)
+
+ #add some metadata
+ #TODO: add function to make index a datetime with the argument above using the rng below
+ #TODO: convert the range to lower frequencies and reuse the function.
+ rng = default_rng()
+ df_extended['doy'] = rng.dayofyear
+# df_extended = last_col2front(df_extended)
+ df_extended['month'] = rng.month
+# df_extended = last_col2front(df_extended)
+ df_extended['day'] = rng.day
+# df_extended = last_col2front(df_extended)
+ df_extended['hour'] = rng.hour + 1
+ df_extended = last_col2front(df_extended, col_no=4)
+
+ return df_extended
+
+###Timeseries convenience / helper functions
+
+
+def year_length(freq, leap=True):
+ """helper function for year length at different frequencies.
+ to be expanded
+ """
+
+ daysofyear_leap = 366
+ daysofyear_nonleap = 365
+
+ if freq == 'H':
+ if leap:
+ length = 24 * daysofyear_leap
+ else:
+ length = 24 * daysofyear_nonleap
+
+ return length
+
+def default_rng(freq='H', leap=True):
+ """create default ranges
+ """
+
+ if leap:
+ total_hoy_leap = (year_length(freq='H'))
+ rng = date_range('1/1/2012', periods=total_hoy_leap, freq='H')
+
+ else:
+ total_hoy_nonleap = (year_length(freq='H'))
+ rng = date_range('1/1/2011', periods=total_hoy_nonleap, freq='H')
+
+ return rng
diff --git a/pandas/tseries/tests/test_util.py b/pandas/tseries/tests/test_util.py
index 1b634d2e4bf24..2548714fe76ec 100644
--- a/pandas/tseries/tests/test_util.py
+++ b/pandas/tseries/tests/test_util.py
@@ -10,6 +10,75 @@
from pandas.tseries.tools import normalize_date
from pandas.tseries.util import pivot_annual, isleapyear
+from pandas.tseries import pivot
+
+
+class TestPivotAnnualHourly(unittest.TestCase):
+ """
+ New pandas of scikits.timeseries pivot_annual for hourly with a new shape
+ """
+ def test_hourly(self):
+ rng_hourly = date_range('1/1/1994', periods=(18* 8760 + 4*24), freq='H')
+ data_hourly = np.random.randint(100, high=350, size=rng_hourly.size)
+ data_hourly = data_hourly.astype('float64')
+ ts_hourly = Series(data_hourly, index=rng_hourly)
+
+ annual = pivot.pivot_annual_h(ts_hourly, dt_index=True)
+
+ ### general
+ ##test first column: if first value and data are the same as first value of timeseries
+ #date
+ def get_mdh(DatetimeIndex, index):
+ #(m, d, h)
+ mdh_tuple = (DatetimeIndex.month[index], DatetimeIndex.day[index],
+ DatetimeIndex.hour[index])
+ return mdh_tuple
+# ts_hourly.index.month[1], ts_hourly.index.month[1], ts_hourly.index.month[1]
+
+ assert get_mdh(ts_hourly.index, 1) == get_mdh(annual.index, 1)
+ #are the last dates of ts identical with the dates last row in the last column?
+ assert get_mdh(ts_hourly.index, -1) == get_mdh(annual.index, (annual.index.size -1))
+ #first values of the ts identical with the first col?
+ assert ts_hourly[0] == annual.ix[0].values[0]
+ #last values of the ts identical with the last col and last row of the df?
+ assert ts_hourly[-1] == annual.ix[-1].values[-1]
+ #### index
+ ##test if index has the right length
+ assert annual.index.size == 8784
+ ##test last column: if first value and data are the same as first value of timeseries
+ ### leap
+ ##test leap offset
+ #leap year: 1996 - are the values of the ts and the
+ ser96_leap = ts_hourly[(ts_hourly.index.year == 1996) &
+ (ts_hourly.index.month == 2) &
+ (ts_hourly.index.day == 29)
+ ]
+
+ df96 = annual[1996]
+ df96_leap = df96[(df96.index.month == 2) & (df96.index.day == 29)]
+ np.testing.assert_equal(ser96_leap.values, df96_leap.values)
+ #non-leap year: 1994 - are all values NaN for day 29.02?
+ nan_arr = np.empty(24)
+ nan_arr.fill(np.nan)
+ df94 = annual[1994]
+ df94_noleap = df94[(df94.index.month == 2) & (df94.index.day == 29)]
+ np.testing.assert_equal(df94_noleap.values, nan_arr)
+ ### extended functionaliy
+ ext = pivot.extended_info(annual)
+ ## descriptive statistics
+ #mean
+ np.testing.assert_equal(annual.mean(1).values, ext['mean'].values)
+ np.testing.assert_equal(annual.sum(1).values, ext['sum'].values)
+ np.testing.assert_equal(annual.min(1).values, ext['min'].values)
+ np.testing.assert_equal(annual.max(1).values, ext['max'].values)
+ np.testing.assert_equal(annual.std(1).values, ext['std'].values)
+
+ ## additional time columns for easier filtering
+ np.testing.assert_equal(ext['doy'].values, annual.index.dayofyear)
+ np.testing.assert_equal(ext['day'].values, annual.index.day)
+ #the hour is incremented by 1
+ np.testing.assert_equal(ext['hour'].values, (annual.index.hour +1))
+
class TestPivotAnnual(unittest.TestCase):
"""
@@ -36,6 +105,7 @@ def test_daily(self):
leaps.index = leaps.index.year
tm.assert_series_equal(annual[day].dropna(), leaps)
+
def test_weekly(self):
pass
diff --git a/pandas/tseries/util.py b/pandas/tseries/util.py
index 0702bc40389c9..853784f6ece4e 100644
--- a/pandas/tseries/util.py
+++ b/pandas/tseries/util.py
@@ -2,6 +2,7 @@
from pandas.core.frame import DataFrame
import pandas.core.nanops as nanops
+from pandas.tseries.util import isleapyear
def pivot_annual(series, freq=None):
Fixes a small glitch in the function; the changes are mainly fixes in the tests.
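The hardcoded 1416 offset in `pivot_annual_h` follows from the calendar: Jan 1 00:00 through Feb 28 23:00 spans (31 + 28) * 24 = 1416 hours, so in a non-leap year every hour from index 1416 onward must shift forward by 24 slots to line up with the leap-year grid. A minimal sketch of that index arithmetic (hypothetical helper, not part of the patch):

```python
def leap_aligned_index(n_hours_nonleap=8760):
    # Positions 0..1415 (up to Feb 28 23:00) are unchanged; everything at
    # or after Mar 1 00:00 moves 24 slots right to skip the Feb 29 block.
    hours_before_mar1 = (31 + 28) * 24  # 1416
    return [h if h < hours_before_mar1 else h + 24
            for h in range(n_hours_nonleap)]
```

The last non-leap hour then lands at slot 8783, the final hour of a leap year's 8784-hour grid.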
| https://api.github.com/repos/pandas-dev/pandas/pulls/2183 | 2012-11-05T23:45:15Z | 2012-12-03T19:02:18Z | null | 2014-06-27T22:56:10Z |
CLN: Some minor cleanup in io/tests/test_parsers.py | diff --git a/pandas/io/tests/test_parsers.py b/pandas/io/tests/test_parsers.py
index 041fcb9808df1..23c6eec259384 100644
--- a/pandas/io/tests/test_parsers.py
+++ b/pandas/io/tests/test_parsers.py
@@ -859,22 +859,17 @@ def test_parse_cols_list(self):
for s in suffix:
pth = os.path.join(self.dirpath, 'test.xls%s' % s)
- xlsx = ExcelFile(pth)
- df = xlsx.parse('Sheet1', index_col=0, parse_dates=True,
+ xls = ExcelFile(pth)
+ df = xls.parse('Sheet1', index_col=0, parse_dates=True,
parse_cols=[0, 2, 3])
df2 = read_csv(self.csv1, index_col=0, parse_dates=True)
df2 = df2.reindex(columns=['B', 'C'])
- df3 = xlsx.parse('Sheet2', skiprows=[1], index_col=0,
+ df3 = xls.parse('Sheet2', skiprows=[1], index_col=0,
parse_dates=True,
parse_cols=[0, 2, 3])
assert_frame_equal(df, df2)
assert_frame_equal(df3, df2)
- def test_read_table_unicode(self):
- fin = StringIO('\u0141aski, Jan;1')
- df1 = read_table(fin, sep=";", encoding="utf-8", header=None)
- self.assert_(isinstance(df1['X0'].values[0], unicode))
-
def test_parse_cols_str(self):
_skip_if_no_openpyxl()
_skip_if_no_xlrd()
@@ -917,6 +912,11 @@ def test_parse_cols_str(self):
assert_frame_equal(df, df2)
assert_frame_equal(df3, df2)
+ def test_read_table_unicode(self):
+ fin = StringIO('\u0141aski, Jan;1')
+ df1 = read_table(fin, sep=";", encoding="utf-8", header=None)
+ self.assert_(isinstance(df1['X0'].values[0], unicode))
+
def test_read_table_wrong_num_columns(self):
data = """A,B,C,D,E,F
1,2,3,4,5
| https://api.github.com/repos/pandas-dev/pandas/pulls/2181 | 2012-11-05T13:58:18Z | 2012-11-05T15:13:12Z | null | 2012-11-05T15:13:12Z | |
CLN: Fixed a typo in the documentation. | diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index fa133d45e8863..a77e2c928abfa 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -91,7 +91,7 @@ the data structures:
There is an analogous ``set_value`` method which has the additional capability
of enlarging an object. This method *always* returns a reference to the object
-it modified, which in the fast of enlargement, will be a **new object**:
+it modified, which in the case of enlargement, will be a **new object**:
.. ipython:: python
| https://api.github.com/repos/pandas-dev/pandas/pulls/2180 | 2012-11-05T13:21:00Z | 2012-11-05T15:14:02Z | null | 2012-11-05T15:14:02Z | |
CLN: Typo fix in documentation | diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index fa133d45e8863..a77e2c928abfa 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -91,7 +91,7 @@ the data structures:
There is an analogous ``set_value`` method which has the additional capability
of enlarging an object. This method *always* returns a reference to the object
-it modified, which in the fast of enlargement, will be a **new object**:
+it modified, which in the case of enlargement, will be a **new object**:
.. ipython:: python
| https://api.github.com/repos/pandas-dev/pandas/pulls/2178 | 2012-11-05T12:52:22Z | 2012-11-05T12:53:08Z | null | 2012-11-05T12:57:02Z | |
CLN: fix indent in bench_merge_sqlite | diff --git a/bench/bench_merge_sqlite.py b/bench/bench_merge_sqlite.py
index 14a5288e44f20..a05a7c896b3d2 100644
--- a/bench/bench_merge_sqlite.py
+++ b/bench/bench_merge_sqlite.py
@@ -74,8 +74,8 @@
conn.commit()
sql_results[sort][join_method] = elapsed
-sql_results.columns = ['sqlite3'] # ['dont_sort', 'sort']
-sql_results.index = ['inner', 'outer', 'left']
+ sql_results.columns = ['sqlite3'] # ['dont_sort', 'sort']
+ sql_results.index = ['inner', 'outer', 'left']
sql = """select *
from left
The incorrect indentation was breaking Rope.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2177 | 2012-11-05T12:27:10Z | 2012-11-05T15:15:00Z | null | 2014-06-22T13:36:18Z |
Misc fixes: comment cleanup and remove deprecated arg | diff --git a/pandas/core/common.py b/pandas/core/common.py
index a7a53b24ae277..7bbbaab49e864 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -1147,11 +1147,12 @@ def pprint_thing(thing, _nest_lvl=0):
result = _pprint_seq(thing, _nest_lvl)
else:
# when used internally in the package, everything
- # passed in should be a unicode object or have a unicode
- # __str__. However as an aid to transition, we also accept
- # utf8 encoded strings, if that's not it, we have no way
- # to know, and the user should deal with it himself.
- # so we resort to utf-8 with replacing errors
+ # should be unicode text. However as an aid to transition
+ # we also accept utf8 encoded strings,
+ # if that's not it either, we have no way of knowing,
+ # and the user should deal with it himself.
+ # we resort to utf-8 with replacing errors, rather then throwing
+ # an exception.
try:
result = unicode(thing) # we should try this first
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c05880a201a7e..559d02974fa2a 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -834,7 +834,7 @@ def to_dict(self, outtype='dict'):
@classmethod
def from_records(cls, data, index=None, exclude=None, columns=None,
- names=None, coerce_float=False):
+ coerce_float=False):
"""
Convert structured or record ndarray to DataFrame
@@ -866,13 +866,6 @@ def from_records(cls, data, index=None, exclude=None, columns=None,
raise ValueError('Non-unique columns not yet supported in '
'from_records')
- if names is not None: # pragma: no cover
- columns = names
- warnings.warn("'names' parameter to DataFrame.from_records is "
- "being renamed to 'columns', 'names' will be "
- "removed in 0.8.0",
- FutureWarning)
-
if isinstance(data, (np.ndarray, DataFrame, dict)):
columns, sdict = _rec_to_dict(data)
else:
diff --git a/tox.ini b/tox.ini
index f4e03e1677344..7d09b3aa887e1 100644
--- a/tox.ini
+++ b/tox.ini
@@ -25,11 +25,11 @@ commands =
/bin/rm -rf {toxinidir}/build
# quietly rollback the install.
- # Note this line will only be reached if the tests
+ # Note this line will only be reached if the
# previous lines succeed (in particular, the tests),
# but an uninstall is really only required when
- # files are removed from source tree, in which case,
- # stale versions of files will will remain in the venv,
+ # files are removed from the source tree, in which case,
+ # stale versions of files will will remain in the venv
# until the next time uninstall is run.
#
# tox should provide a preinstall-commands hook.
diff --git a/tox_prll.ini b/tox_prll.ini
index a9e54bacd281e..85856db064ca3 100644
--- a/tox_prll.ini
+++ b/tox_prll.ini
@@ -26,11 +26,11 @@ commands =
/bin/rm -rf {toxinidir}/build
# quietly rollback the install.
- # Note this line will only be reached if the tests
+ # Note this line will only be reached if the
# previous lines succeed (in particular, the tests),
# but an uninstall is really only required when
- # files are removed from source tree, in which case,
- # stale versions of files will will remain in the venv,
+ # files are removed from the source tree, in which case,
+ # stale versions of files will will remain in the venv
# until the next time uninstall is run.
#
# tox should provide a preinstall-commands hook.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2176 | 2012-11-05T08:21:54Z | 2012-11-05T15:16:01Z | null | 2012-11-05T15:16:01Z | |
BUG: Series.diff for integer dtypes #2087 | diff --git a/pandas/core/common.py b/pandas/core/common.py
index e7829ab4b41d5..bcb05c9f1bd0f 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -373,6 +373,35 @@ def mask_out_axis(arr, mask, axis, fill_value=np.nan):
arr[tuple(indexer)] = fill_value
+def diff(arr, n, indexer, axis=0):
+ out_arr = arr - arr.take(indexer, axis=axis)
+ out_arr = _maybe_upcast(out_arr)
+
+ if axis == 0:
+ if n > 0:
+ out_arr[:n] = np.nan
+ elif n < 0:
+ out_arr[n:] = np.nan
+ else:
+ out_arr[:] = np.nan
+ elif axis == 1:
+ if n > 0:
+ out_arr[:, :n] = np.nan
+ elif n < 0:
+ out_arr[:, n:] = np.nan
+ else:
+ out_arr[:, :] = np.nan
+ elif axis == 2:
+ if n > 0:
+ out_arr[:, :, :n] = np.nan
+ elif n < 0:
+ out_arr[:, :, n:] = np.nan
+ else:
+ out_arr[:, :, :] = np.nan
+ else:
+ raise NotImplementedError()
+ return out_arr
+
def take_fast(arr, indexer, mask, needs_masking, axis=0, out=None,
fill_value=np.nan):
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 9a035429bbd0e..cac08bec15c0f 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -29,7 +29,8 @@
from pandas.core.generic import NDFrame
from pandas.core.index import Index, MultiIndex, _ensure_index
from pandas.core.indexing import _NDFrameIndexer, _maybe_droplevels
-from pandas.core.internals import BlockManager, make_block, form_blocks
+from pandas.core.internals import (BlockManager, make_block, form_blocks,
+ IntBlock)
from pandas.core.series import Series, _radd_compat
from pandas.compat.scipy import scoreatpercentile as _quantile
from pandas.util import py3compat
@@ -3679,7 +3680,10 @@ def diff(self, periods=1):
-------
diffed : DataFrame
"""
- return self - self.shift(periods)
+ indexer = com._shift_indexer(len(self), periods)
+ new_blocks = [b.diff(periods, indexer) for b in self._data.blocks]
+ new_data = BlockManager(new_blocks, [self.columns, self.index])
+ return self._constructor(new_data)
def shift(self, periods=1, freq=None, **kwds):
"""
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 08de0de51aeeb..ead9e47a3be82 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -294,6 +294,13 @@ def take(self, indexer, axis=1, fill_value=np.nan):
def get_values(self, dtype):
return self.values
+ def diff(self, n, indexer=None):
+ if indexer is None:
+ indexer = com._shift_indexer(self.shape[1], n)
+ new_values = com.diff(self.values, n, indexer, axis=1)
+ return make_block(new_values, self.items, self.ref_items)
+
+
def _mask_missing(array, missing_values):
if not isinstance(missing_values, (list, np.ndarray)):
missing_values = [missing_values]
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 1b105d48f4dbe..6ca8a01a18434 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1511,7 +1511,9 @@ def diff(self, periods=1):
-------
diffed : Series
"""
- return (self - self.shift(periods))
+ indexer = com._shift_indexer(len(self), periods)
+ val = com.diff(self.values, periods, indexer)
+ return Series(val, self.index, name=self.name)
def autocorr(self):
"""
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index f069a65a1ab12..3d6e49b48c7db 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -5215,6 +5215,14 @@ def test_diff(self):
assert_series_equal(the_diff['A'],
self.tsframe['A'] - self.tsframe['A'].shift(1))
+ # int dtype
+ a = 10000000000000000
+ b = a + 1
+ s = Series([a, b])
+
+ rs = DataFrame({'s': s}).diff()
+ self.assertEqual(rs.s[1], 1)
+
def test_diff_mixed_dtype(self):
df = DataFrame(np.random.randn(5, 3))
df['A'] = np.array([1, 2, 3, 4, 5], dtype=object)
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 1f1b3285fb22d..03bfccba83e72 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -2998,6 +2998,14 @@ def test_diff(self):
# Just run the function
self.ts.diff()
+ # int dtype
+ a = 10000000000000000
+ b = a + 1
+ s = Series([a, b])
+
+ rs = s.diff()
+ self.assertEqual(rs[1], 1)
+
def test_pct_change(self):
rs = self.ts.pct_change(fill_method=None)
assert_series_equal(rs, self.ts / self.ts.shift(1) - 1)
| https://api.github.com/repos/pandas-dev/pandas/pulls/2174 | 2012-11-04T21:26:39Z | 2012-11-04T23:03:21Z | null | 2014-06-17T00:19:46Z | |
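The point of the patch above — do the subtraction in int64 first, upcast to float only afterwards — can be sketched outside pandas with plain numpy. `shift_indexer` below is a hypothetical stand-in for pandas' internal `com._shift_indexer`, not the real implementation:

```python
import numpy as np

def shift_indexer(n, periods):
    # indices pointing each slot at the row `periods` back; out-of-range
    # slots are clamped and later overwritten with NaN
    if periods > 0:
        return np.concatenate([np.zeros(periods, dtype=np.intp),
                               np.arange(n - periods)])
    return np.concatenate([np.arange(-periods, n),
                           np.full(max(-periods, 0), n - 1, dtype=np.intp)])

def int_diff(arr, periods=1):
    idx = shift_indexer(len(arr), periods)
    out = arr - arr.take(idx)        # exact int64 subtraction first
    out = out.astype(np.float64)     # upcast afterwards so NaN fits
    if periods > 0:
        out[:periods] = np.nan
    elif periods < 0:
        out[periods:] = np.nan
    else:
        out[:] = np.nan
    return out

a = 10_000_000_000_000_000           # > 2**53; a+1 is not exact in float64
vals = np.array([a, a + 1], dtype=np.int64)
int_diff(vals)                       # [nan, 1.0] -- exact
# whereas float(a + 1) - float(a) == 0.0, which is why the old
# `self - self.shift(periods)` lost the difference for large ints
```

The ordering matters only because of precision: once both values round to the same float64, the `1` is unrecoverable.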
BUG: stop overridding matplotlib unit registrations for datetime.datetim... | diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index 97cae5e351fe2..bae624c2f83fa 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -16,12 +16,13 @@
from pandas.tseries.frequencies import get_period_alias, get_base_alias
from pandas.tseries.offsets import DateOffset
+"""
try: # mpl optional
import pandas.tseries.converter as conv
conv.register()
except ImportError:
pass
-
+"""
def _get_standard_kind(kind):
return {'density': 'kde'}.get(kind, kind)
@@ -742,6 +743,12 @@ def _get_xticks(self, convert_period=False):
return x
+ def _is_datetype(self):
+ index = self.data.index
+ return (isinstance(index, (PeriodIndex, DatetimeIndex)) or
+ index.inferred_type in ('datetime', 'date', 'datetime64',
+ 'time'))
+
def _get_plot_function(self):
if self.logy:
plotf = self.plt.Axes.semilogy
@@ -906,7 +913,22 @@ def _maybe_add_color(self, colors, kwds, style, i):
if style is None or re.match('[a-z]+', style) is None:
kwds['color'] = colors[i % len(colors)]
+ def _make_formatter_locator(self):
+ import pandas.tseries.converter as conv
+ index = self.data.index
+ if (isinstance(index, DatetimeIndex) or
+ index.inferred_type in ('datetime', 'datetime64', 'date')):
+ tz = getattr(index, 'tz', None)
+ loc = conv.PandasAutoDateLocator(tz=tz)
+ fmt = conv.PandasAutoDateFormatter(loc, tz=tz)
+ return fmt, loc
+ if index.inferred_type == 'time':
+ loc = conv.AutoLocator()
+ fmt = conv.TimeFormatter(loc)
+ return fmt, loc
+
def _make_plot(self):
+ import pandas.tseries.plotting as tsplot
# this is slightly deceptive
if self.use_index and self._use_dynamic_x():
data = self._maybe_convert_index(self.data)
@@ -946,6 +968,12 @@ def _make_plot(self):
labels.append(leg_label)
ax.grid(self.grid)
+ if self._is_datetype():
+ _maybe_format_dateaxis(ax, *self._make_formatter_locator())
+ left, right = _get_xlim(lines)
+ print 'xlim: ', left, right
+ ax.set_xlim(left, right)
+
self._make_legend(lines, labels)
def _make_ts_plot(self, data, **kwargs):
@@ -1909,6 +1937,23 @@ def on_right(i):
return fig, axes
+def _maybe_format_dateaxis(ax, formatter, locator):
+ from matplotlib import pylab
+ ax.xaxis.set_major_locator(locator)
+ ax.xaxis.set_major_formatter(formatter)
+ pylab.draw_if_interactive()
+
+
+def _get_xlim(lines):
+ import pandas.tseries.converter as conv
+ left, right = np.inf, -np.inf
+ for l in lines:
+ x = l.get_xdata()
+ left = min(conv._dt_to_float_ordinal(x[0]), left)
+ right = max(conv._dt_to_float_ordinal(x[-1]), right)
+ return left, right
+
+
if __name__ == '__main__':
# import pandas.rpy.common as com
# sales = com.load_data('sanfrancisco.home.sales', package='nutshell')
diff --git a/pandas/tseries/converter.py b/pandas/tseries/converter.py
index 78455a5e46259..485d776b9b70f 100644
--- a/pandas/tseries/converter.py
+++ b/pandas/tseries/converter.py
@@ -22,13 +22,9 @@
def register():
- units.registry[pydt.time] = TimeConverter()
units.registry[lib.Timestamp] = DatetimeConverter()
- units.registry[pydt.date] = DatetimeConverter()
- units.registry[pydt.datetime] = DatetimeConverter()
units.registry[Period] = PeriodConverter()
-
def _to_ordinalf(tm):
tot_sec = (tm.hour * 3600 + tm.minute * 60 + tm.second +
float(tm.microsecond / 1e6))
diff --git a/pandas/tseries/plotting.py b/pandas/tseries/plotting.py
index 70b36ff7ef8c7..20f0393bedfc3 100644
--- a/pandas/tseries/plotting.py
+++ b/pandas/tseries/plotting.py
@@ -22,11 +22,9 @@
from pandas.tseries.converter import (PeriodConverter, TimeSeries_DateLocator,
TimeSeries_DateFormatter)
-units.registry[Period] = PeriodConverter()
#----------------------------------------------------------------------
# Plotting functions and monkey patches
-
def tsplot(series, plotf, **kwargs):
"""
Plots a Series on the given Matplotlib axes or the current axes
| ...e/date/time but keep registration for pandas specific classes Period and Timestamp
| https://api.github.com/repos/pandas-dev/pandas/pulls/2173 | 2012-11-04T20:27:09Z | 2012-11-05T04:06:45Z | null | 2012-11-06T07:17:43Z |
BUG: start_time end_time to_timestamp bugs #2124 #2125 #1764 | diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index d7557e38c1680..763c34717abb1 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -4,7 +4,8 @@
from datetime import datetime, date
import numpy as np
-from pandas.tseries.frequencies import (get_freq_code as _gfc, to_offset,
+import pandas.tseries.offsets as offsets
+from pandas.tseries.frequencies import (get_freq_code as _gfc,
_month_numbers, FreqGroup)
from pandas.tseries.index import DatetimeIndex, Int64Index, Index
from pandas.tseries.tools import parse_time_string
@@ -180,19 +181,21 @@ def asfreq(self, freq, how='E'):
@property
def start_time(self):
- return self.to_timestamp(how='S')
+ return self.to_timestamp('s', how='S')
@property
def end_time(self):
- return self.to_timestamp(how='E')
+ return self.to_timestamp('s', how='E')
- def to_timestamp(self, freq=None, how='S'):
+ def to_timestamp(self, freq=None, how='start'):
"""
- Return the Timestamp at the start/end of the period
+ Return the Timestamp representation of the Period at the target
+ frequency at the specified end (how) of the Period
Parameters
----------
- freq : string or DateOffset, default frequency of PeriodIndex
+ freq : string or DateOffset, default is 'D' if self.freq is week or
+ longer and 'S' otherwise
Target frequency
how: str, default 'S' (start)
'S', 'E'. Can be aliased as case insensitive
@@ -202,20 +205,16 @@ def to_timestamp(self, freq=None, how='S'):
-------
Timestamp
"""
+ how = _validate_end_alias(how)
+
if freq is None:
base, mult = _gfc(self.freq)
- how = _validate_end_alias(how)
- if how == 'S':
- base = _freq_mod.get_to_timestamp_base(base)
- freq = _freq_mod._get_freq_str(base)
- new_val = self.asfreq(freq, how)
- else:
- new_val = self
- else:
- base, mult = _gfc(freq)
- new_val = self.asfreq(freq, how)
+ freq = _freq_mod.get_to_timestamp_base(base)
+
+ base, mult = _gfc(freq)
+ val = self.asfreq(freq, how)
- dt64 = plib.period_ordinal_to_dt64(new_val.ordinal, base)
+ dt64 = plib.period_ordinal_to_dt64(val.ordinal, base)
return Timestamp(dt64)
year = _period_field_accessor('year', 0)
@@ -765,7 +764,8 @@ def to_timestamp(self, freq=None, how='start'):
Parameters
----------
- freq : string or DateOffset, default 'D'
+ freq : string or DateOffset, default 'D' for week or longer, 'S'
+ otherwise
Target frequency
how : {'s', 'e', 'start', 'end'}
@@ -773,12 +773,14 @@ def to_timestamp(self, freq=None, how='start'):
-------
DatetimeIndex
"""
+ how = _validate_end_alias(how)
+
if freq is None:
base, mult = _gfc(self.freq)
- new_data = self
- else:
- base, mult = _gfc(freq)
- new_data = self.asfreq(freq, how)
+ freq = _freq_mod.get_to_timestamp_base(base)
+
+ base, mult = _gfc(freq)
+ new_data = self.asfreq(freq, how)
new_data = plib.periodarr_to_dt64arr(new_data.values, base)
return DatetimeIndex(new_data, freq='infer', name=self.name)
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index 41dd949620fe4..9fcd8f630bcd2 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -215,12 +215,12 @@ def test_to_timestamp(self):
start_ts = p.to_timestamp(how='S')
aliases = ['s', 'StarT', 'BEGIn']
for a in aliases:
- self.assertEquals(start_ts, p.to_timestamp(how=a))
+ self.assertEquals(start_ts, p.to_timestamp('D', how=a))
end_ts = p.to_timestamp(how='E')
aliases = ['e', 'end', 'FINIsH']
for a in aliases:
- self.assertEquals(end_ts, p.to_timestamp(how=a))
+ self.assertEquals(end_ts, p.to_timestamp('D', how=a))
from_lst = ['A', 'Q', 'M', 'W', 'B',
'D', 'H', 'Min', 'S']
@@ -231,7 +231,7 @@ def test_to_timestamp(self):
self.assertEquals(p.start_time, p.to_timestamp(how='S'))
- self.assertEquals(p.end_time, p.to_timestamp(how='E'))
+ self.assertEquals(p.end_time, p.to_timestamp('s', how='E'))
# Frequency other than daily
@@ -245,8 +245,8 @@ def test_to_timestamp(self):
expected = datetime(1985, 12, 31, 23, 59)
self.assertEquals(result, expected)
- result = p.to_timestamp('S', how='end')
- expected = datetime(1985, 12, 31, 23, 59, 59)
+ result = p.to_timestamp(how='end')
+ expected = datetime(1985, 12, 31)
self.assertEquals(result, expected)
expected = datetime(1985, 1, 1)
@@ -272,28 +272,30 @@ def test_start_time(self):
def test_end_time(self):
p = Period('2012', freq='A')
- xp = datetime(2012, 12, 31)
+ xp = datetime(2012, 12, 31, 23, 59, 59)
self.assertEquals(xp, p.end_time)
p = Period('2012', freq='Q')
- xp = datetime(2012, 3, 31)
+ xp = datetime(2012, 3, 31, 23, 59, 59)
self.assertEquals(xp, p.end_time)
p = Period('2012', freq='M')
- xp = datetime(2012, 1, 31)
+ xp = datetime(2012, 1, 31, 23, 59, 59)
self.assertEquals(xp, p.end_time)
- xp = datetime(2012, 1, 1)
- freq_lst = ['D', 'H', 'T', 'S']
- for f in freq_lst:
- p = Period('2012', freq=f)
- self.assertEquals(p.end_time, xp)
+ xp = datetime(2012, 1, 1, 23, 59, 59)
+ p = Period('2012', freq='D')
+ self.assertEquals(p.end_time, xp)
+
+ xp = datetime(2012, 1, 1, 0, 59, 59)
+ p = Period('2012', freq='H')
+ self.assertEquals(p.end_time, xp)
self.assertEquals(Period('2012', freq='B').end_time,
- datetime(2011, 12, 30))
+ datetime(2011, 12, 30, 23, 59, 59))
self.assertEquals(Period('2012', freq='W').end_time,
- datetime(2012, 1, 1))
+ datetime(2012, 1, 1, 23, 59, 59))
def test_properties_annually(self):
@@ -1200,12 +1202,12 @@ def test_to_timestamp(self):
series = Series(1, index=index, name='foo')
exp_index = date_range('1/1/2001', end='12/31/2009', freq='A-DEC')
- result = series.to_timestamp('D', 'end')
+ result = series.to_timestamp(how='end')
self.assert_(result.index.equals(exp_index))
self.assertEquals(result.name, 'foo')
exp_index = date_range('1/1/2001', end='1/1/2009', freq='AS-DEC')
- result = series.to_timestamp('D', 'start')
+ result = series.to_timestamp(how='start')
self.assert_(result.index.equals(exp_index))
@@ -1230,6 +1232,15 @@ def _get_with_delta(delta, freq='A-DEC'):
self.assertRaises(ValueError, index.to_timestamp, '5t')
+ index = PeriodIndex(freq='H', start='1/1/2001', end='1/2/2001')
+ series = Series(1, index=index, name='foo')
+
+ exp_index = date_range('1/1/2001 00:59:59', end='1/2/2001 00:59:59',
+ freq='H')
+ result = series.to_timestamp(how='end')
+ self.assert_(result.index.equals(exp_index))
+ self.assertEquals(result.name, 'foo')
+
def test_to_timestamp_quarterly_bug(self):
years = np.arange(1960, 2000).repeat(4)
quarters = np.tile(range(1, 5), 40)
| @wesm can you review this plz?
The main change is that I have to_timestamp default to second frequency now, to get the first and last second of the Period. This doesn't **quite** solve #2125 since Timestamp resolution goes down to nanos. So the question is: is it appropriate to add 999999999ns to the end_time when ns doesn't exist as a period freq?
| https://api.github.com/repos/pandas-dev/pandas/pulls/2170 | 2012-11-03T20:49:25Z | 2012-11-04T21:18:12Z | 2012-11-04T21:18:12Z | 2014-07-13T09:26:33Z |
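The convention the updated tests encode — at the second-frequency default, `end_time` is the last whole second of the period — can be sketched with plain `datetime` (a simplified illustration, not the pandas code):

```python
from datetime import datetime, timedelta

def period_end_time(next_period_start):
    # one second before the next period begins
    return next_period_start - timedelta(seconds=1)

period_end_time(datetime(2013, 1, 1))     # annual 2012 -> 2012-12-31 23:59:59
period_end_time(datetime(2012, 1, 1, 1))  # hourly      -> 2012-01-01 00:59:59
```

This matches the new expectations in `test_end_time`, e.g. `Period('2012', freq='A').end_time == datetime(2012, 12, 31, 23, 59, 59)`.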
Plot color | diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py
index efb3252a66209..35786c242082b 100644
--- a/pandas/tests/test_graphics.py
+++ b/pandas/tests/test_graphics.py
@@ -107,7 +107,7 @@ def test_bar_linewidth(self):
self.assert_(r.get_linewidth() == 2)
@slow
- def test_1rotation(self):
+ def test_rotation(self):
df = DataFrame(np.random.randn(5, 5))
ax = df.plot(rot=30)
for l in ax.get_xticklabels():
@@ -447,6 +447,24 @@ def test_style_by_column(self):
for i, l in enumerate(ax.get_lines()[:len(markers)]):
self.assertEqual(l.get_marker(), markers[i])
+ @slow
+ def test_line_colors(self):
+ import matplotlib.pyplot as plt
+
+ custom_colors = 'rgcby'
+
+ plt.close('all')
+ df = DataFrame(np.random.randn(5, 5))
+
+ ax = df.plot(color=custom_colors)
+
+ lines = ax.get_lines()
+ for i, l in enumerate(lines):
+ xp = custom_colors[i]
+ rs = l.get_color()
+ self.assert_(xp == rs)
+
+
class TestDataFrameGroupByPlots(unittest.TestCase):
@classmethod
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index 25bcff3c54545..23d3df8a9511c 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -2,6 +2,7 @@
# pylint: disable=E1101
from itertools import izip
import datetime
+import warnings
import re
import numpy as np
@@ -852,6 +853,14 @@ class LinePlot(MPLPlot):
def __init__(self, data, **kwargs):
self.mark_right = kwargs.pop('mark_right', True)
MPLPlot.__init__(self, data, **kwargs)
+ if 'color' not in self.kwds and 'colors' in self.kwds:
+ warnings.warn(("'colors' is being deprecated. Please use 'color'"
+ "instead of 'colors'"))
+ colors = self.kwds.pop('colors')
+ self.kwds['color'] = colors
+ if 'color' in self.kwds and isinstance(self.data, Series):
+ #support series.plot(color='green')
+ self.kwds['color'] = [self.kwds['color']]
def _index_freq(self):
from pandas.core.frame import DataFrame
@@ -889,14 +898,12 @@ def _use_dynamic_x(self):
def _get_colors(self):
import matplotlib.pyplot as plt
cycle = ''.join(plt.rcParams.get('axes.color_cycle', list('bgrcmyk')))
- has_colors = 'colors' in self.kwds
- colors = self.kwds.pop('colors', cycle)
- return has_colors, colors
-
- def _maybe_add_color(self, has_colors, colors, kwds, style, i):
- if (not has_colors and
- (style is None or re.match('[a-z]+', style) is None)
- and 'color' not in kwds):
+ has_colors = 'color' in self.kwds
+ colors = self.kwds.get('color', cycle)
+ return colors
+
+ def _maybe_add_color(self, colors, kwds, style, i):
+ if style is None or re.match('[a-z]+', style) is None:
kwds['color'] = colors[i % len(colors)]
def _make_plot(self):
@@ -910,13 +917,13 @@ def _make_plot(self):
x = self._get_xticks(convert_period=True)
plotf = self._get_plot_function()
- has_colors, colors = self._get_colors()
+ colors = self._get_colors()
for i, (label, y) in enumerate(self._iter_data()):
ax = self._get_ax(i)
style = self._get_style(i, label)
kwds = self.kwds.copy()
- self._maybe_add_color(has_colors, colors, kwds, style, i)
+ self._maybe_add_color(colors, kwds, style, i)
label = com.pprint_thing(label) # .encode('utf-8')
@@ -944,7 +951,7 @@ def _make_plot(self):
def _make_ts_plot(self, data, **kwargs):
from pandas.tseries.plotting import tsplot
kwargs = kwargs.copy()
- has_colors, colors = self._get_colors()
+ colors = self._get_colors()
plotf = self._get_plot_function()
lines = []
@@ -960,7 +967,7 @@ def to_leg_label(label, i):
style = self.style or ''
label = com.pprint_thing(self.label)
kwds = kwargs.copy()
- self._maybe_add_color(has_colors, colors, kwds, style, 0)
+ self._maybe_add_color(colors, kwds, style, 0)
newlines = tsplot(data, plotf, ax=ax, label=label,
style=self.style, **kwds)
@@ -975,7 +982,7 @@ def to_leg_label(label, i):
style = self._get_style(i, col)
kwds = kwargs.copy()
- self._maybe_add_color(has_colors, colors, kwds, style, i)
+ self._maybe_add_color(colors, kwds, style, i)
newlines = tsplot(data[col], plotf, ax=ax, label=label,
style=style, **kwds)
@@ -1096,7 +1103,7 @@ def f(ax, x, y, w, start=None, **kwds):
return f
def _make_plot(self):
- colors = self.kwds.get('color', 'brgyk')
+ colors = self.kwds.pop('color', 'brgyk')
rects = []
labels = []
| Deprecate 'colors' parameter in LinePlot. Otherwise frame.plot(color='rgb') fails but frame.plot(kind='bar', color='rgb') works.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2169 | 2012-11-03T15:38:56Z | 2012-11-03T16:22:45Z | 2012-11-03T16:22:45Z | 2012-11-03T16:22:45Z |
BUG: xs MultiIndex integer level problem #2107 | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index bd220a008c9e8..d389d8c93cd2c 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2025,6 +2025,11 @@ def xs(self, key, axis=0, level=None, copy=True):
indexer = tuple(indexer)
else:
indexer = loc
+ lev_num = labels._get_level_number(level)
+ lev = labels.levels[lev_num]
+ is_int_type = com.is_integer_dtype(lev)
+ if is_int_type:
+ indexer = self.index[loc]
result = self.ix[indexer]
setattr(result, result._get_axis_name(axis), new_ax)
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index 34872ea572f81..acd1e3251e64c 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -359,6 +359,18 @@ def test_xs_level_multiple(self):
expected = df.xs('a').xs(4, level='four')
assert_frame_equal(result, expected)
+ #GH2107
+ import itertools
+ from string import letters
+ dates = range(20111201, 20111205)
+ ids = letters[:5]
+ idx = MultiIndex.from_tuples([x for x in itertools.product(dates, ids)])
+ idx.names = ['date', 'secid']
+ df = DataFrame(np.random.randn(len(idx), 3), idx, ['X', 'Y', 'Z'])
+ rs = df.xs(20111201, level='date')
+ xp = df.ix[20111201, :]
+ assert_frame_equal(rs, xp)
+
def test_xs_level0(self):
from pandas import read_table
from StringIO import StringIO
| purely positional indexing would solve this problem better
| https://api.github.com/repos/pandas-dev/pandas/pulls/2166 | 2012-11-03T02:21:51Z | 2012-11-03T15:23:42Z | null | 2012-11-03T15:23:42Z |
BUG: exclude timedelta64 from integer dtypes #2146 | diff --git a/pandas/core/common.py b/pandas/core/common.py
index c52de85670c5e..a4d7d1bf9a898 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -90,6 +90,8 @@ def _isnull_ndarraylike(obj):
elif values.dtype == np.dtype('M8[ns]'):
# this is the NaT pattern
result = values.view('i8') == lib.iNaT
+ elif issubclass(values.dtype.type, np.timedelta64):
+ result = -np.isfinite(values.view('i8'))
else:
result = -np.isfinite(obj)
return result
@@ -800,7 +802,8 @@ def is_integer_dtype(arr_or_dtype):
else:
tipo = arr_or_dtype.dtype.type
return (issubclass(tipo, np.integer) and not
- issubclass(tipo, np.datetime64))
+ (issubclass(tipo, np.datetime64) or
+ issubclass(tipo, np.timedelta64)))
def is_datetime64_dtype(arr_or_dtype):
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 30a35a5bea269..9a2e148257229 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -879,6 +879,11 @@ def test_float_trim_zeros(self):
for line in repr(Series(vals)).split('\n'):
self.assert_('+10' in line)
+ def test_timedelta64(self):
+ Series(np.array([1100, 20], dtype='timedelta64[s]')).to_string()
+ #check this works
+ #GH2146
+
class TestEngFormatter(unittest.TestCase):
| com.is_integer_dtype now returns False for dtype timedelta64 #2146
| https://api.github.com/repos/pandas-dev/pandas/pulls/2165 | 2012-11-02T17:39:08Z | 2012-11-02T18:20:09Z | null | 2014-06-25T18:03:15Z |
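The fix can be mirrored as a standalone type check (a sketch, not pandas' actual `com.is_integer_dtype`): numpy's `timedelta64` can subclass `np.integer`, so it slipped through the old test until it was excluded explicitly:

```python
import numpy as np

def is_integer_type(tipo):
    # exclude datetime64/timedelta64, which may subclass np.integer
    return (issubclass(tipo, np.integer)
            and not issubclass(tipo, np.datetime64)
            and not issubclass(tipo, np.timedelta64))

is_integer_type(np.int64)        # True
is_integer_type(np.timedelta64)  # False
is_integer_type(np.float64)      # False
```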
ENH: rank na_options top and bottom #1508 | diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index cb7314a26689f..21915da2c4402 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -191,11 +191,12 @@ def rank(values, axis=0, method='average', na_option='keep',
"""
if values.ndim == 1:
f, values = _get_data_algo(values, _rank1d_functions)
- ranks = f(values, ties_method=method, ascending=ascending)
+ ranks = f(values, ties_method=method, ascending=ascending,
+ na_option=na_option)
elif values.ndim == 2:
f, values = _get_data_algo(values, _rank2d_functions)
ranks = f(values, axis=axis, ties_method=method,
- ascending=ascending)
+ ascending=ascending, na_option=na_option)
return ranks
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 0f270c6f6e546..902b3a19a6bfa 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4704,8 +4704,10 @@ def rank(self, axis=0, numeric_only=None, method='average',
min: lowest rank in group
max: highest rank in group
first: ranks assigned in order they appear in the array
- na_option : {'keep'}
+ na_option : {'keep', 'top', 'bottom'}
keep: leave NA values where they are
+ top: smallest rank if ascending
+ bottom: smallest rank if descending
ascending : boolean, default True
False for ranks by high (1) to low (N)
@@ -4716,7 +4718,7 @@ def rank(self, axis=0, numeric_only=None, method='average',
if numeric_only is None:
try:
ranks = algos.rank(self.values, axis=axis, method=method,
- ascending=ascending)
+ ascending=ascending, na_option=na_option)
return DataFrame(ranks, index=self.index, columns=self.columns)
except TypeError:
numeric_only = True
@@ -4726,7 +4728,7 @@ def rank(self, axis=0, numeric_only=None, method='average',
else:
data = self
ranks = algos.rank(data.values, axis=axis, method=method,
- ascending=ascending)
+ ascending=ascending, na_option=na_option)
return DataFrame(ranks, index=data.index, columns=data.columns)
def to_timestamp(self, freq=None, how='start', axis=0, copy=True):
diff --git a/pandas/src/stats.pyx b/pandas/src/stats.pyx
index f4d87f411a97e..0fc7d30713e79 100644
--- a/pandas/src/stats.pyx
+++ b/pandas/src/stats.pyx
@@ -70,7 +70,8 @@ cdef _take_2d_object(ndarray[object, ndim=2] values,
return result
-def rank_1d_float64(object in_arr, ties_method='average', ascending=True):
+def rank_1d_float64(object in_arr, ties_method='average', ascending=True,
+ na_option='keep'):
"""
Fast NaN-friendly version of scipy.stats.rankdata
"""
@@ -86,7 +87,7 @@ def rank_1d_float64(object in_arr, ties_method='average', ascending=True):
values = np.asarray(in_arr).copy()
- if ascending:
+ if ascending ^ (na_option == 'top'):
nan_value = np.inf
else:
nan_value = -np.inf
@@ -115,7 +116,7 @@ def rank_1d_float64(object in_arr, ties_method='average', ascending=True):
sum_ranks += i + 1
dups += 1
val = sorted_data[i]
- if val == nan_value:
+ if (val == nan_value) and (na_option == 'keep'):
ranks[argsorted[i]] = nan
continue
if i == n - 1 or fabs(sorted_data[i + 1] - val) > FP_ERR:
@@ -138,7 +139,8 @@ def rank_1d_float64(object in_arr, ties_method='average', ascending=True):
return ranks
-def rank_1d_int64(object in_arr, ties_method='average', ascending=True):
+def rank_1d_int64(object in_arr, ties_method='average', ascending=True,
+ na_option='keep'):
"""
Fast NaN-friendly version of scipy.stats.rankdata
"""
@@ -198,7 +200,7 @@ def rank_1d_int64(object in_arr, ties_method='average', ascending=True):
def rank_2d_float64(object in_arr, axis=0, ties_method='average',
- ascending=True):
+ ascending=True, na_option='keep'):
"""
Fast NaN-friendly version of scipy.stats.rankdata
"""
@@ -219,7 +221,7 @@ def rank_2d_float64(object in_arr, axis=0, ties_method='average',
else:
values = in_arr.copy()
- if ascending:
+ if ascending ^ (na_option == 'top'):
nan_value = np.inf
else:
nan_value = -np.inf
@@ -249,7 +251,7 @@ def rank_2d_float64(object in_arr, axis=0, ties_method='average',
sum_ranks += j + 1
dups += 1
val = values[i, j]
- if val == nan_value:
+ if val == nan_value and na_option == 'keep':
ranks[i, argsorted[i, j]] = nan
continue
if j == k - 1 or fabs(values[i, j + 1] - val) > FP_ERR:
@@ -277,7 +279,7 @@ def rank_2d_float64(object in_arr, axis=0, ties_method='average',
def rank_2d_int64(object in_arr, axis=0, ties_method='average',
- ascending=True):
+ ascending=True, na_option='keep'):
"""
Fast NaN-friendly version of scipy.stats.rankdata
"""
@@ -345,7 +347,7 @@ def rank_2d_int64(object in_arr, axis=0, ties_method='average',
def rank_1d_generic(object in_arr, bint retry=1, ties_method='average',
- ascending=True):
+ ascending=True, na_option='keep'):
"""
Fast NaN-friendly version of scipy.stats.rankdata
"""
@@ -365,7 +367,7 @@ def rank_1d_generic(object in_arr, bint retry=1, ties_method='average',
if values.dtype != np.object_:
values = values.astype('O')
- if ascending:
+ if ascending ^ (na_option == 'top'):
# always greater than everything
nan_value = Infinity()
else:
@@ -401,7 +403,7 @@ def rank_1d_generic(object in_arr, bint retry=1, ties_method='average',
sum_ranks += i + 1
dups += 1
val = util.get_value_at(sorted_data, i)
- if val is nan_value:
+ if val is nan_value and na_option=='keep':
ranks[argsorted[i]] = nan
continue
if (i == n - 1 or
@@ -450,7 +452,7 @@ class NegInfinity(object):
__cmp__ = _return_true
def rank_2d_generic(object in_arr, axis=0, ties_method='average',
- ascending=True):
+ ascending=True, na_option='keep'):
"""
Fast NaN-friendly version of scipy.stats.rankdata
"""
@@ -475,7 +477,7 @@ def rank_2d_generic(object in_arr, axis=0, ties_method='average',
if values.dtype != np.object_:
values = values.astype('O')
- if ascending:
+ if ascending ^ (na_option == 'top'):
# always greater than everything
nan_value = Infinity()
else:
@@ -510,7 +512,7 @@ def rank_2d_generic(object in_arr, axis=0, ties_method='average',
dups = sum_ranks = infs = 0
for j in range(k):
val = values[i, j]
- if val is nan_value:
+ if val is nan_value and na_option == 'keep':
ranks[i, argsorted[i, j]] = nan
infs += 1
continue
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index c989e8c981231..d0cdf07c4a2c2 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -6444,6 +6444,73 @@ def test_rank2(self):
expected = self.mixed_frame.rank(1, numeric_only=True)
assert_frame_equal(result, expected)
+ def test_rank_na_option(self):
+ from pandas.compat.scipy import rankdata
+
+ self.frame['A'][::2] = np.nan
+ self.frame['B'][::3] = np.nan
+ self.frame['C'][::4] = np.nan
+ self.frame['D'][::5] = np.nan
+
+ #bottom
+ ranks0 = self.frame.rank(na_option='bottom')
+ ranks1 = self.frame.rank(1, na_option='bottom')
+
+ fvals = self.frame.fillna(np.inf).values
+
+ exp0 = np.apply_along_axis(rankdata, 0, fvals)
+ exp1 = np.apply_along_axis(rankdata, 1, fvals)
+
+ assert_almost_equal(ranks0.values, exp0)
+ assert_almost_equal(ranks1.values, exp1)
+
+ #top
+ ranks0 = self.frame.rank(na_option='top')
+ ranks1 = self.frame.rank(1, na_option='top')
+
+ fval0 = self.frame.fillna((self.frame.min() - 1).to_dict()).values
+ fval1 = self.frame.T
+ fval1 = fval1.fillna((fval1.min() - 1).to_dict()).T
+ fval1 = fval1.fillna(np.inf).values
+
+ exp0 = np.apply_along_axis(rankdata, 0, fval0)
+ exp1 = np.apply_along_axis(rankdata, 1, fval1)
+
+ assert_almost_equal(ranks0.values, exp0)
+ assert_almost_equal(ranks1.values, exp1)
+
+ #descending
+
+ #bottom
+ ranks0 = self.frame.rank(na_option='top', ascending=False)
+ ranks1 = self.frame.rank(1, na_option='top', ascending=False)
+
+ fvals = self.frame.fillna(np.inf).values
+
+ exp0 = np.apply_along_axis(rankdata, 0, -fvals)
+ exp1 = np.apply_along_axis(rankdata, 1, -fvals)
+
+ assert_almost_equal(ranks0.values, exp0)
+ assert_almost_equal(ranks1.values, exp1)
+
+ #descending
+
+ #top
+ ranks0 = self.frame.rank(na_option='bottom', ascending=False)
+ ranks1 = self.frame.rank(1, na_option='bottom', ascending=False)
+
+ fval0 = self.frame.fillna((self.frame.min() - 1).to_dict()).values
+ fval1 = self.frame.T
+ fval1 = fval1.fillna((fval1.min() - 1).to_dict()).T
+ fval1 = fval1.fillna(np.inf).values
+
+ exp0 = np.apply_along_axis(rankdata, 0, -fval0)
+ exp1 = np.apply_along_axis(rankdata, 1, -fval1)
+
+ assert_almost_equal(ranks0.values, exp0)
+ assert_almost_equal(ranks1.values, exp1)
+
+
def test_describe(self):
desc = self.tsframe.describe()
desc = self.mixed_frame.describe()
| #1508
| https://api.github.com/repos/pandas-dev/pandas/pulls/2159 | 2012-11-02T02:04:40Z | 2012-11-02T16:55:47Z | null | 2014-06-12T04:55:31Z |
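The core idea of the patch — fill NaNs with ±inf so they sort to one end, then optionally restore NaN ranks — can be sketched for the ascending 1-D case in plain numpy (a simplified illustration, not the Cython code above):

```python
import numpy as np

def rank_1d(values, na_option='keep'):
    """Average-rank ascending; NaNs: 'keep' -> NaN rank,
    'top' -> smallest ranks, 'bottom' -> largest ranks."""
    vals = np.asarray(values, dtype=np.float64).copy()
    mask = np.isnan(vals)
    # 'top' sends NaNs below everything; 'keep'/'bottom' send them above
    vals[mask] = -np.inf if na_option == 'top' else np.inf
    order = vals.argsort(kind='mergesort')
    ranks = np.empty(len(vals), dtype=np.float64)
    ranks[order] = np.arange(1, len(vals) + 1)
    # average the ranks within each group of ties
    for v in np.unique(vals):
        tied = vals == v
        ranks[tied] = ranks[tied].mean()
    if na_option == 'keep':
        ranks[mask] = np.nan
    return ranks

rank_1d([3.0, np.nan, 1.0], na_option='bottom')  # [2., 3., 1.]
rank_1d([3.0, np.nan, 1.0], na_option='top')     # [3., 1., 2.]
```

The patch handles descending order with the `ascending ^ (na_option == 'top')` XOR instead of a separate branch; the sketch keeps only the ascending case for brevity.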
Ts pivot hourly | diff --git a/pandas/tseries/pivot.py b/pandas/tseries/pivot.py
new file mode 100644
index 0000000000000..9792ef1f60550
--- /dev/null
+++ b/pandas/tseries/pivot.py
@@ -0,0 +1,207 @@
+import numpy as np
+
+from pandas.core.frame import DataFrame
+import pandas.core.nanops as nanops
+from pandas.tseries.util import isleapyear
+from pandas.tseries.index import date_range
+
+def pivot_annual_h(series, freq=None, dt_index=False):
+ """
+ Group a series by years, taking leap years into account.
+
+ The output has as many rows as distinct years in the original series,
+ and as many columns as the length of a leap year in the units corresponding
+ to the original frequency (366 for daily frequency, 366*24 for hourly...).
+ The fist column of the output corresponds to Jan. 1st, 00:00:00,
+ while the last column corresponds to Dec, 31st, 23:59:59.
+ Entries corresponding to Feb. 29th are masked for non-leap years.
+
+ For example, if the initial series has a daily frequency, the 59th column
+ of the output always corresponds to Feb. 28th, the 61st column to Mar. 1st,
+ and the 60th column is masked for non-leap years.
+ With a hourly initial frequency, the (59*24)th column of the output always
+ correspond to Feb. 28th 23:00, the (61*24)th column to Mar. 1st, 00:00, and
+ the 24 columns between (59*24) and (61*24) are masked.
+
+ If the original frequency is less than daily, the output is equivalent to
+ ``series.convert('A', func=None)``.
+
+ Parameters
+ ----------
+ series : TimeSeries
+ freq : string or None, default None
+
+ Returns
+ -------
+ annual : DataFrame
+
+
+ """
+ #TODO: test like original pandas and the position of first and last value in arrays
+ #TODO: reduce number of hardcoded values scattered all around.
+ index = series.index
+ year = index.year
+ years = nanops.unique1d(year)
+
+ if freq is not None:
+ freq = freq.upper()
+ else:
+ freq = series.index.freq
+
+ if freq == 'H':
+
+ ##basics
+
+        #total number of hours in a leap year
+ total_hoy_leap = (year_length(series.index.freqstr))
+
+ #list of all hours in a leap year
+ hoy_leap_list = range(1, (total_hoy_leap + 1 ))
+
+
+
+ values = np.empty((total_hoy_leap, len(years)), dtype=series.dtype)
+ values.fill(np.nan)
+
+ dummy_df = DataFrame(values, index=hoy_leap_list,
+ columns=years)
+
+ ##get offset for leap hours
+
+ #see:
+ #http://stackoverflow.com/questions/2004364/increment-numpy-array-with-repeated-indices
+ #1994-02-28 23:00:00 -> index 1415
+ ind_z = np.array(range(0, 8760))
+ ind_i = np.array(range(1416,8760 ))
+
+ ind_t = ind_z.copy()
+ ind_t[ind_i]+=24
+
+ #TODO: beautify variable names
+ for year in years:
+
+ # select data for the respective year
+ ser_sel = series[ series.index.year == year]
+ info = (ser_sel).values
+
+
+
+ if isleapyear(year):
+ dummy_df[year] = info
+ else:
+ data = np.empty((total_hoy_leap), dtype=series.dtype)
+ data.fill(np.nan)
+
+ ser_sel = series[ series.index.year == year]
+ info = (ser_sel).values
+
+ data.put(ind_t, (series[ series.index.year == year]).values)
+
+ dummy_df[year] = data
+
+ res_df = dummy_df
+
+    #assign a datetime index, CAUTION: the year is definitely wrong!
+ if dt_index:
+ rng = default_rng()
+ res_df = DataFrame(res_df.values, index=rng,
+ columns=res_df.columns)
+
+ return res_df
+
+#TODO: use pivot_annual for D & M and minute in the same fashion
+ if freq == 'D':
+        raise NotImplementedError("%s: use pandas.tseries.util.pivot_annual" % freq)
+
+ if freq == 'M':
+        raise NotImplementedError("%s: use pandas.tseries.util.pivot_annual" % freq)
+
+ else:
+ raise NotImplementedError(freq)
+
+
+ return res_df
+
+
+### timeseries pivoting helper
+
+def last_col2front(df, col_no=1):
+ """shifts the last column of a data frame to the front
+
+ increase col_no to shift more cols
+ """
+    cols = df.columns.tolist()
+ #increase index value to 2+ if more columns are to be shifted
+ cols = cols[-col_no:] + cols[:-col_no]
+ df = df[cols]
+
+ return df
+
+
+def extended_info(df, time_cols=True, aggreg=True, aggreg_func=None,
+ datetime_index=False):
+ """add extended information to a timeseries pivot
+ """
+
+ df_extended = df.copy()
+ #perform the following only on the data columns
+ cols = df_extended.columns
+ #TODO: add standard aggregation
+ #TODO: make function be set by argument
+ #TODO: is there no a SM describe function?
+ #TODO: Maybe use http://pandas.pydata.org/pandas-docs/dev/basics.html#summarizing-data-describe
+ if aggreg:
+
+ df_extended['mean'] = df_extended[cols].mean(1)
+ df_extended['sum'] = df_extended[cols].sum(1)
+ df_extended['min'] = df_extended[cols].min(1)
+ df_extended['max'] = df_extended[cols].max(1)
+ df_extended['std'] = df_extended[cols].std(1)
+
+ #add some metadata
+ #TODO: add function to make index a datetime with the argument above using the rng below
+ #TODO: convert the range to lower frequencies and reuse the function.
+ rng = default_rng()
+ df_extended['doy'] = rng.dayofyear
+# df_extended = last_col2front(df_extended)
+ df_extended['month'] = rng.month
+# df_extended = last_col2front(df_extended)
+ df_extended['day'] = rng.day
+# df_extended = last_col2front(df_extended)
+ df_extended['hour'] = rng.hour + 1
+ df_extended = last_col2front(df_extended, col_no=4)
+
+ return df_extended
+
+###Timeseries convenience / helper functions
+
+
+def year_length(freq, leap=True):
+ """helper function for year length at different frequencies.
+ to be expanded
+ """
+
+ daysofyear_leap = 366
+ daysofyear_nonleap = 365
+
+ if freq == 'H':
+ if leap:
+ length = 24 * daysofyear_leap
+ else:
+ length = 24 * daysofyear_nonleap
+
+ return length
+
+def default_rng(freq='H', leap=True):
+ """create default ranges
+ """
+
+ if leap:
+ total_hoy_leap = (year_length(freq='H'))
+ rng = date_range('1/1/2012', periods=total_hoy_leap, freq='H')
+
+ else:
+ total_hoy_nonleap = (year_length(freq='H'))
+ rng = date_range('1/1/2011', periods=total_hoy_nonleap, freq='H')
+
+ return rng
diff --git a/pandas/tseries/tests/test_util.py b/pandas/tseries/tests/test_util.py
index 1b634d2e4bf24..2548714fe76ec 100644
--- a/pandas/tseries/tests/test_util.py
+++ b/pandas/tseries/tests/test_util.py
@@ -10,6 +10,75 @@
from pandas.tseries.tools import normalize_date
from pandas.tseries.util import pivot_annual, isleapyear
+from pandas.tseries import pivot
+
+
+class TestPivotAnnualHourly(unittest.TestCase):
+ """
+    New pandas version of scikits.timeseries pivot_annual for hourly data, with a new shape
+ """
+ def test_hourly(self):
+ rng_hourly = date_range('1/1/1994', periods=(18* 8760 + 4*24), freq='H')
+ data_hourly = np.random.randint(100, high=350, size=rng_hourly.size)
+ data_hourly = data_hourly.astype('float64')
+ ts_hourly = Series(data_hourly, index=rng_hourly)
+
+ annual = pivot.pivot_annual_h(ts_hourly, dt_index=True)
+
+ ### general
+ ##test first column: if first value and data are the same as first value of timeseries
+ #date
+ def get_mdh(DatetimeIndex, index):
+ #(m, d, h)
+ mdh_tuple = (DatetimeIndex.month[index], DatetimeIndex.day[index],
+ DatetimeIndex.hour[index])
+ return mdh_tuple
+# ts_hourly.index.month[1], ts_hourly.index.month[1], ts_hourly.index.month[1]
+
+ assert get_mdh(ts_hourly.index, 1) == get_mdh(annual.index, 1)
+        #are the last dates of the ts identical with the dates in the last row of the last column?
+ assert get_mdh(ts_hourly.index, -1) == get_mdh(annual.index, (annual.index.size -1))
+ #first values of the ts identical with the first col?
+ assert ts_hourly[0] == annual.ix[0].values[0]
+ #last values of the ts identical with the last col and last row of the df?
+ assert ts_hourly[-1] == annual.ix[-1].values[-1]
+ #### index
+ ##test if index has the right length
+ assert annual.index.size == 8784
+ ##test last column: if first value and data are the same as first value of timeseries
+ ### leap
+ ##test leap offset
+        #leap year: 1996 - are the values of the ts and the pivoted column equal?
+ ser96_leap = ts_hourly[(ts_hourly.index.year == 1996) &
+ (ts_hourly.index.month == 2) &
+ (ts_hourly.index.day == 29)
+ ]
+
+ df96 = annual[1996]
+ df96_leap = df96[(df96.index.month == 2) & (df96.index.day == 29)]
+ np.testing.assert_equal(ser96_leap.values, df96_leap.values)
+ #non-leap year: 1994 - are all values NaN for day 29.02?
+ nan_arr = np.empty(24)
+ nan_arr.fill(np.nan)
+ df94 = annual[1994]
+ df94_noleap = df94[(df94.index.month == 2) & (df94.index.day == 29)]
+ np.testing.assert_equal(df94_noleap.values, nan_arr)
+        ### extended functionality
+ ext = pivot.extended_info(annual)
+ ## descriptive statistics
+ #mean
+ np.testing.assert_equal(annual.mean(1).values, ext['mean'].values)
+ np.testing.assert_equal(annual.sum(1).values, ext['sum'].values)
+ np.testing.assert_equal(annual.min(1).values, ext['min'].values)
+ np.testing.assert_equal(annual.max(1).values, ext['max'].values)
+ np.testing.assert_equal(annual.std(1).values, ext['std'].values)
+
+ ## additional time columns for easier filtering
+ np.testing.assert_equal(ext['doy'].values, annual.index.dayofyear)
+ np.testing.assert_equal(ext['day'].values, annual.index.day)
+ #the hour is incremented by 1
+ np.testing.assert_equal(ext['hour'].values, (annual.index.hour +1))
+
class TestPivotAnnual(unittest.TestCase):
"""
@@ -36,6 +105,7 @@ def test_daily(self):
leaps.index = leaps.index.year
tm.assert_series_equal(annual[day].dropna(), leaps)
+
def test_weekly(self):
pass
diff --git a/pandas/tseries/util.py b/pandas/tseries/util.py
index 0702bc40389c9..853784f6ece4e 100644
--- a/pandas/tseries/util.py
+++ b/pandas/tseries/util.py
@@ -2,6 +2,7 @@
from pandas.core.frame import DataFrame
import pandas.core.nanops as nanops
+from pandas.tseries.util import isleapyear
def pivot_annual(series, freq=None):
| Now I figured it out.
Here is the PR for #2153
| https://api.github.com/repos/pandas-dev/pandas/pulls/2154 | 2012-11-01T17:31:51Z | 2012-11-05T23:25:14Z | null | 2013-12-04T00:57:52Z |
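The hard-coded 8760/8784 hour counts in `pivot_annual_h` follow directly from the calendar. A minimal sketch of the same arithmetic using the stdlib instead of `pandas.tseries.util.isleapyear` (`year_length_hours` is an illustrative name, not part of the PR):

```python
import calendar

def year_length_hours(year):
    # hours in a calendar year: 24 * 366 for leap years, 24 * 365 otherwise
    days = 366 if calendar.isleap(year) else 365
    return 24 * days
```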
BUG: Timestamp.astimezone returns datetime.datetime #2060 | diff --git a/pandas/src/datetime.pyx b/pandas/src/datetime.pyx
index 1fe36d95014b6..bdfa04eaca441 100644
--- a/pandas/src/datetime.pyx
+++ b/pandas/src/datetime.pyx
@@ -238,6 +238,8 @@ class Timestamp(_Timestamp):
# Same UTC timestamp, different time zone
return Timestamp(self.value, tz=tz)
+ astimezone = tz_convert
+
def replace(self, **kwds):
return Timestamp(datetime.replace(self, **kwds),
offset=self.offset)
diff --git a/pandas/tseries/tests/test_timezones.py b/pandas/tseries/tests/test_timezones.py
index d7413111a7142..ee14b2ccfd603 100644
--- a/pandas/tseries/tests/test_timezones.py
+++ b/pandas/tseries/tests/test_timezones.py
@@ -137,6 +137,13 @@ def test_tz_localize_dti(self):
freq='L')
self.assertRaises(pytz.NonExistentTimeError, dti.tz_localize, 'US/Eastern')
+ def test_astimezone(self):
+ utc = Timestamp('3/11/2012 22:00', tz='UTC')
+ expected = utc.tz_convert('US/Eastern')
+ result = utc.astimezone('US/Eastern')
+ self.assertEquals(expected, result)
+ self.assert_(isinstance(result, Timestamp))
+
def test_create_with_tz(self):
stamp = Timestamp('3/11/2012 05:00', tz='US/Eastern')
self.assertEquals(stamp.hour, 5)
| https://api.github.com/repos/pandas-dev/pandas/pulls/2152 | 2012-11-01T16:53:37Z | 2012-11-02T16:46:09Z | null | 2012-11-02T16:46:09Z | |
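The one-line fix `astimezone = tz_convert` works because binding an existing function to a second class attribute creates a true alias: both names dispatch to the same code, so `astimezone` now returns a `Timestamp` rather than falling back to `datetime.datetime`. A minimal sketch of the pattern with a stand-in class (`Stamp` is hypothetical, not the real `Timestamp`):

```python
class Stamp(object):
    def tz_convert(self, tz):
        # stand-in for the real conversion logic
        return 'converted to %s' % tz

    # datetime-compatible spelling: same function object under the stdlib
    # name, so both entry points return the subclass's result
    astimezone = tz_convert
```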
ENH: DataFrame where and mask #2109 | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 0f270c6f6e546..c689b62432847 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4832,6 +4832,49 @@ def combineMult(self, other):
"""
return self.mul(other, fill_value=1.)
+ def where(self, cond, other):
+ """
+ Return a DataFrame with the same shape as self and whose corresponding
+ entries are from self where cond is True and otherwise are from other.
+
+
+ Parameters
+ ----------
+ cond: boolean DataFrame or array
+ other: scalar or DataFrame
+
+ Returns
+ -------
+ wh: DataFrame
+ """
+ if isinstance(cond, np.ndarray):
+ if cond.shape != self.shape:
+                raise ValueError('Array conditional must be same shape as self')
+ cond = self._constructor(cond, index=self.index, columns=self.columns)
+ if cond.shape != self.shape:
+ cond = cond.reindex(self.index, columns=self.columns)
+ cond = cond.fillna(False)
+
+ if isinstance(other, DataFrame):
+ _, other = self.align(other, join='left', fill_value=np.nan)
+
+ rs = np.where(cond, self, other)
+ return self._constructor(rs, self.index, self.columns)
+
+ def mask(self, cond):
+ """
+ Returns copy of self whose values are replaced with nan if the
+ corresponding entry in cond is False
+
+ Parameters
+ ----------
+ cond: boolean DataFrame or array
+
+ Returns
+ -------
+ wh: DataFrame
+ """
+ return self.where(cond, np.nan)
_EMPTY_SERIES = Series([])
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index c989e8c981231..3cea6c50f40f3 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -5063,6 +5063,34 @@ def test_align_int_fill_bug(self):
expected = df2 - df2.mean()
assert_frame_equal(result, expected)
+ def test_where(self):
+ df = DataFrame(np.random.randn(5, 3))
+ cond = df > 0
+
+ other1 = df + 1
+ rs = df.where(cond, other1)
+ for k, v in rs.iteritems():
+ assert_series_equal(v, np.where(cond[k], df[k], other1[k]))
+
+ other2 = (df + 1).values
+ rs = df.where(cond, other2)
+ for k, v in rs.iteritems():
+ assert_series_equal(v, np.where(cond[k], df[k], other2[:, k]))
+
+ other5 = np.nan
+ rs = df.where(cond, other5)
+ for k, v in rs.iteritems():
+ assert_series_equal(v, np.where(cond[k], df[k], other5))
+
+ assert_frame_equal(rs, df.mask(cond))
+
+ err1 = (df + 1).values[0:2, :]
+ self.assertRaises(ValueError, df.where, cond, err1)
+
+ err2 = cond.ix[:2, :].values
+ self.assertRaises(ValueError, df.where, err2, other1)
+
+
#----------------------------------------------------------------------
# Transposing
| #2109
| https://api.github.com/repos/pandas-dev/pandas/pulls/2151 | 2012-11-01T16:38:32Z | 2012-11-02T18:08:57Z | null | 2014-06-13T08:10:02Z |
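`DataFrame.where` above is effectively an aligned, label-aware `np.where`. A rough sketch of the elementwise semantics with plain lists (`where_list` is an illustrative name; a scalar `other` is broadcast, mirroring the PR's handling of `np.nan`):

```python
def where_list(cond, a, b):
    # take a[i] where cond[i] is True, otherwise b[i];
    # a scalar b is broadcast, like DataFrame.where's `other` argument
    if not hasattr(b, '__len__'):
        b = [b] * len(a)
    if not (len(cond) == len(a) == len(b)):
        raise ValueError('conditional must be same shape as self')
    return [x if c else y for c, x, y in zip(cond, a, b)]
```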
ENH: at_time and between_time for DataFrame #2131 | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 97b10e532c9de..31c1c2a638376 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -156,6 +156,48 @@ def asfreq(self, freq, method=None, how=None):
from pandas.tseries.resample import asfreq
return asfreq(self, freq, method=method, how=how)
+ def at_time(self, time, asof=False):
+ """
+ Select values at particular time of day (e.g. 9:30AM)
+
+ Parameters
+ ----------
+ time : datetime.time or string
+
+ Returns
+ -------
+ values_at_time : type of caller
+ """
+ try:
+ indexer = self.index.indexer_at_time(time, asof=asof)
+ return self.take(indexer)
+ except AttributeError:
+ raise TypeError('Index must be DatetimeIndex')
+
+ def between_time(self, start_time, end_time, include_start=True,
+ include_end=True):
+ """
+ Select values between particular times of the day (e.g., 9:00-9:30 AM)
+
+ Parameters
+ ----------
+ start_time : datetime.time or string
+ end_time : datetime.time or string
+ include_start : boolean, default True
+ include_end : boolean, default True
+
+ Returns
+ -------
+ values_between_time : type of caller
+ """
+ try:
+ indexer = self.index.indexer_between_time(
+ start_time, end_time, include_start=include_start,
+ include_end=include_end)
+ return self.take(indexer)
+ except AttributeError:
+ raise TypeError('Index must be DatetimeIndex')
+
def resample(self, rule, how=None, axis=0, fill_method=None,
closed='right', label='right', convention=None,
kind=None, loffset=None, limit=None, base=0):
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 7a7fc7159ecb4..b9c2e4c1cbb81 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2921,42 +2921,6 @@ def _repr_footer(self):
namestr = "Name: %s, " % str(self.name) if self.name is not None else ""
return '%s%sLength: %d' % (freqstr, namestr, len(self))
- def at_time(self, time, asof=False):
- """
- Select values at particular time of day (e.g. 9:30AM)
-
- Parameters
- ----------
- time : datetime.time or string
-
- Returns
- -------
- values_at_time : TimeSeries
- """
- indexer = self.index.indexer_at_time(time, asof=asof)
- return self.take(indexer)
-
- def between_time(self, start_time, end_time, include_start=True,
- include_end=True):
- """
- Select values between particular times of the day (e.g., 9:00-9:30 AM)
-
- Parameters
- ----------
- start_time : datetime.time or string
- end_time : datetime.time or string
- include_start : boolean, default True
- include_end : boolean, default True
-
- Returns
- -------
- values_between_time : TimeSeries
- """
- indexer = self.index.indexer_between_time(
- start_time, end_time, include_start=include_start,
- include_end=include_end)
- return self.take(indexer)
-
def to_timestamp(self, freq=None, how='start', copy=True):
"""
Cast to datetimeindex of timestamps, at *beginning* of period
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 7507f3e4d66ee..19aa11baf2422 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -853,6 +853,36 @@ def test_at_time(self):
rs = ts.at_time('16:00')
self.assert_(len(rs) == 0)
+ def test_at_time_frame(self):
+ rng = date_range('1/1/2000', '1/5/2000', freq='5min')
+ ts = DataFrame(np.random.randn(len(rng), 2), index=rng)
+ rs = ts.at_time(rng[1])
+ self.assert_((rs.index.hour == rng[1].hour).all())
+ self.assert_((rs.index.minute == rng[1].minute).all())
+ self.assert_((rs.index.second == rng[1].second).all())
+
+ result = ts.at_time('9:30')
+ expected = ts.at_time(time(9, 30))
+ assert_frame_equal(result, expected)
+
+ result = ts.ix[time(9, 30)]
+ expected = ts.ix[(rng.hour == 9) & (rng.minute == 30)]
+
+ assert_frame_equal(result, expected)
+
+ # midnight, everything
+ rng = date_range('1/1/2000', '1/31/2000')
+ ts = DataFrame(np.random.randn(len(rng), 3), index=rng)
+
+ result = ts.at_time(time(0, 0))
+ assert_frame_equal(result, ts)
+
+ # time doesn't exist
+ rng = date_range('1/1/2012', freq='23Min', periods=384)
+ ts = DataFrame(np.random.randn(len(rng), 2), rng)
+ rs = ts.at_time('16:00')
+ self.assert_(len(rs) == 0)
+
def test_between_time(self):
rng = date_range('1/1/2000', '1/5/2000', freq='5min')
ts = Series(np.random.randn(len(rng)), index=rng)
@@ -913,6 +943,66 @@ def test_between_time(self):
else:
self.assert_((t < etime) or (t >= stime))
+ def test_between_time_frame(self):
+ rng = date_range('1/1/2000', '1/5/2000', freq='5min')
+ ts = DataFrame(np.random.randn(len(rng), 2), index=rng)
+ stime = time(0, 0)
+ etime = time(1, 0)
+
+ close_open = itertools.product([True, False], [True, False])
+ for inc_start, inc_end in close_open:
+ filtered = ts.between_time(stime, etime, inc_start, inc_end)
+ exp_len = 13 * 4 + 1
+ if not inc_start:
+ exp_len -= 5
+ if not inc_end:
+ exp_len -= 4
+
+ self.assert_(len(filtered) == exp_len)
+ for rs in filtered.index:
+ t = rs.time()
+ if inc_start:
+ self.assert_(t >= stime)
+ else:
+ self.assert_(t > stime)
+
+ if inc_end:
+ self.assert_(t <= etime)
+ else:
+ self.assert_(t < etime)
+
+ result = ts.between_time('00:00', '01:00')
+ expected = ts.between_time(stime, etime)
+ assert_frame_equal(result, expected)
+
+ #across midnight
+ rng = date_range('1/1/2000', '1/5/2000', freq='5min')
+ ts = DataFrame(np.random.randn(len(rng), 2), index=rng)
+ stime = time(22, 0)
+ etime = time(9, 0)
+
+ close_open = itertools.product([True, False], [True, False])
+ for inc_start, inc_end in close_open:
+ filtered = ts.between_time(stime, etime, inc_start, inc_end)
+ exp_len = (12 * 11 + 1) * 4 + 1
+ if not inc_start:
+ exp_len -= 4
+ if not inc_end:
+ exp_len -= 4
+
+ self.assert_(len(filtered) == exp_len)
+ for rs in filtered.index:
+ t = rs.time()
+ if inc_start:
+ self.assert_((t >= stime) or (t <= etime))
+ else:
+ self.assert_((t > stime) or (t <= etime))
+
+ if inc_end:
+ self.assert_((t <= etime) or (t >= stime))
+ else:
+ self.assert_((t < etime) or (t >= stime))
+
def test_dti_constructor_preserve_dti_freq(self):
rng = date_range('1/1/2000', '1/2/2000', freq='5min')
| pushed up to generic, raises TypeError if index doesn't have indexer_at_time
| https://api.github.com/repos/pandas-dev/pandas/pulls/2149 | 2012-11-01T15:03:23Z | 2012-11-02T18:17:32Z | null | 2014-06-17T14:25:08Z |
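The tests above exercise both the ordinary window and the across-midnight case of `between_time`. The selection rule can be sketched with the stdlib alone (`between_time` here is an illustrative list-based stand-in, not the index-based pandas implementation):

```python
from datetime import datetime, time, timedelta

def between_time(stamps, start, end, include_start=True, include_end=True):
    # keep timestamps whose time-of-day falls in [start, end];
    # when start > end the window wraps across midnight (OR instead of AND)
    def keep(t):
        lo = (t >= start) if include_start else (t > start)
        hi = (t <= end) if include_end else (t < end)
        return (lo and hi) if start <= end else (lo or hi)
    return [s for s in stamps if keep(s.time())]
```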
BUG: fix output formatting, zero trimming of floats with exp, close #212... | diff --git a/pandas/core/format.py b/pandas/core/format.py
index c38e34141d67f..1f435f2e459e7 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -887,14 +887,14 @@ def just(x):
def _trim_zeros(str_floats, na_rep='NaN'):
"""
- Trims zeros and decimal points
+ Trims zeros and decimal points.
"""
- # TODO: what if exponential?
trimmed = str_floats
def _cond(values):
non_na = [x for x in values if x != na_rep]
- return len(non_na) > 0 and all([x.endswith('0') for x in non_na])
+ return (len(non_na) > 0 and all([x.endswith('0') for x in non_na]) and
+ not(any([('e' in x) or ('E' in x) for x in non_na])))
while _cond(trimmed):
trimmed = [x[:-1] if x != na_rep else x for x in trimmed]
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 5f5dd666e9042..049b7b32fdd00 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -777,6 +777,14 @@ def test_to_html_with_classes(self):
result = df.to_html(classes=["sortable", "draggable"])
self.assertEqual(result, expected)
+ def test_float_trim_zeros(self):
+ vals = [2.08430917305e+10, 3.52205017305e+10, 2.30674817305e+10,
+ 2.03954217305e+10, 5.59897817305e+10]
+ skip = True
+ for line in repr(DataFrame({'A': vals})).split('\n'):
+ self.assert_(('+10' in line) or skip)
+ skip = False
+
class TestSeriesFormatting(unittest.TestCase):
@@ -860,6 +868,13 @@ def test_unicode_name_in_footer(self):
sf=fmt.SeriesFormatter(s,name=u'\u05e2\u05d1\u05e8\u05d9\u05ea')
sf._get_footer() # should not raise exception
+ def test_float_trim_zeros(self):
+ vals = [2.08430917305e+10, 3.52205017305e+10, 2.30674817305e+10,
+ 2.03954217305e+10, 5.59897817305e+10]
+ for line in repr(Series(vals)).split('\n'):
+ self.assert_('+10' in line)
+
+
class TestEngFormatter(unittest.TestCase):
def test_eng_float_formatter(self):
| ...0
implement + test proposed fix for #2120
| https://api.github.com/repos/pandas-dev/pandas/pulls/2135 | 2012-10-26T19:54:30Z | 2012-10-31T20:23:11Z | null | 2012-10-31T20:23:11Z |
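The guard added to `_cond` stops the trimming loop from eating significant zeros in exponential notation: '5.59897817305e+10' ends in '0' but must not be shortened. A standalone sketch of the trimming rule (`trim_zeros` here is illustrative, simplified from the `_trim_zeros` in the diff):

```python
def trim_zeros(strs, na_rep='NaN'):
    # keep stripping one trailing '0' from every entry while all non-NA
    # entries end in '0' and none of them uses exponential notation
    def can_trim(vals):
        non_na = [x for x in vals if x != na_rep]
        return (len(non_na) > 0
                and all(x.endswith('0') for x in non_na)
                and not any(('e' in x) or ('E' in x) for x in non_na))
    while can_trim(strs):
        strs = [x[:-1] if x != na_rep else x for x in strs]
    return strs
```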
BUG: fix unicode handling in plot label, close #2080 | diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index 34754a23ba5b4..caa685340c0e5 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -902,7 +902,7 @@ def _maybe_add_color(kwargs, style, i):
_maybe_add_color(kwds, style, i)
- label = com.pprint_thing(label).encode('utf-8')
+ label = com.pprint_thing(label)
mask = com.isnull(y)
if mask.any():
| #2080
| https://api.github.com/repos/pandas-dev/pandas/pulls/2122 | 2012-10-25T17:32:59Z | 2012-10-31T20:23:52Z | null | 2012-11-02T22:06:26Z |
BUG: setup.py error message lacks cache_dir argument | diff --git a/setup.py b/setup.py
index 4189c3803b241..38dffdffdbdf0 100755
--- a/setup.py
+++ b/setup.py
@@ -341,7 +341,7 @@ def __init__(self,*args,**kwds):
cache_dir=kwds.pop("cache_dir",BUILD_CACHE_DIR)
self.cache_dir=cache_dir
if not os.path.isdir(cache_dir):
- raise Exception("Error: path to Cache directory [%s] is not a dir");
+ raise Exception("Error: path to Cache directory (%s) is not a dir" % cache_dir);
def _copy_from_cache(self,hash,target):
src=os.path.join(self.cache_dir,hash)
| https://api.github.com/repos/pandas-dev/pandas/pulls/2111 | 2012-10-24T14:32:46Z | 2012-10-31T20:24:54Z | null | 2012-10-31T20:24:54Z | |
Unicode III : revenge of the character planes | diff --git a/pandas/core/encoding.py b/pandas/core/encoding.py
new file mode 100644
index 0000000000000..50a732d3d081d
--- /dev/null
+++ b/pandas/core/encoding.py
@@ -0,0 +1,298 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+# Mutable sequences handling? specifically tuples.
+# generators - via wrapper?
+
+import unittest
+
+from pandas.util import py3compat
+from pandas.core.common import PandasError
+import numpy as np
+import sys
+
+
+try:
+ next
+except NameError: # pragma: no cover
+ # Python < 2.6
+ def next(x):
+ return x.next()
+
+# this should live in some package-wide conf object
+input_encoding='utf-8'
+perform_conversion=True
+guess_enc_on_decode_failure=True
+guess_enc_min_char_count=100
+guess_enc_max_iter=5000
+guess_enc_min_confidence=0.8
+
+def set_input_encoding(encoding):
+ global input_encoding
+ input_encoding=encoding
+
+def _should_process(obj):
+ """
+ A predicate function which determines whether obj should
+ be processed for byte-string conversion based on it's type.
+
+ Parameters
+ ----------
+ obj - any object
+
+ Returns
+ -------
+ bool - True if the object should be processed
+ """
+
+ # pd.Index* are isinstance(np.ndarray) but should be excluded
+ # because their constructors call decode directly.
+ #
+ return perform_conversion and \
+ ( isinstance(obj,(list,dict)) or \
+ type(obj) == np.ndarray or \
+ type(obj) == np.void )
+
+def _can_import(name):
+ """
+ Returns True if the named module/package can be imported"
+
+ Parameters
+ ----------
+ `name` - package / module name.
+
+ Returns
+ -------
+ bool - True if `name` can be imported.
+
+ """
+ try:
+ __import__(name)
+ return True
+ except ImportError:
+ return False
+
+def _decode_obj(obj, encoding):
+ """
+    Receives an object, and decodes any non-ascii byte-strings found
+ to unicode using the given encoding.
+
+ You should use `decode_catch_errors` to get friendly error messages
+ when decoding fails.
+
+    This function iterates over `obj`, decoding any byte-string found using
+    the given `encoding`.
+
+    Supports str/unicode and mutable sequences as input; all other objects
+    are returned as-is (including generators, for now).
+
+ Handles arbitrarily nested sequences.
+
+ Parameters
+ ----------
+ `obj` - any object.
+
+ Returns
+ -------
+ result - seq with all non-ascii bytestring decoded into utf-8
+
+ Raises
+ ------
+ UnicodeDecodeError - if decoding with the given encoding fails
+ """
+
+ import types
+ def _dec_str(s,encoding=encoding):
+ try:
+ s.encode('ascii') # if it's ascii leave it alone
+ except UnicodeDecodeError:
+ s = s.decode(encoding) # if not, convert to unicode
+ # might raise another UnicodeDecodeError - handled by the caller
+ return s
+
+ def _dec_seq(seq):
+ if isinstance(seq, dict):
+ for k in seq.keys(): # grab the list of keys before we do mutation
+ v=seq[k]
+ if isinstance(k, str):
+ k = _dec_str(k)
+ elif _should_process(k): # keys are immutable, need this?
+ k = (yield _dec_seq(k))
+
+ if isinstance(v, str):
+ v = _dec_str(v)
+ elif _should_process(v):
+ v = (yield _dec_seq(v))
+
+            seq.pop(k) # drop the old entry; the decoded key/value pair is re-inserted below
+ seq[k] = v
+
+ else:
+ for i,e in enumerate(seq):
+ if isinstance(e, str):
+ seq[i] = _dec_str(e)
+ elif _should_process(e):
+ (yield _dec_seq(e))
+
+ yield seq
+
+ if py3compat.PY3:
+ return obj
+
+ if isinstance(obj, basestring): # strings are simple
+ if isinstance(obj, str):
+ obj=_dec_str(obj)
+ return obj
+
+ if not _should_process(obj): # misc. objects are too
+ return obj
+
+ s = [_dec_seq(obj)]
+ values = []
+ while True: # others - not so much, let's see what we can do.
+ g = s.pop()
+ if values:
+ e = g.send(values.pop())
+ else:
+ e = next(g)
+ if type(e) == types.GeneratorType:
+ s.extend([g, e])
+ else:
+ if s:
+ values.append(e)
+ else:
+ return e
+
+def _extract_txt_from_obj(obj,max_iter=sys.maxint):
+ """
+ a generator which walks `obj`, yielding any byte-string found
+
+ will stop after at most `max_iter` iterations.
+
+ Parameters
+ ----------
+ `obj` - any iterable
+
+ Yields
+ -------
+ byte-strings.
+
+ Raises
+ ------
+ StopIteration - when there are no more byte-strings in the sequence
+
+ """
+
+ if obj is None or isinstance(obj,basestring):
+ if isinstance(obj,unicode):
+ return
+ yield obj
+ return
+
+ s = [iter(obj)]
+ cnt=0
+ while s:
+ g = s.pop()
+ for e in g:
+ cnt+=1
+ if isinstance(e, str):
+ yield e
+ elif isinstance(e, dict):
+ s.extend([g, e.iterkeys(), e.itervalues()])
+ elif _should_process(e):
+                s.extend([g, iter(e)])
+
+ if cnt >= max_iter:
+ return
+
+def _detect_encoding(obj,min_cnt=guess_enc_min_char_count,max_iter=guess_enc_max_iter):
+ """
+ extracts byte-string from obj via `_extract_txt_from_obj` and uses
+ the `chardet` package to detect the encoding used.
+
+ Can handle nested sequences, also looks at dict keys and values.
+
+ Parameters
+ ----------
+ `obj` - input string or sequence
+
+    `min_cnt` - specifies the minimum number of characters which must be fed to
+ the detector before we allow a decision.
+
+ `max_iter` - an upper bound on the number of elements examined in the sequence
+ looking for text.
+ This guards against the corner-case of a huge list with a decoding error only
+    near its end.
+
+ Returns
+ -------
+ `result` - {'encoding': str, 'confidence': float}, or
+ {} if no encoding was found.
+ """
+ if not _can_import("chardet"):
+ return {}
+
+ from chardet.universaldetector import UniversalDetector
+ detector = UniversalDetector()
+ cnt = 0 # keep track of number of characters processed
+ for txt in _extract_txt_from_obj(obj,max_iter):
+ cnt += len(txt)
+ detector.feed(txt)
+ if (cnt > min_cnt and detector.done) :
+ break
+ detector.close()
+ res=detector.result
+ if res and res['confidence'] > guess_enc_min_confidence\
+ and cnt > min_cnt:
+ return detector.result
+ else:
+ return {}
+
+def decode_catch_errors(obj, encoding=None):
+ """
+ Delegates to `_decode_obj` in order to convert byte-string within obj into
+ unicode when necessary. If a decode error occurs, prints a user friendly
+ error message, and if the chardet library is available will try to give
+    the user a good guess about the encoding used by extracting text from `obj`
+
+ Parameters
+ ----------
+ `obj` - anything
+ encoding - an acceptable encoding to be passed to str.decode()
+
+ Returns
+ -------
+ `result` - `obj` with byte-strings decoded into unicode strings
+
+ Raises
+ ------
+ `PandasError(msg)` - with msg being a friendly error message to the user
+ """
+
+ try:
+ encoding = encoding or input_encoding
+ return _decode_obj(obj, encoding)
+ except UnicodeDecodeError:
+ from textwrap import dedent
+ msg = \
+ """
+        The input data contains strings that cannot be decoded with `%s`.
+ You should specify a correct encoding to the object constructor,
+ or set the value of the default input encoding in XXX.
+ """
+
+ s = dedent(msg) % encoding
+ if guess_enc_on_decode_failure:
+ if not _can_import("chardet"):
+ s += 'The "chardet" package is not installed - ' +\
+ "can't suggest an encoding."
+ else:
+ det_enc=_detect_encoding(obj)
+ if det_enc:
+ conf = det_enc['confidence']
+ enc = det_enc['encoding']
+ s += 'You might try "%s" as the encoding (Confidence: %2.1f)'\
+ % (enc, conf)
+
+ raise PandasError(s)
diff --git a/pandas/core/format.py b/pandas/core/format.py
index 7125feeeb3b1c..4f6f3a16c8441 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -237,23 +237,12 @@ def _to_str_columns(self, force_unicode=False):
if not py3compat.PY3:
if force_unicode:
- def make_unicode(x):
- if isinstance(x, unicode):
- return x
- return x.decode('utf-8')
- strcols = map(lambda col: map(make_unicode, col), strcols)
+ strcols = map(lambda col: map(unicode, col), strcols)
else:
- # generally everything is plain strings, which has ascii
- # encoding. problem is when there is a char with value over 127
- # - everything then gets converted to unicode.
try:
map(lambda col: map(str, col), strcols)
except UnicodeError:
- def make_unicode(x):
- if isinstance(x, unicode):
- return x
- return x.decode('utf-8')
- strcols = map(lambda col: map(make_unicode, col), strcols)
+ strcols = map(lambda col: map(unicode, col), strcols)
return strcols
@@ -1121,6 +1110,8 @@ def reset(self):
def _put_lines(buf, lines):
+ # handles #891 where ascii and unicode fields are mixed
+    # but will fail if encoded byte-string + unicode fields are mixed
if any(isinstance(x, unicode) for x in lines):
lines = [unicode(x) for x in lines]
buf.write('\n'.join(lines))
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index e20aba116ef04..6d4f0ca73be11 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -37,6 +37,7 @@
from pandas.util.decorators import deprecate, Appender, Substitution
from pandas.tseries.period import PeriodIndex
+import pandas.core.encoding as en
import pandas.core.algorithms as algos
import pandas.core.datetools as datetools
@@ -366,6 +367,10 @@ def __init__(self, data=None, index=None, columns=None, dtype=None,
if data is None:
data = {}
+ columns= en.decode_catch_errors(columns)
+ index= en.decode_catch_errors(index)
+ data= en.decode_catch_errors(data)
+
if isinstance(data, DataFrame):
data = data._data
@@ -950,7 +955,7 @@ def from_items(cls, items, columns=None, orient='columns'):
@classmethod
def from_csv(cls, path, header=0, sep=',', index_col=0,
- parse_dates=True, encoding=None):
+ parse_dates=True, encoding='utf-8'):
"""
Read delimited file into DataFrame
@@ -1675,6 +1680,8 @@ def iget_value(self, i, j):
def __getitem__(self, key):
# slice rows
+ key=en.decode_catch_errors(key)
+
if isinstance(key, slice):
from pandas.core.indexing import _is_index_slice
idx_type = self.index.inferred_type
@@ -1793,6 +1800,9 @@ def __setattr__(self, name, value):
def __setitem__(self, key, value):
# support boolean setting with DataFrame input, e.g.
# df[df > df2] = 0
+ value=en.decode_catch_errors(value)
+ key=en.decode_catch_errors(key)
+
if isinstance(key, DataFrame):
if not (key.index.equals(self.index) and
key.columns.equals(self.columns)):
@@ -1972,6 +1982,9 @@ def xs(self, key, axis=0, level=None, copy=True):
-------
xs : Series or DataFrame
"""
+
+ key=en.decode_catch_errors(key)
+
labels = self._get_axis(axis)
if level is not None:
loc, new_ax = labels.get_loc_level(key, level=level)
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 08d1c593d42ca..01854f5475183 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -5,8 +5,8 @@
from itertools import izip
import numpy as np
-
-from pandas.core.common import ndtake
+import pandas.core.encoding as en
+from pandas.core.common import ndtake,_is_sequence
from pandas.util.decorators import cache_readonly
import pandas.core.common as com
import pandas.lib as lib
@@ -80,6 +80,8 @@ class Index(np.ndarray):
_engine_type = lib.ObjectEngine
def __new__(cls, data, dtype=None, copy=False, name=None):
+ data= en.decode_catch_errors(data)
+
if isinstance(data, np.ndarray):
if issubclass(data.dtype.type, np.datetime64):
from pandas.tseries.index import DatetimeIndex
@@ -305,12 +307,15 @@ def __contains__(self, key):
def __hash__(self):
return hash(self.view(np.ndarray))
- def __setitem__(self, key, value):
- """Disable the setting of values."""
- raise Exception(str(self.__class__) + ' object is immutable')
+ def __getattribute__(self,name):
+ if name=="__setitem__": # emulate an Immutable ndarray
+ raise AttributeError(str(self.__class__) + ' object is immutable')
+ else:
+ return object.__getattribute__(self,name)
def __getitem__(self, key):
"""Override numpy.ndarray's __getitem__ method to work as desired"""
+ key=en.decode_catch_errors(key)
arr_idx = self.view(np.ndarray)
if np.isscalar(key):
return arr_idx[key]
@@ -1163,6 +1168,8 @@ class Int64Index(Index):
_engine_type = lib.Int64Engine
def __new__(cls, data, dtype=None, copy=False, name=None):
+ data= en.decode_catch_errors(data)
+
if not isinstance(data, np.ndarray):
if np.isscalar(data):
raise ValueError('Index(...) must be called with a collection '
@@ -1252,6 +1259,10 @@ class MultiIndex(Index):
names = None
def __new__(cls, levels=None, labels=None, sortorder=None, names=None):
+ levels= en.decode_catch_errors(levels)
+ labels= en.decode_catch_errors(labels)
+ names= en.decode_catch_errors(names)
+
assert(len(levels) == len(labels))
if len(levels) == 0:
raise Exception('Must pass non-zero number of levels/labels')
@@ -1634,6 +1645,7 @@ def __setstate__(self, state):
self.sortorder = sortorder
def __getitem__(self, key):
+ key=en.decode_catch_errors(key)
if np.isscalar(key):
return tuple(lev[lab[key]]
for lev, lab in zip(self.levels, self.labels))
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 211434ab07154..dd6d5351fff38 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -21,7 +21,7 @@
import pandas.core.common as com
import pandas.core.nanops as nanops
import pandas.lib as lib
-
+import pandas.core.encoding as en
def _ensure_like_indices(time, panels):
"""
@@ -76,6 +76,10 @@ def panel_index(time, panels, names=['time', 'panel']):
(1961, 'B'), (1961, 'C'), (1962, 'A'), (1962, 'B'),
(1962, 'C')], dtype=object)
"""
+
+ panels= en.decode_catch_errors(panels)
+ names= en.decode_catch_errors(names)
+
time, panels = _ensure_like_indices(time, panels)
time_factor = Factor.from_array(time)
panel_factor = Factor.from_array(panels)
@@ -208,6 +212,9 @@ def __init__(self, data=None, items=None, major_axis=None, minor_axis=None,
if data is None:
data = {}
+ data = en.decode_catch_errors(data)
+ items = en.decode_catch_errors(items)
+
passed_axes = [items, major_axis, minor_axis]
axes = None
if isinstance(data, BlockManager):
@@ -566,6 +573,9 @@ def _box_item_values(self, key, values):
def __getattr__(self, name):
"""After regular attribute access, try looking up the name of an item.
This allows simpler access to items for interactive use."""
+
+ name = en.decode_catch_errors(name)
+
if name in self.items:
return self[name]
raise AttributeError("'%s' object has no attribute '%s'" %
@@ -577,6 +587,11 @@ def _slice(self, slobj, axis=0):
def __setitem__(self, key, value):
_, N, K = self.shape
+
+
+ key = en.decode_catch_errors(key)
+ value = en.decode_catch_errors(value)
+
if isinstance(value, DataFrame):
value = value.reindex(index=self.major_axis,
columns=self.minor_axis)
@@ -1463,4 +1478,3 @@ def complete_dataframe(obj, prev_completions):
install_ipython_completers()
except Exception:
pass
-
diff --git a/pandas/core/series.py b/pandas/core/series.py
index eca177c4c543b..c16ebe7e74d2e 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -34,6 +34,8 @@
from pandas.compat.scipy import scoreatpercentile as _quantile
+import pandas.core.encoding as en
+
__all__ = ['Series', 'TimeSeries']
_np_version = np.version.short_version
@@ -300,6 +302,10 @@ def __new__(cls, data=None, index=None, dtype=None, name=None,
if data is None:
data = {}
+ index= en.decode_catch_errors(index)
+ data= en.decode_catch_errors(data)
+ name= en.decode_catch_errors(name)
+
if index is not None:
index = _ensure_index(index)
@@ -460,6 +466,7 @@ def ix(self):
return self._ix
def __getitem__(self, key):
+ key=en.decode_catch_errors(key)
try:
return self.index.get_value(self, key)
except InvalidIndexError:
@@ -557,6 +564,9 @@ def _get_values(self, indexer):
return self.values[indexer]
def __setitem__(self, key, value):
+ value=en.decode_catch_errors(value)
+ key=en.decode_catch_errors(key)
+
try:
try:
self.index._engine.set_value(self, key, value)
@@ -1153,7 +1163,7 @@ def max(self, axis=None, out=None, skipna=True, level=None):
@Substitution(name='standard deviation', shortname='stdev',
na_action=_doc_exclude_na, extras='')
- @Appender(_stat_doc +
+ @Appender(_stat_doc +
"""
Normalized by N-1 (unbiased estimator).
""")
@@ -1166,7 +1176,7 @@ def std(self, axis=None, dtype=None, out=None, ddof=1, skipna=True,
@Substitution(name='variance', shortname='var',
na_action=_doc_exclude_na, extras='')
- @Appender(_stat_doc +
+ @Appender(_stat_doc +
"""
Normalized by N-1 (unbiased estimator).
""")
diff --git a/pandas/tests/test_encoding.py b/pandas/tests/test_encoding.py
new file mode 100644
index 0000000000000..89e4612f59c27
--- /dev/null
+++ b/pandas/tests/test_encoding.py
@@ -0,0 +1,148 @@
+import unittest
+import nose
+import pandas.core.encoding as en
+from pandas.util import py3compat
+
+try:
+ next
+except NameError: # pragma: no cover
+ # Python < 2.6
+ def next(x):
+ return x.next()
+
+class TestEncoding(unittest.TestCase):
+ def setUp(self):
+ self.u = u"\u03c3"
+ self.u_e = self.u.encode('utf-8')
+ self.seq =self.assertEqual
+
+ def tearDown(self):
+ pass
+
+ def test_decode(self):
+ if py3compat.PY3:
+ raise nose.SkipTest()
+
+ self.seq(en.decode_catch_errors([]),[])
+ self.seq(en.decode_catch_errors([1,2.5,True]),[1,2.5,True])
+ self.seq(en.decode_catch_errors([u"1","2",3]), [u'1','2', 3])
+ self.seq(en.decode_catch_errors([self.u,2,3]),[self.u, 2, 3])
+ self.seq(en.decode_catch_errors([self.u_e,"a"]),[self.u, 'a'])
+
+ def test_decode_with_nested(self):
+ if py3compat.PY3:
+ raise nose.SkipTest()
+
+ self.seq(en.decode_catch_errors([self.u_e,["a"],u"a"]),[self.u, ['a'], u'a']) # ascii left alone
+
+
+ def test_decode_with_immutable_seq(self):
+ if py3compat.PY3:
+ raise nose.SkipTest()
+
+ # immutables are not altered
+ self.assertTrue(en.decode_catch_errors((self.u_e,))==(self.u_e,))
+ self.assertTrue(en.decode_catch_errors(["abc",(u"abc",)])==["abc", (u"abc",)]) # immutables not converted
+
+ def test_decode_with_nested_and_dicts(self):
+ if py3compat.PY3:
+ raise nose.SkipTest()
+
+ self.seq(en.decode_catch_errors({"a":"b"}), {u'a': u'b'})
+
+ r=[u'a',u'b', 1, 2.5, True, {u'a': u'b'},
+ [u'a', u'b', 1, 2.5, True, {u'a': u'b'},
+ [u'a', u'b', 1, 2.5, True, {u'a': u'b'}],
+ [u'a', u'b', 1, 2.5, True, {u'a': u'b'}]]]
+
+ self.seq(en.decode_catch_errors(["a",u"b",1,2.5,True,{"a":"b"},
+ ["a",u"b",1,2.5,True,{"a":"b"},
+ ["a",u"b",1,2.5,True,{"a":"b"}],
+ ["a",u"b",1,2.5,True,{"a":"b"}]]]),r)
+
+ r= [{"k": [self.u, [self.u,1], u'b']}]
+ self.seq(en.decode_catch_errors([{"k":[self.u_e,[self.u_e,1],u"b"]}]),r)
+
+ def test_decode_non_seq(self):
+ self.seq(en.decode_catch_errors("abcd"),"abcd")
+ self.seq(en.decode_catch_errors(u"abcd"),u"abcd")
+
+ def test_extract_text(self):
+ if py3compat.PY3:
+ raise nose.SkipTest()
+
+ # test with self.seq, pure str, pure unicode
+ g=en._extract_txt_from_obj(u"abcd")
+
+ try:
+ next(g)
+ except StopIteration:
+ pass
+ else:
+ self.fail("erroneous yield")
+ # self.assertRaises(StopIteration,next(g))
+
+ g=en._extract_txt_from_obj("abcd")
+ self.seq(next(g),"abcd")
+
+ g=en._extract_txt_from_obj("\xcc")
+ self.seq(next(g),"\xcc")
+
+ g=en._extract_txt_from_obj(["abcd","\xcc"])
+ self.seq(next(g),"abcd")
+ self.seq(next(g),"\xcc")
+
+ def test_recursion_limit_safe(self):
+ "Test against recursion limit"
+ import sys
+
+ a=["a"]
+ for i in range(sys.getrecursionlimit()+1):
+ a=["a",a]
+
+ try:
+ en.decode_catch_errors(a)
+ except RuntimeError:
+ self.fail("en.decode_catch_errors() implementation cannot handle deeply-nested sequences")
+
+ def test_ordered_dict_key_ordering(self):
+ "Test That OrderedDicts keep their key ordering"
+ import string,random,sys
+
+ if sys.version_info[:2]<(2,7):
+ raise nose.SkipTest
+
+ from collections import OrderedDict
+ self.seq=self.assertEqual
+
+ for i in range(100):
+ keys=[string.ascii_letters[random.randint(1,20)] for x in range(20)]
+ d=OrderedDict.fromkeys(keys)
+ # after decoding, is the order of keys maintained?
+ self.seq( en.decode_catch_errors([d])[0].keys(),map(unicode,d.keys()))
+
+ def test_detect_text_enc(self):
+ import string
+ if en._can_import("chardet"):
+ res=en._detect_encoding(string.ascii_letters,min_cnt=10)
+ self.assertTrue(isinstance(res,dict))
+ self.assertTrue('confidence' in res and 'encoding' in res) # keys in result dict
+ res=en._detect_encoding("a") # not enough confidence, return empty
+ self.assertTrue(res=={})
+
+ def test_detector_detects_enc(self):
+ s='\xf9\xec\xe5\xed \xf8\xe1 \xf9\xe5\xe1\xea'+\
+ '\xf6\xe9\xf4\xe5\xf8\xe4 \xf0\xe7\xee\xe3\xfa'
+
+ if en._can_import("chardet"):
+ res=en._detect_encoding(s,min_cnt=0)
+ self.assertTrue(isinstance(res,dict))
+ self.assertTrue('confidence' in res and 'encoding' in res) # keys in result dict
+ self.assertEqual(res['encoding'],"windows-1255") # keys in result dict
+
+
+ def test_text_extract_limit_iter(self):
+ if en._can_import("chardet"):
+ seq=["a","a","b"]
+ for x in en._extract_txt_from_obj(seq,2):
+ self.assertNotEqual(x,"b")
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 4ea62d695042a..c8182feb436f6 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -143,11 +143,6 @@ def test_to_string_unicode_two(self):
buf = StringIO()
dm.to_string(buf)
- def test_to_string_unicode_three(self):
- dm = DataFrame(['\xc2'])
- buf = StringIO()
- dm.to_string(buf)
-
def test_to_string_with_formatters(self):
df = DataFrame({'int': [1, 2, 3],
'float': [1.0, 2.0, 3.0],
@@ -709,7 +704,7 @@ def test_to_html_with_classes(self):
<table border="1" class="dataframe sortable draggable">
<tbody>
<tr>
- <td>Index([], dtype=object)</td>
+ <td>Index((), dtype=object)</td>
<td>Empty DataFrame</td>
</tr>
</tbody>
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index 3a2005fff0ba5..56cd507057cea 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -57,8 +57,7 @@ def test_sort(self):
self.assertRaises(Exception, self.strIndex.sort)
def test_mutability(self):
- self.assertRaises(Exception, self.strIndex.__setitem__, 5, 0)
- self.assertRaises(Exception, self.strIndex.__setitem__, slice(1,5), 0)
+ self.assertFalse(hasattr(self.strIndex,"__setitem__"))
def test_constructor(self):
# regular instance creation
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index efaef5f37a60d..a0d813e245ce9 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -676,7 +676,7 @@ def test_match_findall_flags(self):
self.assertEquals(result[0], True)
def test_encode_decode(self):
- base = Series([u'a', u'b', u'\xe4'])
+ base = Series([u'a', u'b'])
series = base.str.encode('utf-8')
f = lambda x: x.decode('utf-8')
| This is the 3rd installment in the unicode saga, and once again,
there are a lot of issues to point out. Fair warning given.
#1994 cleaned up things to use basestring and unicode() over str() where needed
#2005 consolidated and implemented unicode-friendly pretty-printing
The attempt here is to ensure that the internal representation of strings is
always a unicode object, as is the case by default in python3.
**It's not finished**, but I'd like to get feedback.
## A.Implementation
1. core.encoding implements the functionality for traversing sequences and decoding
strings into unicode objects in place. This includes the keys/values of dicts (one
of the many supported input structures for constructing pandas data objects).
2. pure ascii strings are left untouched.
3. For strings containing unicode characters, we do actually change the input,
but since getters are symmetrically handled, the result should be seamless (if you
set a key using an encoded bytestring, invoking a getter with the same key should
get you back the data, even though internally, the data is stored under the
equivalent unicode key).
4. this decoding step is placed in "choke-points" such as constructors, getters and
setters. I've probably missed a few at this point.
5. the encoding to be used for decoding is specified in a configurable (which will
be settable by the user, possibly via the mechanism in #2097).
6. if decoding fails, the user gets a message explaining what happened. if the `chardet`
package is available, the code tries to suggest an encoding to the user based
on the data. `chardet` has enough false-positives to make automatic-detection
a bad idea.
7. as part of the unicode-or-bust theme, the csv reader now uses utf-8 by default,
which forces the use of UnicodeReader, so you either get unicode or a decode error,
forcing the user to specify an encoding.
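The traversal in (A.1)-(A.3) can be sketched roughly like this. This is illustrative only: the actual core.encoding works in place and iteratively (to stay under the recursion limit), and the function name here is made up for the example.

```python
# Illustrative sketch of decoding encoded bytestrings to unicode while
# leaving everything else (including immutable sequences) untouched.
def decode_strings(obj, encoding='utf-8'):
    """Recursively decode encoded bytestrings to unicode; leave the rest alone."""
    if isinstance(obj, bytes):
        return obj.decode(encoding)
    if isinstance(obj, dict):
        # keys *and* values of dicts are decoded
        return dict((decode_strings(k, encoding), decode_strings(v, encoding))
                    for k, v in obj.items())
    if isinstance(obj, list):
        return [decode_strings(x, encoding) for x in obj]
    # tuples and other immutable sequences are deliberately left untouched
    return obj
```

Already-decoded (pure unicode) input passes through unchanged, which is what makes the getter/setter symmetry in (A.3) work.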
## B. Benefits
1. Making internal representations unicode should help reduce the whack-a-mole
which has been going on.
2. Once things are guaranteed unicode, lib code can stop checking for and handling
corner-cases.
3. Mixing unicode and encoded byte-strings, or bytestrings encoded with
different encodings, can create insoluble situations.
A related example is `fmt._put_lines()`, which was altered to fix #891 so
it would handle the case of mixing pure ascii with unicode. However, it now fails
when unicode is mixed with encoded byte-strings (which makes `repr_html()` fail,
yielding sometimes-html-sometimes-text repr confusion). This PR fixes that problem
amongst other things.
4. since python encodes input from the console using the console encoding, we can
convert things to unicode transparently for the user, so the behaviour becomes
closer to that of python3 - enter a string at the console, and you get unicode by
default. (no need for unicode literal).
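The failure mode in (B.3) is easy to reproduce (shown here with Python 3 semantics, where the unicode/bytes split is explicit):

```python
# Joining lines works only when they are all of one string type; a single
# encoded bytestring mixed into unicode lines has no consistent resolution.
lines = [u'col_a', u'\u03c3']
joined = u'\n'.join(lines)             # all-unicode: fine

mixed = [u'col_a', u'\u03c3'.encode('utf-8')]
try:
    u'\n'.join(mixed)                  # unicode mixed with encoded bytes
    mixing_failed = False
except TypeError:
    mixing_failed = True               # blows up, as in _put_lines()
```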
## C. Disadvantages
1. It's hackish.
2. There's a performance hit (@wesm, what benchmarks would you like to see?)
3. Immutable sequences are currently not handled. not just because of copying, but
also because an experiment converting tuples raised all sorts of mysterious
problems, in cython code amongst other things.
4. Probably difficult to cover every possible entry-point for bytes-strings.
5. Not sure about c-parser compat. issues.
## D. Points to consider (feedback welcome)
1. there will be a configurable to turn all of this on or off. Not sure what the
default should be though.
2. Should the encoding used for decoding be specifiable per-object? or just
a global configurable available to the user.
3. what's the best way to deal with Immutable sequences?
The configurables will be sorted out later (Pending #2097?).
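As a footnote to question (3) above, the core of the immutable-sequence problem in miniature: an in-place decode is simply impossible for tuples, so any conversion has to rebuild the container, producing a different object (this is separate from the cython-level issues mentioned in C.3).

```python
# Tuples reject item assignment, so "decode in place" cannot apply to them.
t = (b'abc', 1)
try:
    t[0] = u'abc'
    in_place_ok = True
except TypeError:
    in_place_ok = False

# The only option is rebuilding, which loses object identity (t2 is not t).
t2 = tuple(x.decode('utf-8') if isinstance(x, bytes) else x for x in t)
```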
| https://api.github.com/repos/pandas-dev/pandas/pulls/2104 | 2012-10-22T18:27:53Z | 2012-10-28T10:42:42Z | null | 2014-06-23T04:22:12Z |
PR: adding a core.config module to hold package-wide configurables | diff --git a/pandas/__init__.py b/pandas/__init__.py
index 3760e3fbc434b..df37b44cc6a7d 100644
--- a/pandas/__init__.py
+++ b/pandas/__init__.py
@@ -22,6 +22,9 @@
from pandas.version import version as __version__
from pandas.info import __doc__
+# let init-time option registration happen
+import pandas.core.config_init
+
from pandas.core.api import *
from pandas.sparse.api import *
from pandas.stats.api import *
diff --git a/pandas/core/api.py b/pandas/core/api.py
index 8cf3b7f4cbda4..469f3683113ec 100644
--- a/pandas/core/api.py
+++ b/pandas/core/api.py
@@ -29,3 +29,6 @@
# legacy
from pandas.core.daterange import DateRange # deprecated
import pandas.core.datetools as datetools
+
+from pandas.core.config import get_option,set_option,reset_option,\
+ reset_options,describe_options
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 02223b05fc2f9..c86ee34f26d47 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -19,6 +19,8 @@
from pandas.util.py3compat import StringIO, BytesIO
+from pandas.core.config import get_option
+
# XXX: HACK for NumPy 1.5.1 to suppress warnings
try:
np.seterr(all='ignore')
@@ -1113,7 +1115,7 @@ def in_interactive_session():
# 2) If you need to send something to the console, use console_encode().
#
# console_encode() should (hopefully) choose the right encoding for you
-# based on the encoding set in fmt.print_config.encoding.
+# based on the encoding set in option "print_config.encoding"
#
# 3) if you need to write something out to file, use
# pprint_thing_encoded(encoding).
@@ -1165,16 +1167,17 @@ def pprint_thing(thing, _nest_lvl=0):
result - unicode object on py2, str on py3. Always Unicode.
"""
- from pandas.core.format import print_config
+
if thing is None:
result = ''
elif (py3compat.PY3 and hasattr(thing,'__next__')) or \
hasattr(thing,'next'):
return unicode(thing)
elif (isinstance(thing, dict) and
- _nest_lvl < print_config.pprint_nest_depth):
+ _nest_lvl < get_option("print_config.pprint_nest_depth")):
result = _pprint_dict(thing, _nest_lvl)
- elif _is_sequence(thing) and _nest_lvl < print_config.pprint_nest_depth:
+ elif _is_sequence(thing) and _nest_lvl < \
+ get_option("print_config.pprint_nest_depth"):
result = _pprint_seq(thing, _nest_lvl)
else:
# when used internally in the package, everything
@@ -1202,7 +1205,6 @@ def pprint_thing_encoded(object, encoding='utf-8', errors='replace'):
def console_encode(object):
- from pandas.core.format import print_config
"""
this is the sanctioned way to prepare something for
sending *to the console*, it delegates to pprint_thing() to get
@@ -1210,4 +1212,5 @@ def console_encode(object):
set in print_config.encoding. Use this everywhere
where you output to the console.
"""
- return pprint_thing_encoded(object, print_config.encoding)
+ return pprint_thing_encoded(object,
+ get_option("print_config.encoding"))
diff --git a/pandas/core/config.py b/pandas/core/config.py
new file mode 100644
index 0000000000000..09c1a5f37383d
--- /dev/null
+++ b/pandas/core/config.py
@@ -0,0 +1,497 @@
+"""
+The config module holds package-wide configurables and provides
+a uniform API for working with them.
+"""
+
+"""
+Overview
+========
+
+This module supports the following requirements:
+- options are referenced using keys in dot.notation, e.g. "x.y.option-z".
+- options can be registered by modules at import time.
+- options can be registered at init-time (via core.config_init)
+- options have a default value, and (optionally) a description and
+ validation function associated with them.
+- options can be deprecated, in which case referencing them
+ should produce a warning.
+- deprecated options can optionally be rerouted to a replacement
+ so that accessing a deprecated option reroutes to a differently
+ named option.
+- options can be reset to their default value.
+- all options can be reset to their default value at once.
+- all options in a certain sub-namespace can be reset at once.
+- the user can set / get / reset or ask for the description of an option.
+- a developer can register and mark an option as deprecated.
+
+Implementation
+==============
+
+- Data is stored using nested dictionaries, and should be accessed
+ through the provided API.
+
+- "Registered options" and "Deprecated options" have metadata associated
+ with them, which are stored in auxiliary dictionaries keyed on the
+ fully-qualified key, e.g. "x.y.z.option".
+
+- the config_init module is imported by the package's __init__.py file.
+ placing any register_option() calls there will ensure those options
+ are available as soon as pandas is loaded. If you use register_option
+ in a module, it will only be available after that module is imported,
+ which you should be aware of.
+
+- `config_prefix` is a context_manager (for use with the `with` keyword)
+ which can save developers some typing, see the docstring.
+
+"""
+
+import re
+
+from collections import namedtuple
+import warnings
+
+DeprecatedOption = namedtuple("DeprecatedOption", "key msg rkey removal_ver")
+RegisteredOption = namedtuple("RegisteredOption", "key defval doc validator")
+
+__deprecated_options = {} # holds deprecated option metadata
+__registered_options = {} # holds registered option metadata
+__global_config = {} # holds the current values for registered options
+
+##########################################
+# User API
+
+
+def get_option(key):
+ """Retrieves the value of the specified option
+
+ Parameters
+ ----------
+ key - str, a fully-qualified option name, e.g. "x.y.z.option"
+
+ Returns
+ -------
+ result - the value of the option
+
+ Raises
+ ------
+ KeyError if no such option exists
+ """
+
+ _warn_if_deprecated(key)
+ key = _translate_key(key)
+
+ # walk the nested dict
+ root, k = _get_root(key)
+
+ return root[k]
+
+
+def set_option(key, value):
+ """Sets the value of the specified option
+
+ Parameters
+ ----------
+ key - str, a fully-qualified option name, e.g. "x.y.z.option"
+
+ Returns
+ -------
+ None
+
+ Raises
+ ------
+ KeyError if no such option exists
+ """
+ _warn_if_deprecated(key)
+ key = _translate_key(key)
+
+ o = _get_registered_option(key)
+ if o and o.validator:
+ o.validator(value)
+
+ # walk the nested dict
+ root, k = _get_root(key)
+
+ root[k] = value
+
+
+def _get_option_desription(key):
+ """Prints the description associated with the specified option
+
+ Parameters
+ ----------
+ key - str, a fully-qualified option name, e.g. "x.y.z.option"
+
+ Returns
+ -------
+ None
+
+ Raises
+ ------
+ KeyError if no such option exists
+ """
+ _warn_if_deprecated(key)
+ key = _translate_key(key)
+
+def describe_options(pat="",_print_desc=True):
+ """ Prints the description for one or more registered options
+
+ Parameters
+ ----------
+ pat - str, a regexp pattern. All matching keys will have their
+ description displayed.
+
+ _print_desc - if True (default) the description(s) will be printed
+ to stdout otherwise, the description(s) will be returned
+ as a unicode string (for testing).
+
+ Returns
+ -------
+ None by default, the description(s) as a unicode string if _print_desc
+ is False
+
+ """
+ s=u""
+ if pat in __registered_options.keys(): # exact key name?
+ s = _build_option_description(pat)
+ else:
+ for k in sorted(__registered_options.keys()): # filter by pat
+ if re.search(pat,k):
+ s += _build_option_description(k)
+
+ if s == u"":
+ raise KeyError("No such key(s)")
+
+ if _print_desc:
+ print(s)
+ else:
+ return(s)
+
+def reset_option(key):
+ """ Reset a single option to its default value """
+ set_option(key, __registered_options[key].defval)
+
+
+def reset_options(prefix=""):
+ """ Resets all registered options to their default value
+
+ Parameters
+ ----------
+ prefix - str, if specified only options matching `prefix`* will be reset
+
+ Returns
+ -------
+ None
+
+ """
+
+ for k in __registered_options.keys():
+ if k[:len(prefix)] == prefix:
+ reset_option(k)
+
+
+######################################################
+# Functions for use by pandas developers, in addition to User - api
+
+
+def register_option(key, defval, doc="", validator=None):
+ """Register an option in the package-wide pandas config object
+
+ Parameters
+ ----------
+ key - a fully-qualified key, e.g. "x.y.option-z".
+ defval - the default value of the option
+ doc - a string description of the option
+ validator - a function of a single argument, should raise `ValueError` if
+ called with a value which is not a legal value for the option.
+
+ Returns
+ -------
+ Nothing.
+
+ Raises
+ ------
+ ValueError if `validator` is specified and `defval` is not a valid value.
+
+ """
+
+
+ if key in __registered_options:
+ raise KeyError("Option '%s' has already been registered" % key)
+
+ # the default value should be legal
+ if validator:
+ validator(defval)
+
+ # walk the nested dict, creating dicts as needed along the path
+ path = key.split(".")
+ cursor = __global_config
+ for i,p in enumerate(path[:-1]):
+ if not isinstance(cursor,dict):
+ raise KeyError("Path prefix to option '%s' is already an option" %\
+ ".".join(path[:i]))
+ if not cursor.has_key(p):
+ cursor[p] = {}
+ cursor = cursor[p]
+
+ if not isinstance(cursor,dict):
+ raise KeyError("Path prefix to option '%s' is already an option" %\
+ ".".join(path[:-1]))
+
+ cursor[path[-1]] = defval # initialize
+
+ # save the option metadata
+ __registered_options[key] = RegisteredOption(key=key, defval=defval,
+ doc=doc, validator=validator)
+
+
+def deprecate_option(key, msg=None, rkey=None, removal_ver=None):
+ """
+ Mark option `key` as deprecated, if code attempts to access this option,
+ a warning will be produced, using `msg` if given, or a default message
+ if not.
+ if `rkey` is given, any access to the key will be re-routed to `rkey`.
+
+ Neither the existence of `key` nor that of `rkey` is checked. If they
+ do not exist, any subsequent access will fail as usual, after the
+ deprecation warning is given.
+
+ Parameters
+ ----------
+ key - the name of the option to be deprecated. must be a fully-qualified
+ option name (e.g "x.y.z.rkey").
+
+ msg - (Optional) a warning message to output when the key is referenced.
+ if no message is given a default message will be emitted.
+
+ rkey - (Optional) the name of an option to reroute access to.
+ If specified, any referenced `key` will be re-routed to `rkey`
+ including set/get/reset.
+ rkey must be a fully-qualified option name (e.g "x.y.z.rkey").
+ used by the default message if no `msg` is specified.
+
+ removal_ver - (Optional) specifies the version in which this option will
+ be removed. used by the default message if no `msg`
+ is specified.
+
+ Returns
+ -------
+ Nothing
+
+ Raises
+ ------
+ KeyError - if key has already been deprecated.
+
+ """
+ if key in __deprecated_options:
+ raise KeyError("Option '%s' has already been defined as deprecated." % key)
+
+ __deprecated_options[key] = DeprecatedOption(key, msg, rkey,removal_ver)
+
+################################
+# functions internal to the module
+
+
+def _get_root(key):
+ path = key.split(".")
+ cursor = __global_config
+ for p in path[:-1]:
+ cursor = cursor[p]
+ return cursor, path[-1]
+
+
+def _is_deprecated(key):
+ """ Returns True if the given option has been deprecated """
+ return __deprecated_options.has_key(key)
+
+
+def _get_deprecated_option(key):
+ """
+ Retrieves the metadata for a deprecated option, if `key` is deprecated.
+
+ Returns
+ -------
+ DeprecatedOption (namedtuple) if key is deprecated, None otherwise
+ """
+ try:
+ d = __deprecated_options[key]
+ except KeyError:
+ return None
+ else:
+ return d
+
+
+def _get_registered_option(key):
+ """
+ Retrieves the option metadata if `key` is a registered option.
+
+ Returns
+ -------
+ RegisteredOption (namedtuple) if key is deprecated, None otherwise
+ """
+ try:
+ d = __registered_options[key]
+ except KeyError:
+ return None
+ else:
+ return d
+
+
+def _translate_key(key):
+ """
+ if `key` is deprecated and a replacement key is defined, returns the
+ replacement key; otherwise returns `key` as-is
+ """
+ d = _get_deprecated_option(key)
+ if d:
+ return d.rkey or key
+ else:
+ return key
+
+
+def _warn_if_deprecated(key):
+ """
+ Checks if `key` is a deprecated option and if so, prints a warning.
+
+ Returns
+ -------
+ bool - True if `key` is deprecated, False otherwise.
+ """
+ d = _get_deprecated_option(key)
+ if d:
+ if d.msg:
+ warnings.warn(d.msg, DeprecationWarning)
+ else:
+ msg = "'%s' is deprecated" % key
+ if d.removal_ver:
+ msg += " and will be removed in %s" % d.removal_ver
+ if d.rkey:
+ msg += (", please use '%s' instead." % (d.rkey))
+ else:
+ msg += (", please refrain from using it.")
+
+ warnings.warn(msg, DeprecationWarning)
+ return True
+ return False
+
+def _build_option_description(k):
+ """ Builds a formatted description of a registered option and returns it """
+
+ o = _get_registered_option(k)
+ d = _get_deprecated_option(k)
+ s = u'%s: ' %k
+ if o.doc:
+ s += "\n" +"\n ".join(o.doc.split("\n"))
+ else:
+ s += "No description available.\n"
+
+ if d:
+ s += u"\n\t(Deprecated"
+ s += u", use `%s` instead." % d.rkey if d.rkey else ""
+ s += u")\n"
+
+ s += "\n"
+ return(s)
+
+
+##############
+# helpers
+
+from contextlib import contextmanager
+
+
+@contextmanager
+def config_prefix(prefix):
+ """contextmanager for multiple invocations of API with a common prefix
+
+ supported API functions: (register / get / set)_option
+
+ Warning: This is not thread-safe, and won't work properly if you import
+ the API functions into your module using the "from x import y" construct.
+
+ Example:
+
+ import pandas.core.config as cf
+ with cf.config_prefix("display.font"):
+ cf.register_option("color", "red")
+ cf.register_option("size", " 5 pt")
+ cf.set_option(size, " 6 pt")
+ cf.get_option(size)
+ ...
+
+ etc'
+
+ will register options "display.font.color", "display.font.size", set the
+ value of "display.font.size"... and so on.
+ """
+ # Note: reset_option relies on set_option, and on key directly
+ # it does not fit in to this monkey-patching scheme
+
+ global register_option, get_option, set_option, reset_option
+
+ def wrap(func):
+ def inner(key, *args, **kwds):
+ pkey="%s.%s" % (prefix, key)
+ return func(pkey, *args, **kwds)
+ return inner
+
+ _register_option = register_option
+ _get_option = get_option
+ _set_option = set_option
+ set_option = wrap(set_option)
+ get_option = wrap(get_option)
+ register_option = wrap(register_option)
+ yield
+ set_option = _set_option
+ get_option = _get_option
+ register_option = _register_option
+
+
+# These factories and methods are handy for use as the validator
+# arg in register_option
+def is_type_factory(_type):
+ """
+
+ Parameters
+ ----------
+ `_type` - a type to be compared against (e.g. type(x) == `_type`)
+
+ Returns
+ -------
+ validator - a function of a single argument x, which raises
+ ValueError if type(x) is not equal to `_type`
+
+ """
+ def inner(x):
+ if type(x) != _type:
+ raise ValueError("Value must have type '%s'" % str(_type))
+
+ return inner
+
+
+def is_instance_factory(_type):
+ """
+
+ Parameters
+ ----------
+ `_type` - the type to be checked against
+
+ Returns
+ -------
+ validator - a function of a single argument x, which raises
+ ValueError if x is not an instance of `_type`
+
+ """
+ def inner(x):
+ if not isinstance(x, _type):
+ raise ValueError("Value must be an instance of '%s'" % str(_type))
+
+ return inner
+
+# common type validators, for convenience
+# usage: register_option(... , validator = is_int)
+is_int = is_type_factory(int)
+is_bool = is_type_factory(bool)
+is_float = is_type_factory(float)
+is_str = is_type_factory(str)
+is_unicode = is_type_factory(unicode)
+is_text = is_instance_factory(basestring)
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
new file mode 100644
index 0000000000000..b0279a94983d9
--- /dev/null
+++ b/pandas/core/config_init.py
@@ -0,0 +1,103 @@
+from __future__ import with_statement # support python 2.5
+
+import pandas.core.config as cf
+from pandas.core.config import is_int,is_bool,is_text,is_float
+from pandas.core.format import detect_console_encoding
+
+"""
+This module is imported from the pandas package __init__.py file
+in order to ensure that the core.config options registered here will
+be available as soon as the user loads the package. if register_option
+is invoked inside specific modules, they will not be registered until that
+module is imported, which may or may not be a problem.
+
+If you need to make sure options are available even before a certain
+module is imported, register them here rather then in the module.
+
+"""
+
+
+###########################################
+# options from the "print_config" namespace
+
+pc_precision_doc="""
+: int
+ Floating point output precision (number of significant digits). This is
+ only a suggestion
+"""
+
+pc_colspace_doc="""
+: int
+ Default space for DataFrame columns, defaults to 12
+"""
+
+pc_max_rows_doc="""
+: int
+"""
+
+pc_max_cols_doc="""
+: int
+ max_rows and max_columns are used in __repr__() methods to decide if
+ to_string() or info() is used to render an object to a string.
+ Either one, or both can be set to 0 (experimental). Pandas will figure
+ out how big the terminal is and will not display more rows or/and
+ columns that can fit on it.
+"""
+
+pc_nb_repr_h_doc="""
+: boolean
+ When True (default), IPython notebook will use html representation for
+ pandas objects (if it is available).
+"""
+
+pc_date_dayfirst_doc="""
+: boolean
+ When True, prints and parses dates with the day first, eg 20/01/2005
+"""
+
+pc_date_yearfirst_doc="""
+: boolean
+ When True, prints and parses dates with the year first, eg 2005/01/20
+"""
+
+pc_pprint_nest_depth="""
+: int
+ Defaults to 3.
+ Controls the number of nested levels to process when pretty-printing
+"""
+
+pc_multi_sparse_doc="""
+: boolean
+ Default True, "sparsify" MultiIndex display (don't display repeated
+ elements in outer levels within groups)
+"""
+
+pc_encoding_doc="""
+: str/unicode
+ Defaults to the detected encoding of the console.
+ Specifies the encoding to be used for strings returned by to_string,
+ these are generally strings meant to be displayed on the console.
+"""
+
+with cf.config_prefix('print_config'):
+ cf.register_option('precision', 7, pc_precision_doc, validator=is_int)
+ cf.register_option('digits', 7, validator=is_int)
+ cf.register_option('float_format', None)
+ cf.register_option('column_space', 12, validator=is_int)
+ cf.register_option('max_rows', 200, pc_max_rows_doc, validator=is_int)
+ cf.register_option('max_colwidth', 50, validator=is_int)
+ cf.register_option('max_columns', 0, pc_max_cols_doc, validator=is_int)
+ cf.register_option('colheader_justify', 'right',
+ validator=is_text)
+ cf.register_option('notebook_repr_html', True, pc_nb_repr_h_doc,
+ validator=is_bool)
+ cf.register_option('date_dayfirst', False, pc_date_dayfirst_doc,
+ validator=is_bool)
+ cf.register_option('date_yearfirst', False, pc_date_yearfirst_doc,
+ validator=is_bool)
+ cf.register_option('pprint_nest_depth', 3, pc_pprint_nest_depth,
+ validator=is_int)
+ cf.register_option('multi_sparse', True, pc_multi_sparse_doc,
+ validator=is_bool)
+ cf.register_option('encoding', detect_console_encoding(), pc_encoding_doc,
+ validator=is_text)
diff --git a/pandas/core/format.py b/pandas/core/format.py
index 4230f3c19aba6..0a91be9908172 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -11,7 +11,8 @@
from pandas.core.common import adjoin, isnull, notnull
from pandas.core.index import MultiIndex, _ensure_index
from pandas.util import py3compat
-
+from pandas.core.config import get_option, set_option, \
+ reset_options
import pandas.core.common as com
import pandas.lib as lib
@@ -69,7 +70,7 @@ def __init__(self, series, buf=None, header=True, length=True,
self.header = header
if float_format is None:
- float_format = print_config.float_format
+ float_format = get_option("print_config.float_format")
self.float_format = float_format
def _get_footer(self):
@@ -145,11 +146,11 @@ def to_string(self):
_strlen = len
else:
def _encode_diff(x):
- return len(x) - len(x.decode(print_config.encoding))
+ return len(x) - len(x.decode(get_option("print_config.encoding")))
def _strlen(x):
try:
- return len(x.decode(print_config.encoding))
+ return len(x.decode(get_option("print_config.encoding")))
except UnicodeError:
return len(x)
@@ -176,7 +177,7 @@ def __init__(self, frame, buf=None, columns=None, col_space=None,
self.show_index_names = index_names
if sparsify is None:
- sparsify = print_config.multi_sparse
+ sparsify = get_option("print_config.multi_sparse")
self.sparsify = sparsify
@@ -188,7 +189,7 @@ def __init__(self, frame, buf=None, columns=None, col_space=None,
self.index = index
if justify is None:
- self.justify = print_config.colheader_justify
+ self.justify = get_option("print_config.colheader_justify")
else:
self.justify = justify
@@ -697,13 +698,13 @@ def format_array(values, formatter, float_format=None, na_rep='NaN',
fmt_klass = GenericArrayFormatter
if space is None:
- space = print_config.column_space
+ space = get_option("print_config.column_space")
if float_format is None:
- float_format = print_config.float_format
+ float_format = get_option("print_config.float_format")
if digits is None:
- digits = print_config.precision
+ digits = get_option("print_config.precision")
fmt_obj = fmt_klass(values, digits, na_rep=na_rep,
float_format=float_format,
@@ -739,9 +740,9 @@ def _have_unicode(self):
def _format_strings(self, use_unicode=False):
if self.float_format is None:
- float_format = print_config.float_format
+ float_format = get_option("print_config.float_format")
if float_format is None:
- fmt_str = '%% .%dg' % print_config.precision
+ fmt_str = '%% .%dg' % get_option("print_config.precision")
float_format = lambda x: fmt_str % x
else:
float_format = self.float_format
@@ -863,7 +864,7 @@ def _make_fixed_width(strings, justify='right', minimum=None):
if minimum is not None:
max_len = max(minimum, max_len)
- conf_max = print_config.max_colwidth
+ conf_max = get_option("print_config.max_colwidth")
if conf_max is not None and max_len > conf_max:
max_len = conf_max
@@ -941,7 +942,7 @@ def set_printoptions(precision=None, column_space=None, max_rows=None,
max_columns=None, colheader_justify=None,
max_colwidth=None, notebook_repr_html=None,
date_dayfirst=None, date_yearfirst=None,
- multi_sparse=None, encoding=None):
+ pprint_nest_depth=None,multi_sparse=None, encoding=None):
"""
Alter default behavior of DataFrame.toString
@@ -965,37 +966,65 @@ def set_printoptions(precision=None, column_space=None, max_rows=None,
When True, prints and parses dates with the day first, eg 20/01/2005
date_yearfirst : boolean
When True, prints and parses dates with the year first, eg 2005/01/20
+ pprint_nest_depth : int
+ Defaults to 3.
+ Controls the number of nested levels to process when pretty-printing
+ nested sequences.
multi_sparse : boolean
Default True, "sparsify" MultiIndex display (don't display repeated
elements in outer levels within groups)
"""
if precision is not None:
- print_config.precision = precision
+ set_option("print_config.precision", precision)
if column_space is not None:
- print_config.column_space = column_space
+ set_option("print_config.column_space", column_space)
if max_rows is not None:
- print_config.max_rows = max_rows
+ set_option("print_config.max_rows", max_rows)
if max_colwidth is not None:
- print_config.max_colwidth = max_colwidth
+ set_option("print_config.max_colwidth", max_colwidth)
if max_columns is not None:
- print_config.max_columns = max_columns
+ set_option("print_config.max_columns", max_columns)
if colheader_justify is not None:
- print_config.colheader_justify = colheader_justify
+ set_option("print_config.colheader_justify", colheader_justify)
if notebook_repr_html is not None:
- print_config.notebook_repr_html = notebook_repr_html
+ set_option("print_config.notebook_repr_html", notebook_repr_html)
if date_dayfirst is not None:
- print_config.date_dayfirst = date_dayfirst
+ set_option("print_config.date_dayfirst", date_dayfirst)
if date_yearfirst is not None:
- print_config.date_yearfirst = date_yearfirst
+ set_option("print_config.date_yearfirst", date_yearfirst)
+ if pprint_nest_depth is not None:
+ set_option("print_config.pprint_nest_depth", pprint_nest_depth)
if multi_sparse is not None:
- print_config.multi_sparse = multi_sparse
+ set_option("print_config.multi_sparse", multi_sparse)
if encoding is not None:
- print_config.encoding = encoding
-
+ set_option("print_config.encoding", encoding)
def reset_printoptions():
- print_config.reset()
+ reset_options("print_config.")
+
+def detect_console_encoding():
+ """
+ Try to find the most capable encoding supported by the console.
+ slighly modified from the way IPython handles the same issue.
+ """
+ import locale
+
+ encoding = None
+ try:
+ encoding=sys.stdin.encoding
+ except AttributeError:
+ pass
+ if not encoding or encoding =='ascii': # try again for something better
+ try:
+ encoding = locale.getpreferredencoding()
+ except Exception:
+ pass
+
+ if not encoding: # when all else fails. this will usually be "ascii"
+ encoding = sys.getdefaultencoding()
+
+ return encoding
class EngFormatter(object):
"""
@@ -1103,59 +1132,8 @@ def set_eng_float_format(precision=None, accuracy=3, use_eng_prefix=False):
"being renamed to 'accuracy'", FutureWarning)
accuracy = precision
- print_config.float_format = EngFormatter(accuracy, use_eng_prefix)
- print_config.column_space = max(12, accuracy + 9)
-
-
-class _GlobalPrintConfig(object):
- """
- Holds the console formatting settings for DataFrame and friends
- """
-
- def __init__(self):
- self.precision = self.digits = 7
- self.float_format = None
- self.column_space = 12
- self.max_rows = 200
- self.max_colwidth = 50
- self.max_columns = 0
- self.colheader_justify = 'right'
- self.notebook_repr_html = True
- self.date_dayfirst = False
- self.date_yearfirst = False
- self.pprint_nest_depth = 3
- self.multi_sparse = True
- self.encoding = self.detect_encoding()
-
- def detect_encoding(self):
- """
- Try to find the most capable encoding supported by the console.
- slighly modified from the way IPython handles the same issue.
- """
- import locale
-
- encoding = None
- try:
- encoding = sys.stdin.encoding
- except AttributeError:
- pass
-
- if not encoding or encoding == 'ascii': # try again for better
- try:
- encoding = locale.getpreferredencoding()
- except Exception:
- pass
-
- if not encoding: # when all else fails. this will usually be "ascii"
- encoding = sys.getdefaultencoding()
-
- return encoding
-
- def reset(self):
- self.__init__()
-
-print_config = _GlobalPrintConfig()
-
+ set_option("print_config.float_format", EngFormatter(accuracy, use_eng_prefix))
+ set_option("print_config.column_space", max(12, accuracy + 9))
def _put_lines(buf, lines):
if any(isinstance(x, unicode) for x in lines):
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index aeed377e35fa2..df764bb36a3c0 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -44,6 +44,8 @@
import pandas.core.nanops as nanops
import pandas.lib as lib
+from pandas.core.config import get_option
+
#----------------------------------------------------------------------
# Docstring templates
@@ -579,12 +581,11 @@ def _need_info_repr_(self):
Check if it is needed to use info/summary view to represent a
particular DataFrame.
"""
- config = fmt.print_config
terminal_width, terminal_height = get_terminal_size()
- max_rows = (terminal_height if config.max_rows == 0
- else config.max_rows)
- max_columns = config.max_columns
+ max_rows = (terminal_height if get_option("print_config.max_rows") == 0
+ else get_option("print_config.max_rows"))
+ max_columns = get_option("print_config.max_columns")
if max_columns > 0:
if len(self.index) <= max_rows and \
@@ -628,7 +629,7 @@ def _repr_html_(self):
Return a html representation for a particular DataFrame.
Mainly for IPython notebook.
"""
- if fmt.print_config.notebook_repr_html:
+ if get_option("print_config.notebook_repr_html"):
if self._need_info_repr_():
return None
else:
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 035d2531f382f..83f4d26fd7fb2 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -13,6 +13,7 @@
import pandas._algos as _algos
from pandas.lib import Timestamp
from pandas.util import py3compat
+from pandas.core.config import get_option
__all__ = ['Index']
@@ -1514,8 +1515,7 @@ def format(self, space=2, sparsify=None, adjoin=True, names=False,
result_levels.append(level)
if sparsify is None:
- import pandas.core.format as fmt
- sparsify = fmt.print_config.multi_sparse
+ sparsify = get_option("print_config.multi_sparse")
if sparsify:
# little bit of a kludge job for #1217
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 8101dace1a15f..1a3baa223f0a4 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -33,6 +33,7 @@
from pandas.util.decorators import Appender, Substitution, cache_readonly
from pandas.compat.scipy import scoreatpercentile as _quantile
+from pandas.core.config import get_option
__all__ = ['Series', 'TimeSeries']
@@ -914,8 +915,8 @@ def reset_index(self, level=None, drop=False, name=None, inplace=False):
def __repr__(self):
"""Clean string representation of a Series"""
width, height = get_terminal_size()
- max_rows = (height if fmt.print_config.max_rows == 0
- else fmt.print_config.max_rows)
+ max_rows = (height if get_option("print_config.max_rows") == 0
+ else get_option("print_config.max_rows"))
if len(self.index) > (max_rows or 1000):
result = self._tidy_repr(min(30, max_rows - 4))
elif len(self.index) > 0:
diff --git a/pandas/tests/test_config.py b/pandas/tests/test_config.py
new file mode 100644
index 0000000000000..862b0d29ffcdf
--- /dev/null
+++ b/pandas/tests/test_config.py
@@ -0,0 +1,267 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+from __future__ import with_statement # support python 2.5
+import pandas as pd
+import unittest
+import warnings
+import nose
+
+class TestConfig(unittest.TestCase):
+
+ def __init__(self,*args):
+ super(TestConfig,self).__init__(*args)
+
+ from copy import deepcopy
+ self.cf = pd.core.config
+ self.gc=deepcopy(getattr(self.cf, '__global_config'))
+ self.do=deepcopy(getattr(self.cf, '__deprecated_options'))
+ self.ro=deepcopy(getattr(self.cf, '__registered_options'))
+
+ def setUp(self):
+ setattr(self.cf, '__global_config', {})
+ setattr(self.cf, '__deprecated_options', {})
+ setattr(self.cf, '__registered_options', {})
+
+ def tearDown(self):
+ setattr(self.cf, '__global_config',self.gc)
+ setattr(self.cf, '__deprecated_options', self.do)
+ setattr(self.cf, '__registered_options', self.ro)
+
+ def test_api(self):
+
+ #the pandas object exposes the user API
+ self.assertTrue(hasattr(pd, 'get_option'))
+ self.assertTrue(hasattr(pd, 'set_option'))
+ self.assertTrue(hasattr(pd, 'reset_option'))
+ self.assertTrue(hasattr(pd, 'reset_options'))
+ self.assertTrue(hasattr(pd, 'describe_options'))
+
+ def test_register_option(self):
+ self.cf.register_option('a', 1, 'doc')
+
+ # can't register an already registered option
+ self.assertRaises(KeyError, self.cf.register_option, 'a', 1, 'doc')
+
+ # can't register an already registered option
+ self.assertRaises(KeyError, self.cf.register_option, 'a.b.c.d1', 1,
+ 'doc')
+ self.assertRaises(KeyError, self.cf.register_option, 'a.b.c.d2', 1,
+ 'doc')
+
+ # we can register options several levels deep
+ # without predefining the intermediate steps
+ # and we can define differently named options
+ # in the same namespace
+ self.cf.register_option('k.b.c.d1', 1, 'doc')
+ self.cf.register_option('k.b.c.d2', 1, 'doc')
+
+ def test_describe_options(self):
+ self.cf.register_option('a', 1, 'doc')
+ self.cf.register_option('b', 1, 'doc2')
+ self.cf.deprecate_option('b')
+
+ self.cf.register_option('c.d.e1', 1, 'doc3')
+ self.cf.register_option('c.d.e2', 1, 'doc4')
+ self.cf.register_option('f', 1)
+ self.cf.register_option('g.h', 1)
+ self.cf.deprecate_option('g.h',rkey="blah")
+
+ # non-existent keys raise KeyError
+ self.assertRaises(KeyError, self.cf.describe_options, 'no.such.key')
+
+ # we can get the description for any key we registered
+ self.assertTrue('doc' in self.cf.describe_options('a',_print_desc=False))
+ self.assertTrue('doc2' in self.cf.describe_options('b',_print_desc=False))
+ self.assertTrue('precated' in self.cf.describe_options('b',_print_desc=False))
+
+ self.assertTrue('doc3' in self.cf.describe_options('c.d.e1',_print_desc=False))
+ self.assertTrue('doc4' in self.cf.describe_options('c.d.e2',_print_desc=False))
+
+ # if no doc is specified we get a default message
+ # saying "description not available"
+ self.assertTrue('vailable' in self.cf.describe_options('f',_print_desc=False))
+ self.assertTrue('vailable' in self.cf.describe_options('g.h',_print_desc=False))
+ self.assertTrue('precated' in self.cf.describe_options('g.h',_print_desc=False))
+ self.assertTrue('blah' in self.cf.describe_options('g.h',_print_desc=False))
+
+ def test_get_option(self):
+ self.cf.register_option('a', 1, 'doc')
+ self.cf.register_option('b.a', 'hullo', 'doc2')
+ self.cf.register_option('b.b', None, 'doc2')
+
+ # gets of existing keys succeed
+ self.assertEqual(self.cf.get_option('a'), 1)
+ self.assertEqual(self.cf.get_option('b.a'), 'hullo')
+ self.assertTrue(self.cf.get_option('b.b') is None)
+
+ # gets of non-existent keys fail
+ self.assertRaises(KeyError, self.cf.get_option, 'no_such_option')
+
+ def test_set_option(self):
+ self.cf.register_option('a', 1, 'doc')
+ self.cf.register_option('b.a', 'hullo', 'doc2')
+ self.cf.register_option('b.b', None, 'doc2')
+
+ self.assertEqual(self.cf.get_option('a'), 1)
+ self.assertEqual(self.cf.get_option('b.a'), 'hullo')
+ self.assertTrue(self.cf.get_option('b.b') is None)
+
+ self.cf.set_option('a', 2)
+ self.cf.set_option('b.a', 'wurld')
+ self.cf.set_option('b.b', 1.1)
+
+ self.assertEqual(self.cf.get_option('a'), 2)
+ self.assertEqual(self.cf.get_option('b.a'), 'wurld')
+ self.assertEqual(self.cf.get_option('b.b'), 1.1)
+
+ self.assertRaises(KeyError, self.cf.set_option, 'no.such.key', None)
+
+ def test_validation(self):
+ self.cf.register_option('a', 1, 'doc', validator=self.cf.is_int)
+ self.cf.register_option('b.a', 'hullo', 'doc2',
+ validator=self.cf.is_text)
+ self.assertRaises(ValueError, self.cf.register_option, 'a.b.c.d2',
+ 'NO', 'doc', validator=self.cf.is_int)
+
+ self.cf.set_option('a', 2) # int is_int
+ self.cf.set_option('b.a', 'wurld') # str is_str
+
+ self.assertRaises(ValueError, self.cf.set_option, 'a', None) # None not is_int
+ self.assertRaises(ValueError, self.cf.set_option, 'a', 'ab')
+ self.assertRaises(ValueError, self.cf.set_option, 'b.a', 1)
+
+ def test_reset_option(self):
+ self.cf.register_option('a', 1, 'doc', validator=self.cf.is_int)
+ self.cf.register_option('b.a', 'hullo', 'doc2',
+ validator=self.cf.is_str)
+ self.assertEqual(self.cf.get_option('a'), 1)
+ self.assertEqual(self.cf.get_option('b.a'), 'hullo')
+
+ self.cf.set_option('a', 2)
+ self.cf.set_option('b.a', 'wurld')
+ self.assertEqual(self.cf.get_option('a'), 2)
+ self.assertEqual(self.cf.get_option('b.a'), 'wurld')
+
+ self.cf.reset_option('a')
+ self.assertEqual(self.cf.get_option('a'), 1)
+ self.assertEqual(self.cf.get_option('b.a'), 'wurld')
+ self.cf.reset_option('b.a')
+ self.assertEqual(self.cf.get_option('a'), 1)
+ self.assertEqual(self.cf.get_option('b.a'), 'hullo')
+
+ def test_reset_options(self):
+ self.cf.register_option('a', 1, 'doc', validator=self.cf.is_int)
+ self.cf.register_option('b.a', 'hullo', 'doc2',
+ validator=self.cf.is_str)
+ self.assertEqual(self.cf.get_option('a'), 1)
+ self.assertEqual(self.cf.get_option('b.a'), 'hullo')
+
+ self.cf.set_option('a', 2)
+ self.cf.set_option('b.a', 'wurld')
+ self.assertEqual(self.cf.get_option('a'), 2)
+ self.assertEqual(self.cf.get_option('b.a'), 'wurld')
+
+ self.cf.reset_options()
+ self.assertEqual(self.cf.get_option('a'), 1)
+ self.assertEqual(self.cf.get_option('b.a'), 'hullo')
+
+
+ def test_deprecate_option(self):
+ import sys
+ self.cf.deprecate_option('c') # we can deprecate non-existent options
+
+ # testing warning with catch_warning was only added in 2.6
+ if sys.version_info[:2]<(2,6):
+ raise nose.SkipTest()
+
+ self.assertTrue(self.cf._is_deprecated('c'))
+ with warnings.catch_warnings(record=True) as w:
+ warnings.simplefilter('always')
+ try:
+ self.cf.get_option('c')
+ except KeyError:
+ pass
+ else:
+ self.fail("Nonexistent option didn't raise KeyError")
+
+ self.assertEqual(len(w), 1) # should have raised one warning
+ self.assertTrue('deprecated' in str(w[-1])) # we get the default message
+
+ self.cf.register_option('a', 1, 'doc', validator=self.cf.is_int)
+ self.cf.register_option('b.a', 'hullo', 'doc2')
+ self.cf.register_option('c', 'hullo', 'doc2')
+
+ self.cf.deprecate_option('a', removal_ver='nifty_ver')
+ with warnings.catch_warnings(record=True) as w:
+ warnings.simplefilter('always')
+ self.cf.get_option('a')
+
+ self.assertEqual(len(w), 1) # should have raised one warning
+ self.assertTrue('eprecated' in str(w[-1])) # we get the default message
+ self.assertTrue('nifty_ver' in str(w[-1])) # with the removal_ver quoted
+
+ self.assertRaises(KeyError, self.cf.deprecate_option, 'a') # can't depr. twice
+
+ self.cf.deprecate_option('b.a', 'zounds!')
+ with warnings.catch_warnings(record=True) as w:
+ warnings.simplefilter('always')
+ self.cf.get_option('b.a')
+
+ self.assertEqual(len(w), 1) # should have raised one warning
+ self.assertTrue('zounds!' in str(w[-1])) # we get the custom message
+
+ # test rerouting keys
+ self.cf.register_option('d.a', 'foo', 'doc2')
+ self.cf.register_option('d.dep', 'bar', 'doc2')
+ self.assertEqual(self.cf.get_option('d.a'), 'foo')
+ self.assertEqual(self.cf.get_option('d.dep'), 'bar')
+
+ self.cf.deprecate_option('d.dep', rkey='d.a') # reroute d.dep to d.a
+ with warnings.catch_warnings(record=True) as w:
+ warnings.simplefilter('always')
+ self.assertEqual(self.cf.get_option('d.dep'), 'foo')
+
+ self.assertEqual(len(w), 1) # should have raised one warning
+ self.assertTrue('eprecated' in str(w[-1])) # we get the custom message
+
+ with warnings.catch_warnings(record=True) as w:
+ warnings.simplefilter('always')
+ self.cf.set_option('d.dep', 'baz') # should overwrite "d.a"
+
+ self.assertEqual(len(w), 1) # should have raised one warning
+ self.assertTrue('eprecated' in str(w[-1])) # we get the custom message
+
+ with warnings.catch_warnings(record=True) as w:
+ warnings.simplefilter('always')
+ self.assertEqual(self.cf.get_option('d.dep'), 'baz')
+
+ self.assertEqual(len(w), 1) # should have raised one warning
+ self.assertTrue('eprecated' in str(w[-1])) # we get the custom message
+
+ def test_config_prefix(self):
+ with self.cf.config_prefix("base"):
+ self.cf.register_option('a',1,"doc1")
+ self.cf.register_option('b',2,"doc2")
+ self.assertEqual(self.cf.get_option('a'), 1)
+ self.assertEqual(self.cf.get_option('b'), 2)
+
+ self.cf.set_option('a',3)
+ self.cf.set_option('b',4)
+ self.assertEqual(self.cf.get_option('a'), 3)
+ self.assertEqual(self.cf.get_option('b'), 4)
+
+ self.assertEqual(self.cf.get_option('base.a'), 3)
+ self.assertEqual(self.cf.get_option('base.b'), 4)
+ self.assertTrue('doc1' in self.cf.describe_options('base.a',_print_desc=False))
+ self.assertTrue('doc2' in self.cf.describe_options('base.b',_print_desc=False))
+
+ self.cf.reset_option('base.a')
+ self.cf.reset_option('base.b')
+
+ with self.cf.config_prefix("base"):
+ self.assertEqual(self.cf.get_option('a'), 1)
+ self.assertEqual(self.cf.get_option('b'), 2)
+
+
+# fmt.reset_printoptions and fmt.set_printoptions were altered
+# to use core.config, test_format exercises those paths.
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 542e5ee964362..7238ae134252b 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -19,6 +19,7 @@
import pandas.util.testing as tm
import pandas
import pandas as pd
+from pandas.core.config import set_option,get_option
_frame = DataFrame(tm.getSeriesData())
@@ -64,7 +65,7 @@ def test_repr_tuples(self):
def test_repr_truncation(self):
max_len = 20
- fmt.print_config.max_colwidth = max_len
+ set_option("print_config.max_colwidth", max_len)
df = DataFrame({'A': np.random.randn(10),
'B': [tm.rands(np.random.randint(max_len - 1,
max_len + 1)) for i in range(10)]})
@@ -76,10 +77,10 @@ def test_repr_truncation(self):
else:
self.assert_('...' not in line)
- fmt.print_config.max_colwidth = None
+ set_option("print_config.max_colwidth", 999999)
self.assert_('...' not in repr(df))
- fmt.print_config.max_colwidth = max_len + 2
+ set_option("print_config.max_colwidth", max_len + 2)
self.assert_('...' not in repr(df))
def test_repr_should_return_str (self):
@@ -425,7 +426,7 @@ def test_to_string_float_formatting(self):
assert(df_s == expected)
fmt.reset_printoptions()
- self.assertEqual(fmt.print_config.precision, 7)
+ self.assertEqual(get_option("print_config.precision"), 7)
df = DataFrame({'x': [1e9, 0.2512]})
df_s = df.to_string()
diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py
index 9e1c451c42887..dbb75f1e749c0 100644
--- a/pandas/tseries/tools.py
+++ b/pandas/tseries/tools.py
@@ -152,7 +152,7 @@ def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None):
-------
datetime, datetime/dateutil.parser._result, str
"""
- from pandas.core.format import print_config
+ from pandas.core.config import get_option
from pandas.tseries.offsets import DateOffset
from pandas.tseries.frequencies import (_get_rule_month, _month_numbers,
_get_freq_str)
@@ -221,9 +221,9 @@ def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None):
return mresult
if dayfirst is None:
- dayfirst = print_config.date_dayfirst
+ dayfirst = get_option("print_config.date_dayfirst")
if yearfirst is None:
- yearfirst = print_config.date_yearfirst
+ yearfirst = get_option("print_config.date_yearfirst")
try:
parsed = parse(arg, dayfirst=dayfirst, yearfirst=yearfirst)
| PR for #2081
## Summary
- core.config is a new module which serves as a general mechanism for working with configurables.
- supersedes fmt.print_config, while remaining backward compatible (`set_printoptions()` and `reset_printoptions()` still work).
- adds the following user APIs: `get_option()`, `set_option()`, `reset_option()`, `reset_options()` and `describe_option()` (all under the top-level pandas module).
- adds the following for developer use: `register_option()` and `deprecate_option()`.
## TL;DR description
```
ENH: Add core.config module for managing package-wide configurables
The config module holds package-wide configurables and provides
a uniform API for working with them.
Overview
========
This module supports the following requirements:
- options are referenced using keys in dot.notation, e.g. "x.y.option - z".
- options can be registered by modules at import time.
- options can be registered at init-time (via core.config_init)
- options have a default value, and (optionally) a description and
validation function associated with them.
- options can be deprecated, in which case referencing them
should produce a warning.
- deprecated options can optionally be rerouted to a replacement
so that accessing a deprecated option reroutes to a differently
named option.
- options can be reset to their default value.
- all options can be reset to their default value at once.
- all options in a certain sub - namespace can be reset at once.
- the user can set / get / reset or ask for the description of an option.
- a developer can register and mark an option as deprecated.
Implementation
==============
- Data is stored using nested dictionaries, and should be accessed
through the provided API.
- "Registered options" and "Deprecated options" have metadata associated
 with them, which are stored in auxiliary dictionaries keyed on the
fully-qualified key, e.g. "x.y.z.option".
- the config_init module is imported by the package's __init__.py file.
  Placing any register_option() calls there will ensure those options
are available as soon as pandas is loaded. If you use register_option
in a module, it will only be available after that module is imported,
which you should be aware of.
- `config_prefix` is a context_manager (for use with the `with` keyword)
which can save developers some typing, see the docstring.
```
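The mechanics described above (dot-notation keys, registration with a default and validator, set/get/reset) can be sketched as a tiny flat registry. This is an illustration under simplified assumptions only: the function names mirror the PR's API, but the bodies are not pandas' actual implementation (which uses nested dictionaries and supports deprecation/rerouting as well).

```python
# Simplified sketch of the option registry -- NOT pandas' implementation.
_registered = {}  # fully-qualified key -> (default, doc, validator)
_config = {}      # fully-qualified key -> current value


def register_option(key, default, doc='', validator=None):
    """Register an option with a default, a doc string and an optional validator."""
    if key in _registered:
        raise KeyError("Option '%s' has already been registered" % key)
    if validator is not None:
        validator(default)  # validators raise ValueError on bad values
    _registered[key] = (default, doc, validator)
    _config[key] = default


def get_option(key):
    return _config[key]  # raises KeyError for unregistered keys


def set_option(key, value):
    if key not in _registered:
        raise KeyError("No such option: '%s'" % key)
    validator = _registered[key][2]
    if validator is not None:
        validator(value)
    _config[key] = value


def reset_option(key):
    _config[key] = _registered[key][0]


def is_int(x):
    """Validator in the style of is_type_factory: raise on bad input."""
    if not isinstance(x, int):
        raise ValueError("Value must have type 'int'")


# usage
register_option('print_config.max_rows', 200, 'max rows shown', validator=is_int)
set_option('print_config.max_rows', 50)
print(get_option('print_config.max_rows'))   # 50
reset_option('print_config.max_rows')
print(get_option('print_config.max_rows'))   # 200
```

Note how the validator runs both at registration time (guarding the default) and on every `set_option` call, which is the same contract the PR's `is_int`/`is_text` convenience validators follow.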
| https://api.github.com/repos/pandas-dev/pandas/pulls/2097 | 2012-10-21T11:28:51Z | 2012-11-27T22:19:28Z | 2012-11-27T22:19:27Z | 2014-06-13T09:21:50Z |
STY: pep8 | diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index d5380b66a43f6..cb7314a26689f 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -9,6 +9,7 @@
import pandas.lib as lib
import pandas._algos as _algos
+
def match(to_match, values, na_sentinel=-1):
"""
Compute locations of to_match into values
@@ -36,6 +37,7 @@ def match(to_match, values, na_sentinel=-1):
f = lambda htype, caster: _match_generic(to_match, values, htype, caster)
return _hashtable_algo(f, values.dtype)
+
def unique(values):
"""
Compute unique values (not necessarily sorted) efficiently from input array
@@ -62,6 +64,7 @@ def count(values, uniques=None):
else:
return _hashtable_algo(f, values.dtype)
+
def _hashtable_algo(f, dtype):
"""
f(HashTable, type_caster) -> result
@@ -83,6 +86,7 @@ def _count_generic(values, table_type, type_caster):
return Series(counts, index=uniques)
+
def _match_generic(values, index, table_type, type_caster):
values = type_caster(values)
index = type_caster(index)
@@ -90,6 +94,7 @@ def _match_generic(values, index, table_type, type_caster):
table.map_locations(index)
return table.lookup(values)
+
def _unique_generic(values, table_type, type_caster):
values = type_caster(values)
table = table_type(min(len(values), 1000000))
@@ -138,6 +143,7 @@ def factorize(values, sort=False, order=None, na_sentinel=-1):
return labels, uniques, counts
+
def value_counts(values, sort=True, ascending=False):
"""
Compute a histogram of the counts of non-null values
@@ -192,6 +198,7 @@ def rank(values, axis=0, method='average', na_option='keep',
ascending=ascending)
return ranks
+
def quantile(x, q, interpolation_method='fraction'):
"""
Compute sample quantile or quantiles of the input array. For example, q=0.5
@@ -254,8 +261,8 @@ def _get_score(at):
elif interpolation_method == 'higher':
score = values[np.ceil(idx)]
else:
- raise ValueError("interpolation_method can only be 'fraction', " \
- "'lower' or 'higher'")
+ raise ValueError("interpolation_method can only be 'fraction' "
+ ", 'lower' or 'higher'")
return score
@@ -265,11 +272,12 @@ def _get_score(at):
q = np.asarray(q, np.float64)
return _algos.arrmap_float64(q, _get_score)
+
def _interpolate(a, b, fraction):
"""Returns the point at the given fraction between a and b, where
'fraction' must be between 0 and 1.
"""
- return a + (b - a)*fraction
+ return a + (b - a) * fraction
def _get_data_algo(values, func_map):
@@ -287,6 +295,7 @@ def _get_data_algo(values, func_map):
values = com._ensure_object(values)
return f, values
+
def group_position(*args):
"""
Get group position
@@ -303,19 +312,19 @@ def group_position(*args):
_rank1d_functions = {
- 'float64' : lib.rank_1d_float64,
- 'int64' : lib.rank_1d_int64,
- 'generic' : lib.rank_1d_generic
+ 'float64': lib.rank_1d_float64,
+ 'int64': lib.rank_1d_int64,
+ 'generic': lib.rank_1d_generic
}
_rank2d_functions = {
- 'float64' : lib.rank_2d_float64,
- 'int64' : lib.rank_2d_int64,
- 'generic' : lib.rank_2d_generic
+ 'float64': lib.rank_2d_float64,
+ 'int64': lib.rank_2d_int64,
+ 'generic': lib.rank_2d_generic
}
_hashtables = {
- 'float64' : lib.Float64HashTable,
- 'int64' : lib.Int64HashTable,
- 'generic' : lib.PyObjectHashTable
+ 'float64': lib.Float64HashTable,
+ 'int64': lib.Int64HashTable,
+ 'generic': lib.PyObjectHashTable
}
diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
index 34b05d1a2c01a..1ff23bcce2a9b 100644
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -24,6 +24,7 @@ def f(self, other):
return f
+
class Categorical(object):
"""
Represents a categorical variable in classic R / S-plus fashion
@@ -60,6 +61,7 @@ def from_array(cls, data):
name=getattr(data, 'name', None))
_levels = None
+
def _set_levels(self, levels):
from pandas.core.index import _ensure_index
@@ -95,7 +97,8 @@ def __repr__(self):
indent = ' ' * (levstring.find('[') + len(levheader) + 1)
lines = levstring.split('\n')
- levstring = '\n'.join([lines[0]] + [indent + x.lstrip() for x in lines[1:]])
+ levstring = '\n'.join([lines[0]] +
+ [indent + x.lstrip() for x in lines[1:]])
return temp % ('' if self.name is None else self.name,
repr(values), levheader + levstring)
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 8e851c67176f1..c400a5e11002e 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -30,15 +30,18 @@ def next(x):
try:
np.seterr(all='ignore')
# np.set_printoptions(suppress=True)
-except Exception: # pragma: no cover
+except Exception: # pragma: no cover
pass
+
class PandasError(Exception):
pass
+
class AmbiguousIndexError(PandasError, KeyError):
pass
+
def isnull(obj):
'''
Replacement for numpy.isnan / -numpy.isfinite which is suitable
@@ -66,6 +69,7 @@ def isnull(obj):
else:
return obj is None
+
def _isnull_ndarraylike(obj):
from pandas import Series
values = np.asarray(obj)
@@ -90,6 +94,7 @@ def _isnull_ndarraylike(obj):
result = -np.isfinite(obj)
return result
+
def notnull(obj):
'''
Replacement for numpy.isfinite / -numpy.isnan which is suitable
@@ -108,6 +113,7 @@ def notnull(obj):
return not res
return -res
+
def mask_missing(arr, values_to_mask):
"""
Return a masking array of same size/shape as arr
@@ -139,6 +145,7 @@ def mask_missing(arr, values_to_mask):
return mask
+
def _pickle_array(arr):
arr = arr.view(np.ndarray)
@@ -147,10 +154,12 @@ def _pickle_array(arr):
return buf.getvalue()
+
def _unpickle_array(bytes):
arr = read_array(BytesIO(bytes))
return arr
+
def _view_wrapper(f, wrap_dtype, na_override=None):
def wrapper(arr, indexer, out, fill_value=np.nan):
if na_override is not None and np.isnan(fill_value):
@@ -162,45 +171,46 @@ def wrapper(arr, indexer, out, fill_value=np.nan):
_take1d_dict = {
- 'float64' : _algos.take_1d_float64,
- 'int32' : _algos.take_1d_int32,
- 'int64' : _algos.take_1d_int64,
- 'object' : _algos.take_1d_object,
- 'bool' : _view_wrapper(_algos.take_1d_bool, np.uint8),
- 'datetime64[ns]' : _view_wrapper(_algos.take_1d_int64, np.int64,
- na_override=lib.iNaT),
+ 'float64': _algos.take_1d_float64,
+ 'int32': _algos.take_1d_int32,
+ 'int64': _algos.take_1d_int64,
+ 'object': _algos.take_1d_object,
+ 'bool': _view_wrapper(_algos.take_1d_bool, np.uint8),
+ 'datetime64[ns]': _view_wrapper(_algos.take_1d_int64, np.int64,
+ na_override=lib.iNaT),
}
_take2d_axis0_dict = {
- 'float64' : _algos.take_2d_axis0_float64,
- 'int32' : _algos.take_2d_axis0_int32,
- 'int64' : _algos.take_2d_axis0_int64,
- 'object' : _algos.take_2d_axis0_object,
- 'bool' : _view_wrapper(_algos.take_2d_axis0_bool, np.uint8),
- 'datetime64[ns]' : _view_wrapper(_algos.take_2d_axis0_int64, np.int64,
- na_override=lib.iNaT),
+ 'float64': _algos.take_2d_axis0_float64,
+ 'int32': _algos.take_2d_axis0_int32,
+ 'int64': _algos.take_2d_axis0_int64,
+ 'object': _algos.take_2d_axis0_object,
+ 'bool': _view_wrapper(_algos.take_2d_axis0_bool, np.uint8),
+ 'datetime64[ns]': _view_wrapper(_algos.take_2d_axis0_int64, np.int64,
+ na_override=lib.iNaT),
}
_take2d_axis1_dict = {
- 'float64' : _algos.take_2d_axis1_float64,
- 'int32' : _algos.take_2d_axis1_int32,
- 'int64' : _algos.take_2d_axis1_int64,
- 'object' : _algos.take_2d_axis1_object,
- 'bool' : _view_wrapper(_algos.take_2d_axis1_bool, np.uint8),
- 'datetime64[ns]' : _view_wrapper(_algos.take_2d_axis1_int64, np.int64,
+ 'float64': _algos.take_2d_axis1_float64,
+ 'int32': _algos.take_2d_axis1_int32,
+ 'int64': _algos.take_2d_axis1_int64,
+ 'object': _algos.take_2d_axis1_object,
+ 'bool': _view_wrapper(_algos.take_2d_axis1_bool, np.uint8),
+ 'datetime64[ns]': _view_wrapper(_algos.take_2d_axis1_int64, np.int64,
na_override=lib.iNaT),
}
_take2d_multi_dict = {
- 'float64' : _algos.take_2d_multi_float64,
- 'int32' : _algos.take_2d_multi_int32,
- 'int64' : _algos.take_2d_multi_int64,
- 'object' : _algos.take_2d_multi_object,
- 'bool' : _view_wrapper(_algos.take_2d_multi_bool, np.uint8),
- 'datetime64[ns]' : _view_wrapper(_algos.take_2d_multi_int64, np.int64,
- na_override=lib.iNaT),
+ 'float64': _algos.take_2d_multi_float64,
+ 'int32': _algos.take_2d_multi_int32,
+ 'int64': _algos.take_2d_multi_int64,
+ 'object': _algos.take_2d_multi_object,
+ 'bool': _view_wrapper(_algos.take_2d_multi_bool, np.uint8),
+ 'datetime64[ns]': _view_wrapper(_algos.take_2d_multi_int64, np.int64,
+ na_override=lib.iNaT),
}
+
def _get_take2d_function(dtype_str, axis=0):
if axis == 0:
return _take2d_axis0_dict[dtype_str]
@@ -208,9 +218,10 @@ def _get_take2d_function(dtype_str, axis=0):
return _take2d_axis1_dict[dtype_str]
elif axis == 'multi':
return _take2d_multi_dict[dtype_str]
- else: # pragma: no cover
+ else: # pragma: no cover
raise ValueError('bad axis: %s' % axis)
+
def take_1d(arr, indexer, out=None, fill_value=np.nan):
"""
Specialized Cython take which sets NaN values in one pass
@@ -258,6 +269,7 @@ def take_1d(arr, indexer, out=None, fill_value=np.nan):
return out
+
def take_2d_multi(arr, row_idx, col_idx, fill_value=np.nan, out=None):
dtype_str = arr.dtype.name
@@ -266,7 +278,7 @@ def take_2d_multi(arr, row_idx, col_idx, fill_value=np.nan, out=None):
if dtype_str in ('int32', 'int64', 'bool'):
row_mask = row_idx == -1
- col_mask= col_idx == -1
+ col_mask = col_idx == -1
needs_masking = row_mask.any() or col_mask.any()
if needs_masking:
@@ -348,15 +360,18 @@ def take_2d(arr, indexer, out=None, mask=None, needs_masking=None, axis=0,
fill_value=fill_value)
return result
+
def ndtake(arr, indexer, axis=0, out=None):
return arr.take(_ensure_platform_int(indexer), axis=axis, out=out)
+
def mask_out_axis(arr, mask, axis, fill_value=np.nan):
indexer = [slice(None)] * arr.ndim
indexer[axis] = mask
arr[tuple(indexer)] = fill_value
+
def take_fast(arr, indexer, mask, needs_masking, axis=0, out=None,
fill_value=np.nan):
if arr.ndim == 2:
@@ -369,6 +384,7 @@ def take_fast(arr, indexer, mask, needs_masking, axis=0, out=None,
out_passed=out is not None, fill_value=fill_value)
return result
+
def _maybe_mask(result, mask, needs_masking, axis=0, out_passed=False,
fill_value=np.nan):
if needs_masking:
@@ -380,6 +396,7 @@ def _maybe_mask(result, mask, needs_masking, axis=0, out_passed=False,
mask_out_axis(result, mask, axis, fill_value)
return result
+
def _maybe_upcast(values):
if issubclass(values.dtype.type, np.integer):
values = values.astype(float)
@@ -388,11 +405,13 @@ def _maybe_upcast(values):
return values
+
def _need_upcast(values):
if issubclass(values.dtype.type, (np.integer, np.bool_)):
return True
return False
+
def _interp_wrapper(f, wrap_dtype, na_override=None):
def wrapper(arr, mask, limit=None):
view = arr.view(wrap_dtype)
@@ -401,8 +420,11 @@ def wrapper(arr, mask, limit=None):
_pad_1d_datetime = _interp_wrapper(_algos.pad_inplace_int64, np.int64)
_pad_2d_datetime = _interp_wrapper(_algos.pad_2d_inplace_int64, np.int64)
-_backfill_1d_datetime = _interp_wrapper(_algos.backfill_inplace_int64, np.int64)
-_backfill_2d_datetime = _interp_wrapper(_algos.backfill_2d_inplace_int64, np.int64)
+_backfill_1d_datetime = _interp_wrapper(_algos.backfill_inplace_int64,
+ np.int64)
+_backfill_2d_datetime = _interp_wrapper(_algos.backfill_2d_inplace_int64,
+ np.int64)
+
def pad_1d(values, limit=None, mask=None):
if is_float_dtype(values):
@@ -411,7 +433,7 @@ def pad_1d(values, limit=None, mask=None):
_method = _pad_1d_datetime
elif values.dtype == np.object_:
_method = _algos.pad_inplace_object
- else: # pragma: no cover
+ else: # pragma: no cover
raise ValueError('Invalid dtype for padding')
if mask is None:
@@ -419,6 +441,7 @@ def pad_1d(values, limit=None, mask=None):
mask = mask.view(np.uint8)
_method(values, mask, limit=limit)
+
def backfill_1d(values, limit=None, mask=None):
if is_float_dtype(values):
_method = _algos.backfill_inplace_float64
@@ -426,7 +449,7 @@ def backfill_1d(values, limit=None, mask=None):
_method = _backfill_1d_datetime
elif values.dtype == np.object_:
_method = _algos.backfill_inplace_object
- else: # pragma: no cover
+ else: # pragma: no cover
raise ValueError('Invalid dtype for padding')
if mask is None:
@@ -435,6 +458,7 @@ def backfill_1d(values, limit=None, mask=None):
_method(values, mask, limit=limit)
+
def pad_2d(values, limit=None, mask=None):
if is_float_dtype(values):
_method = _algos.pad_2d_inplace_float64
@@ -442,7 +466,7 @@ def pad_2d(values, limit=None, mask=None):
_method = _pad_2d_datetime
elif values.dtype == np.object_:
_method = _algos.pad_2d_inplace_object
- else: # pragma: no cover
+ else: # pragma: no cover
raise ValueError('Invalid dtype for padding')
if mask is None:
@@ -455,6 +479,7 @@ def pad_2d(values, limit=None, mask=None):
# for test coverage
pass
+
def backfill_2d(values, limit=None, mask=None):
if is_float_dtype(values):
_method = _algos.backfill_2d_inplace_float64
@@ -462,7 +487,7 @@ def backfill_2d(values, limit=None, mask=None):
_method = _backfill_2d_datetime
elif values.dtype == np.object_:
_method = _algos.backfill_2d_inplace_object
- else: # pragma: no cover
+ else: # pragma: no cover
raise ValueError('Invalid dtype for padding')
if mask is None:
@@ -475,6 +500,7 @@ def backfill_2d(values, limit=None, mask=None):
# for test coverage
pass
+
def _consensus_name_attr(objs):
name = objs[0].name
for obj in objs[1:]:
@@ -485,6 +511,7 @@ def _consensus_name_attr(objs):
#----------------------------------------------------------------------
# Lots of little utilities
+
def _infer_dtype(value):
if isinstance(value, (float, np.floating)):
return np.float_
@@ -495,15 +522,17 @@ def _infer_dtype(value):
else:
return np.object_
+
def _possibly_cast_item(obj, item, dtype):
chunk = obj[item]
if chunk.values.dtype != dtype:
if dtype in (np.object_, np.bool_):
obj[item] = chunk.astype(np.object_)
- elif not issubclass(dtype, (np.integer, np.bool_)): # pragma: no cover
+ elif not issubclass(dtype, (np.integer, np.bool_)): # pragma: no cover
raise ValueError("Unexpected dtype encountered: %s" % dtype)
+
def _is_bool_indexer(key):
if isinstance(key, np.ndarray) and key.dtype == np.object_:
key = np.asarray(key)
@@ -519,21 +548,24 @@ def _is_bool_indexer(key):
elif isinstance(key, list):
try:
return np.asarray(key).dtype == np.bool_
- except TypeError: # pragma: no cover
+ except TypeError: # pragma: no cover
return False
return False
+
def _default_index(n):
from pandas.core.index import Index
return Index(np.arange(n))
+
def ensure_float(arr):
if issubclass(arr.dtype.type, np.integer):
arr = arr.astype(float)
return arr
+
def _mut_exclusive(arg1, arg2):
if arg1 is not None and arg2 is not None:
raise Exception('mutually exclusive arguments')
@@ -542,18 +574,21 @@ def _mut_exclusive(arg1, arg2):
else:
return arg2
+
def _any_none(*args):
for arg in args:
if arg is None:
return True
return False
+
def _all_not_none(*args):
for arg in args:
if arg is None:
return False
return True
+
def _try_sort(iterable):
listed = list(iterable)
try:
@@ -561,17 +596,20 @@ def _try_sort(iterable):
except Exception:
return listed
+
def _count_not_none(*args):
return sum(x is not None for x in args)
#------------------------------------------------------------------------------
# miscellaneous python tools
+
def rands(n):
"""Generates a random alphanumeric string of length *n*"""
from random import Random
import string
- return ''.join(Random().sample(string.ascii_letters+string.digits, n))
+ return ''.join(Random().sample(string.ascii_letters + string.digits, n))
+
def adjoin(space, *lists):
"""
@@ -595,6 +633,7 @@ def adjoin(space, *lists):
out_lines.append(_join_unicode(lines))
return _join_unicode(out_lines, sep='\n')
+
def _join_unicode(lines, sep=''):
try:
return sep.join(lines)
@@ -603,6 +642,7 @@ def _join_unicode(lines, sep=''):
return sep.join([x.decode('utf-8') if isinstance(x, str) else x
for x in lines])
+
def iterpairs(seq):
"""
Parameters
@@ -625,10 +665,12 @@ def iterpairs(seq):
return itertools.izip(seq_it, seq_it_next)
+
def indent(string, spaces=4):
dent = ' ' * spaces
return '\n'.join([dent + x for x in string.split('\n')])
+
def banner(message):
"""
Return 80-char width message declaration with = bars on top and bottom.
@@ -636,6 +678,7 @@ def banner(message):
bar = '=' * 80
return '%s\n%s\n%s' % (bar, message, bar)
+
class groupby(dict):
"""
A simple groupby different from the one in itertools.
@@ -643,7 +686,7 @@ class groupby(dict):
Does not require the sequence elements to be sorted by keys,
however it is slower.
"""
- def __init__(self, seq, key=lambda x:x):
+ def __init__(self, seq, key=lambda x: x):
for value in seq:
k = key(value)
self.setdefault(k, []).append(value)
@@ -654,6 +697,7 @@ def __init__(self, seq, key=lambda x:x):
def __iter__(self):
return iter(dict.items(self))
+
def map_indices_py(arr):
"""
Returns a dictionary with (element, index) pairs for each element in the
@@ -661,6 +705,7 @@ def map_indices_py(arr):
"""
return dict([(x, i) for i, x in enumerate(arr)])
+
def union(*seqs):
result = set([])
for seq in seqs:
@@ -669,9 +714,11 @@ def union(*seqs):
result |= seq
return type(seqs[0])(list(result))
+
def difference(a, b):
return type(a)(list(set(a) - set(b)))
+
def intersection(*seqs):
result = set(seqs[0])
for seq in seqs:
@@ -680,6 +727,7 @@ def intersection(*seqs):
result &= seq
return type(seqs[0])(list(result))
+
def _asarray_tuplesafe(values, dtype=None):
from pandas.core.index import Index
@@ -707,6 +755,7 @@ def _asarray_tuplesafe(values, dtype=None):
return result
+
def _index_labels_to_array(labels):
if isinstance(labels, (basestring, tuple)):
labels = [labels]
@@ -714,31 +763,37 @@ def _index_labels_to_array(labels):
if not isinstance(labels, (list, np.ndarray)):
try:
labels = list(labels)
- except TypeError: # non-iterable
+ except TypeError: # non-iterable
labels = [labels]
labels = _asarray_tuplesafe(labels)
return labels
+
def _maybe_make_list(obj):
if obj is not None and not isinstance(obj, (tuple, list)):
return [obj]
return obj
+
def is_integer(obj):
return isinstance(obj, (int, long, np.integer))
+
def is_float(obj):
return isinstance(obj, (float, np.floating))
+
def is_iterator(obj):
# python 3 generators have __next__ instead of next
return hasattr(obj, 'next') or hasattr(obj, '__next__')
+
def is_number(obj):
return isinstance(obj, (np.number, int, long, float))
+
def is_integer_dtype(arr_or_dtype):
if isinstance(arr_or_dtype, np.dtype):
tipo = arr_or_dtype.type
@@ -747,6 +802,7 @@ def is_integer_dtype(arr_or_dtype):
return (issubclass(tipo, np.integer) and not
issubclass(tipo, np.datetime64))
+
def is_datetime64_dtype(arr_or_dtype):
if isinstance(arr_or_dtype, np.dtype):
tipo = arr_or_dtype.type
@@ -754,6 +810,7 @@ def is_datetime64_dtype(arr_or_dtype):
tipo = arr_or_dtype.dtype.type
return issubclass(tipo, np.datetime64)
+
def is_float_dtype(arr_or_dtype):
if isinstance(arr_or_dtype, np.dtype):
tipo = arr_or_dtype.type
@@ -761,9 +818,11 @@ def is_float_dtype(arr_or_dtype):
tipo = arr_or_dtype.dtype.type
return issubclass(tipo, np.floating)
+
def is_list_like(arg):
return hasattr(arg, '__iter__') and not isinstance(arg, basestring)
+
def _is_sequence(x):
try:
iter(x)
@@ -797,6 +856,7 @@ def _astype_nansafe(arr, dtype):
return arr.astype(dtype)
+
def _clean_fill_method(method):
method = method.lower()
if method == 'ffill':
@@ -804,11 +864,12 @@ def _clean_fill_method(method):
if method == 'bfill':
method = 'backfill'
if method not in ['pad', 'backfill']:
- msg = ('Invalid fill method. Expecting pad (ffill) or backfill (bfill).'
- ' Got %s' % method)
+ msg = ('Invalid fill method. Expecting pad (ffill) or backfill '
+ '(bfill). Got %s' % method)
raise ValueError(msg)
return method
+
def _all_none(*args):
for arg in args:
if arg is not None:
@@ -853,6 +914,7 @@ def load(path):
finally:
f.close()
+
class UTF8Recoder:
"""
Iterator that reads an encoded stream and reencodes the input to UTF-8
@@ -866,6 +928,7 @@ def __iter__(self):
def next(self):
return self.reader.next().encode("utf-8")
+
def _get_handle(path, mode, encoding=None):
if py3compat.PY3: # pragma: no cover
if encoding:
@@ -916,11 +979,11 @@ def __init__(self, f, dialect=csv.excel, encoding="utf-8", **kwds):
self.writer = csv.writer(self.queue, dialect=dialect, **kwds)
self.stream = f
self.encoder = codecs.getincrementalencoder(encoding)()
- self.quoting=kwds.get("quoting",None)
+ self.quoting = kwds.get("quoting", None)
def writerow(self, row):
def _check_as_is(x):
- return (self.quoting == csv.QUOTE_NONNUMERIC and \
+ return (self.quoting == csv.QUOTE_NONNUMERIC and
is_number(x)) or isinstance(x, str)
row = [x if _check_as_is(x)
@@ -940,6 +1003,7 @@ def _check_as_is(x):
_NS_DTYPE = np.dtype('M8[ns]')
+
def _concat_compat(to_concat, axis=0):
# filter empty arrays
to_concat = [x for x in to_concat if x.shape[axis] > 0]
@@ -955,8 +1019,8 @@ def _concat_compat(to_concat, axis=0):
# Unicode consolidation
# ---------------------
#
-# pprinting utility functions for generating Unicode text or bytes(3.x)/str(2.x)
-# representations of objects.
+# pprinting utility functions for generating Unicode text or
+# bytes(3.x)/str(2.x) representations of objects.
# Try to use these as much as possible rather then rolling your own.
#
# When to use
@@ -973,21 +1037,24 @@ def _concat_compat(to_concat, axis=0):
# console_encode() should (hopefully) choose the right encoding for you
# based on the encoding set in fmt.print_config.encoding.
#
-# 3) if you need to write something out to file, use pprint_thing_encoded(encoding).
+# 3) if you need to write something out to file, use
+# pprint_thing_encoded(encoding).
#
-# If no encoding is specified, it defaults to utf-8. SInce encoding pure ascii with
-# utf-8 is a no-op you can safely use the default utf-8 if you're working with
-# straight ascii.
+# If no encoding is specified, it defaults to utf-8. Since encoding pure
+# ascii with utf-8 is a no-op you can safely use the default utf-8 if you're
+# working with straight ascii.
-def _pprint_seq(seq,_nest_lvl=0):
+
+def _pprint_seq(seq, _nest_lvl=0):
"""
internal. pprinter for iterables. you should probably use pprint_thing()
rather then calling this directly.
"""
- fmt=u"[%s]" if hasattr(seq,'__setitem__') else u"(%s)"
- return fmt % ", ".join(pprint_thing(e,_nest_lvl+1) for e in seq)
+ fmt = u"[%s]" if hasattr(seq, '__setitem__') else u"(%s)"
+ return fmt % ", ".join(pprint_thing(e, _nest_lvl + 1) for e in seq)
+
-def pprint_thing(thing,_nest_lvl=0):
+def pprint_thing(thing, _nest_lvl=0):
"""
This function is the sanctioned way of converting objects
to a unicode representation.
@@ -1011,7 +1078,7 @@ def pprint_thing(thing,_nest_lvl=0):
if thing is None:
result = ''
elif _is_sequence(thing) and _nest_lvl < print_config.pprint_nest_depth:
- result = _pprint_seq(thing,_nest_lvl)
+ result = _pprint_seq(thing, _nest_lvl)
else:
# when used internally in the package, everything
# passed in should be a unicode object or have a unicode
@@ -1021,17 +1088,19 @@ def pprint_thing(thing,_nest_lvl=0):
# so we resort to utf-8 with replacing errors
try:
- result = unicode(thing) # we should try this first
+ result = unicode(thing) # we should try this first
except UnicodeDecodeError:
# either utf-8 or we replace errors
- result = str(thing).decode('utf-8',"replace")
+ result = str(thing).decode('utf-8', "replace")
- return unicode(result) # always unicode
+ return unicode(result) # always unicode
-def pprint_thing_encoded(object,encoding='utf-8',errors='replace'):
- value=pprint_thing(object) # get unicode representation of object
+
+def pprint_thing_encoded(object, encoding='utf-8', errors='replace'):
+ value = pprint_thing(object) # get unicode representation of object
return value.encode(encoding, errors)
+
def console_encode(object):
from pandas.core.format import print_config
"""
@@ -1041,4 +1110,4 @@ def console_encode(object):
set in print_config.encoding. Use this everywhere
where you output to the console.
"""
- return pprint_thing_encoded(object,print_config.encoding)
+ return pprint_thing_encoded(object, print_config.encoding)
diff --git a/pandas/core/daterange.py b/pandas/core/daterange.py
index 4bf6ee5a1517e..bfed7fcc6a734 100644
--- a/pandas/core/daterange.py
+++ b/pandas/core/daterange.py
@@ -37,7 +37,7 @@ def __setstate__(self, aug_state):
# for backwards compatibility
if len(aug_state) > 2:
tzinfo = aug_state[2]
- else: # pragma: no cover
+ else: # pragma: no cover
tzinfo = None
self.offset = offset
diff --git a/pandas/core/format.py b/pandas/core/format.py
index 3bc3792200fc0..21528769648f5 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -56,6 +56,7 @@
-------
formatted : string (or unicode, depending on data and options)"""
+
class SeriesFormatter(object):
def __init__(self, series, buf=None, header=True, length=True,
@@ -152,6 +153,7 @@ def _strlen(x):
except UnicodeError:
return len(x)
+
class DataFrameFormatter(object):
"""
Render a DataFrame
@@ -243,9 +245,9 @@ def make_unicode(x):
return x.decode('utf-8')
strcols = map(lambda col: map(make_unicode, col), strcols)
else:
- # generally everything is plain strings, which has ascii
- # encoding. problem is when there is a char with value over 127
- # - everything then gets converted to unicode.
+ # Generally everything is plain strings, which has ascii
+ # encoding. Problem is when there is a char with value over
+ # 127. Everything then gets converted to unicode.
try:
map(lambda col: map(str, col), strcols)
except UnicodeError:
@@ -299,7 +301,7 @@ def to_latex(self, force_unicode=False, column_format=None):
nlevels = frame.index.nlevels
for i, row in enumerate(izip(*strcols)):
if i == nlevels:
- self.buf.write('\\hline\n') # End of header
+ self.buf.write('\\hline\n') # End of header
crow = [(x.replace('_', '\\_')
.replace('%', '\\%')
.replace('&', '\\&') if x else '{}') for x in row]
@@ -424,6 +426,7 @@ def __init__(self, formatter, classes=None):
_bold_row = self.fmt.kwds.get('bold_rows', False)
_temp = '<strong>%s</strong>'
+
def _maybe_bold_row(x):
if _bold_row:
return ([_temp % y for y in x] if isinstance(x, tuple)
@@ -432,7 +435,6 @@ def _maybe_bold_row(x):
return x
self._maybe_bold_row = _maybe_bold_row
-
def write(self, s, indent=0):
self.elements.append(' ' * indent + _str(s))
@@ -474,7 +476,7 @@ def write_result(self, buf):
indent = 0
frame = self.frame
- _classes = ['dataframe'] # Default class.
+ _classes = ['dataframe'] # Default class.
if self.classes is not None:
if isinstance(self.classes, str):
self.classes = self.classes.split()
@@ -485,12 +487,12 @@ def write_result(self, buf):
indent)
if len(frame.columns) == 0 or len(frame.index) == 0:
- self.write('<tbody>', indent + self.indent_delta)
+ self.write('<tbody>', indent + self.indent_delta)
self.write_tr([repr(frame.index),
'Empty %s' % type(frame).__name__],
indent + (2 * self.indent_delta),
self.indent_delta)
- self.write('</tbody>', indent + self.indent_delta)
+ self.write('</tbody>', indent + self.indent_delta)
else:
indent += self.indent_delta
indent = self._write_header(indent)
@@ -535,10 +537,10 @@ def _column_header():
levels = self.columns.format(sparsify=True, adjoin=False,
names=False)
- col_values = self.columns.values
level_lengths = _get_level_lengths(levels)
- for lnum, (records, values) in enumerate(zip(level_lengths, levels)):
+ for lnum, (records, values) in enumerate(
+ zip(level_lengths, levels)):
name = self.columns.names[lnum]
row = ['' if name is None else str(name)]
@@ -659,6 +661,7 @@ def _get_level_lengths(levels):
def _make_grouper():
record = {'count': 0}
+
def grouper(x):
if x != '':
record['count'] += 1
@@ -771,6 +774,7 @@ def _format(x):
return fmt_values
+
class FloatArrayFormatter(GenericArrayFormatter):
"""
@@ -797,7 +801,7 @@ def get_result(self):
if len(fmt_values) > 0:
maxlen = max(len(x) for x in fmt_values)
else:
- maxlen =0
+ maxlen = 0
too_long = maxlen > self.digits + 5
@@ -805,7 +809,7 @@ def get_result(self):
# this is pretty arbitrary for now
has_large_values = (abs_vals > 1e8).any()
- has_small_values = ((abs_vals < 10**(-self.digits)) &
+ has_small_values = ((abs_vals < 10 ** (-self.digits)) &
(abs_vals > 0)).any()
if too_long and has_large_values:
@@ -842,6 +846,7 @@ def get_result(self):
fmt_values = [formatter(x) for x in self.values]
return _make_fixed_width(fmt_values, self.justify)
+
def _format_datetime64(x, tz=None):
if isnull(x):
return 'NaT'
@@ -882,6 +887,7 @@ def just(x):
return [just(x) for x in strings]
+
def _trim_zeros(str_floats, na_rep='NaN'):
"""
Trims zeros and decimal points
@@ -912,6 +918,7 @@ def single_column_table(column, align=None, style=None):
table += '</tbody></table>'
return table
+
def single_row_table(row): # pragma: no cover
table = '<table><tbody><tr>'
for i in row:
@@ -919,6 +926,7 @@ def single_row_table(row): # pragma: no cover
table += '</tr></tbody></table>'
return table
+
def _has_names(index):
if isinstance(index, MultiIndex):
return any([x is not None for x in index.names])
@@ -926,10 +934,10 @@ def _has_names(index):
return index.name is not None
-
-#-------------------------------------------------------------------------------
+#------------------------------------------------------------------------------
# Global formatting options
+
def set_printoptions(precision=None, column_space=None, max_rows=None,
max_columns=None, colheader_justify=None,
max_colwidth=None, notebook_repr_html=None,
@@ -985,9 +993,11 @@ def set_printoptions(precision=None, column_space=None, max_rows=None,
if encoding is not None:
print_config.encoding = encoding
+
def reset_printoptions():
print_config.reset()
+
class EngFormatter(object):
"""
Formats float values according to engineering format.
@@ -1052,7 +1062,7 @@ def __call__(self, num):
dnum = -dnum
if dnum != 0:
- pow10 = decimal.Decimal(int(math.floor(dnum.log10()/3)*3))
+ pow10 = decimal.Decimal(int(math.floor(dnum.log10() / 3) * 3))
else:
pow10 = decimal.Decimal(0)
@@ -1068,16 +1078,17 @@ def __call__(self, num):
else:
prefix = 'E+%02d' % int_pow10
- mant = sign*dnum/(10**pow10)
+ mant = sign * dnum / (10 ** pow10)
if self.accuracy is None: # pragma: no cover
format_str = u"% g%s"
else:
- format_str = (u"%% .%if%%s" % self.accuracy )
+ format_str = (u"%% .%if%%s" % self.accuracy)
formatted = format_str % (mant, prefix)
- return formatted #.strip()
+ return formatted #.strip()
+
def set_eng_float_format(precision=None, accuracy=3, use_eng_prefix=False):
"""
@@ -1087,10 +1098,10 @@ def set_eng_float_format(precision=None, accuracy=3, use_eng_prefix=False):
See also EngFormatter.
"""
- if precision is not None: # pragma: no cover
+ if precision is not None: # pragma: no cover
import warnings
warnings.warn("'precision' parameter in set_eng_float_format is "
- "being renamed to 'accuracy'" , FutureWarning)
+ "being renamed to 'accuracy'", FutureWarning)
accuracy = precision
print_config.float_format = EngFormatter(accuracy, use_eng_prefix)
@@ -1126,17 +1137,17 @@ def detect_encoding(self):
encoding = None
try:
- encoding=sys.stdin.encoding
+ encoding = sys.stdin.encoding
except AttributeError:
pass
- if not encoding or encoding =='ascii': # try again for something better
+ if not encoding or encoding == 'ascii': # try for something better
try:
encoding = locale.getpreferredencoding()
except Exception:
pass
- if not encoding: # when all else fails. this will usually be "ascii"
+ if not encoding: # when all else fails. this will usually be "ascii"
encoding = sys.getdefaultencoding()
return encoding
@@ -1162,7 +1173,7 @@ def _put_lines(buf, lines):
599502.4276, 620921.8593, 620898.5294, 552427.1093,
555221.2193, 519639.7059, 388175.7 , 379199.5854,
614898.25 , 504833.3333, 560600. , 941214.2857,
- 1134250. , 1219550. , 855736.85 , 1042615.4286,
+ 1134250. , 1219550. , 855736.85 , 1042615.4286,
722621.3043, 698167.1818, 803750. ])
fmt = FloatArrayFormatter(arr, digits=7)
print fmt.get_result()
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c18010ab9578e..06a290b6edfaf 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -24,8 +24,8 @@
import numpy as np
import numpy.ma as ma
-from pandas.core.common import (isnull, notnull, PandasError, _try_sort,\
- _default_index,_is_sequence)
+from pandas.core.common import (isnull, notnull, PandasError, _try_sort,
+ _default_index, _is_sequence)
from pandas.core.generic import NDFrame
from pandas.core.index import Index, MultiIndex, _ensure_index
from pandas.core.indexing import _NDFrameIndexer, _maybe_droplevels
@@ -170,11 +170,14 @@
# Custom error class for update
-class DataConflictError(Exception): pass
+
+class DataConflictError(Exception):
+ pass
#----------------------------------------------------------------------
# Factory helper methods
+
def _arith_method(op, name, default_axis='columns'):
def na_op(x, y):
try:
@@ -228,6 +231,7 @@ def f(self, other, axis=default_axis, level=None, fill_value=None):
return f
+
def _flex_comp_method(op, name, default_axis='columns'):
def na_op(x, y):
@@ -733,7 +737,9 @@ def dot(self, other):
lvals = left.values
rvals = right.values
if isinstance(other, DataFrame):
- return DataFrame(np.dot(lvals, rvals), index=self.index, columns=other.columns)
+ return DataFrame(np.dot(lvals, rvals),
+ index=self.index,
+ columns=other.columns)
elif isinstance(other, Series):
return Series(np.dot(lvals, rvals), index=left.index)
else:
@@ -798,8 +804,8 @@ def to_dict(self, outtype='dict'):
elif outtype.lower().startswith('l'):
return dict((k, v.tolist()) for k, v in self.iteritems())
elif outtype.lower().startswith('s'):
- return dict((k, v) for k,v in self.iteritems())
- else: # pragma: no cover
+ return dict((k, v) for k, v in self.iteritems())
+ else: # pragma: no cover
raise ValueError("outtype %s not understood" % outtype)
@classmethod
@@ -833,7 +839,8 @@ def from_records(cls, data, index=None, exclude=None, columns=None,
columns = list(columns)
if len(algos.unique(columns)) < len(columns):
- raise ValueError('Non-unique columns not yet supported in from_records')
+ raise ValueError('Non-unique columns not yet supported in '
+ 'from_records')
if names is not None: # pragma: no cover
columns = names
@@ -1166,7 +1173,6 @@ def to_csv(self, path_or_buf, sep=",", na_rep='', float_format=None,
f = com._get_handle(path_or_buf, mode, encoding=encoding)
close = True
-
if quoting is None:
quoting = csv.QUOTE_MINIMAL
@@ -2019,7 +2025,7 @@ def xs(self, key, axis=0, level=None, copy=True):
new_values = self._data.fast_2d_xs(loc, copy=copy)
return Series(new_values, index=self.columns,
name=self.index[loc])
- else: # isinstance(loc, slice) or loc.dtype == np.bool_:
+ else: # isinstance(loc, slice) or loc.dtype == np.bool_:
result = self[loc]
result.index = new_index
return result
@@ -2488,15 +2494,15 @@ def reset_index(self, level=None, drop=False, inplace=False, col_level=0,
labels are inserted into. By default it is inserted into the first
level.
col_fill : object, default ''
- If the columns have multiple levels, determines how the other levels
- are named. If None then the index name is repeated.
+ If the columns have multiple levels, determines how the other
+ levels are named. If None then the index name is repeated.
Returns
-------
resetted : DataFrame
"""
if inplace:
- new_obj = self
+ new_obj = self
else:
new_obj = self.copy()
@@ -2786,7 +2792,7 @@ def sort(self, columns=None, column=None, axis=0, ascending=True,
-------
sorted : DataFrame
"""
- if column is not None: # pragma: no cover
+ if column is not None: # pragma: no cover
import warnings
warnings.warn("column is deprecated, use columns", FutureWarning)
columns = column
@@ -3048,7 +3054,7 @@ def replace(self, to_replace, value=None, method='pad', axis=0,
return self
if isinstance(to_replace, dict):
- if isinstance(value, dict): # {'A' : np.nan} -> {'A' : 0}
+ if isinstance(value, dict): # {'A' : np.nan} -> {'A' : 0}
return self._replace_both_dict(to_replace, value, inplace)
elif not isinstance(value, (list, np.ndarray)):
@@ -3067,7 +3073,7 @@ def replace(self, to_replace, value=None, method='pad', axis=0,
new_data = self._data if inplace else self.copy()._data
new_data._replace_list(to_replace, value)
- else: # [np.nan, ''] -> 0
+ else: # [np.nan, ''] -> 0
new_data = self._data.replace(to_replace, value,
inplace=inplace)
@@ -3077,9 +3083,9 @@ def replace(self, to_replace, value=None, method='pad', axis=0,
else:
return self._constructor(new_data)
else:
- if isinstance(value, dict): # np.nan -> {'A' : 0, 'B' : -1}
+ if isinstance(value, dict): # np.nan -> {'A' : 0, 'B' : -1}
return self._replace_dest_dict(to_replace, value, inplace)
- elif not isinstance(value, (list, np.ndarray)): # np.nan -> 0
+ elif not isinstance(value, (list, np.ndarray)): # np.nan -> 0
new_data = self._data.replace(to_replace, value,
inplace=inplace)
if inplace:
@@ -3089,7 +3095,7 @@ def replace(self, to_replace, value=None, method='pad', axis=0,
return self._constructor(new_data)
raise ValueError('Invalid to_replace type: %s' %
- type(to_replace)) # pragma: no cover
+ type(to_replace)) # pragma: no cover
def _interpolate(self, to_replace, method, axis, inplace, limit):
if self._is_mixed_type and axis == 1:
@@ -3833,7 +3839,7 @@ def _apply_standard(self, func, axis, ignore_failures=False):
if hasattr(e, 'args'):
k = res_index[i]
e.args = e.args + ('occurred at index %s' % str(k),)
- except NameError: # pragma: no cover
+ except NameError: # pragma: no cover
# no k defined yet
pass
raise
@@ -4076,7 +4082,7 @@ def corr(self, method='pearson'):
correl = np.empty((K, K), dtype=float)
mask = np.isfinite(mat)
for i, ac in enumerate(mat):
- for j, bc in enumerate(mat):
+ for j, bc in enumerate(mat):
valid = mask[i] & mask[j]
if not valid.any():
c = np.nan
@@ -4183,7 +4189,7 @@ def describe(self, percentile_width=50):
for k, v in self.iteritems()),
columns=self.columns)
- lb = .5 * (1. - percentile_width/100.)
+ lb = .5 * (1. - percentile_width / 100.)
ub = 1. - lb
def pretty_name(x):
@@ -4448,7 +4454,6 @@ def skew(self, axis=0, skipna=True, level=None):
return self._reduce(nanops.nanskew, axis=axis, skipna=skipna,
numeric_only=None)
-
@Substitution(name='unbiased kurtosis', shortname='kurt',
na_action=_doc_exclude_na, extras='')
@Appender(_stat_doc)
@@ -4971,6 +4976,7 @@ def _to_sdict(data, columns, coerce_float=False):
data = map(tuple, data)
return _list_to_sdict(data, columns, coerce_float=coerce_float)
+
def _list_to_sdict(data, columns, coerce_float=False):
if len(data) > 0 and isinstance(data[0], tuple):
content = list(lib.to_object_array_tuples(data).T)
@@ -4984,6 +4990,7 @@ def _list_to_sdict(data, columns, coerce_float=False):
return _convert_object_array(content, columns,
coerce_float=coerce_float)
+
def _list_of_series_to_sdict(data, columns, coerce_float=False):
from pandas.core.index import _get_combined_index
@@ -5038,6 +5045,7 @@ def _convert_object_array(content, columns, coerce_float=False):
for c, vals in zip(columns, content))
return sdict, columns
+
def _get_names_from_index(data):
index = range(len(data))
has_some_name = any([s.name is not None for s in data])
@@ -5055,6 +5063,7 @@ def _get_names_from_index(data):
return index
+
def _homogenize(data, index, columns, dtype=None):
from pandas.core.series import _sanitize_array
@@ -5109,6 +5118,7 @@ def _homogenize(data, index, columns, dtype=None):
def _put_str(s, space):
return ('%s' % s)[:space].ljust(space)
+
def install_ipython_completers(): # pragma: no cover
"""Register the DataFrame type with IPython's tab completion machinery, so
that it knows about accessing column names as attributes."""
@@ -5136,6 +5146,7 @@ def complete_dataframe(obj, prev_completions):
DataFrame.plot = gfx.plot_frame
DataFrame.hist = gfx.hist_frame
+
def boxplot(self, column=None, by=None, ax=None, fontsize=None,
rot=0, grid=True, **kwds):
"""
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index f0c70522da8a0..97b10e532c9de 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -8,6 +8,7 @@
from pandas.tseries.offsets import DateOffset
import pandas.core.common as com
+
class PandasError(Exception):
pass
@@ -15,8 +16,8 @@ class PandasError(Exception):
class PandasObject(object):
_AXIS_NUMBERS = {
- 'index' : 0,
- 'columns' : 1
+ 'index': 0,
+ 'columns': 1
}
_AXIS_ALIASES = {}
@@ -274,7 +275,7 @@ def select(self, crit, axis=0):
else:
new_axis = axis
- return self.reindex(**{axis_name : new_axis})
+ return self.reindex(**{axis_name: new_axis})
def drop(self, labels, axis=0, level=None):
"""
@@ -300,7 +301,7 @@ def drop(self, labels, axis=0, level=None):
else:
new_axis = axis.drop(labels)
- return self.reindex(**{axis_name : new_axis})
+ return self.reindex(**{axis_name: new_axis})
def sort_index(self, axis=0, ascending=True):
"""
@@ -326,7 +327,7 @@ def sort_index(self, axis=0, ascending=True):
sort_index = sort_index[::-1]
new_axis = labels.take(sort_index)
- return self.reindex(**{axis_name : new_axis})
+ return self.reindex(**{axis_name: new_axis})
@property
def ix(self):
@@ -483,13 +484,13 @@ def _clear_item_cache(self):
self._item_cache.clear()
def _set_item(self, key, value):
- if hasattr(self,'columns') and isinstance(self.columns, MultiIndex):
+ if hasattr(self, 'columns') and isinstance(self.columns, MultiIndex):
# Pad the key with empty strings if lower levels of the key
# aren't specified:
if not isinstance(key, tuple):
key = (key,)
if len(key) != self.columns.nlevels:
- key += ('',)*(self.columns.nlevels - len(key))
+ key += ('',) * (self.columns.nlevels - len(key))
self._data.set(key, value)
try:
@@ -504,7 +505,7 @@ def __delitem__(self, key):
deleted = False
maybe_shortcut = False
- if hasattr(self,'columns') and isinstance(self.columns, MultiIndex):
+ if hasattr(self, 'columns') and isinstance(self.columns, MultiIndex):
try:
maybe_shortcut = key not in self.columns._engine
except TypeError:
@@ -513,10 +514,10 @@ def __delitem__(self, key):
if maybe_shortcut:
# Allow shorthand to delete all columns whose first len(key)
# elements match key:
- if not isinstance(key,tuple):
+ if not isinstance(key, tuple):
key = (key,)
for col in self.columns:
- if isinstance(col,tuple) and col[:len(key)] == key:
+ if isinstance(col, tuple) and col[:len(key)] == key:
del self[col]
deleted = True
if not deleted:
@@ -702,7 +703,7 @@ def cummax(self, axis=None, skipna=True):
if skipna:
np.putmask(result, mask, np.nan)
else:
- result = np.maximum.accumulate(y,axis)
+ result = np.maximum.accumulate(y, axis)
return self._wrap_array(result, self.axes, copy=False)
def cummin(self, axis=None, skipna=True):
@@ -738,7 +739,7 @@ def cummin(self, axis=None, skipna=True):
if skipna:
np.putmask(result, mask, np.nan)
else:
- result = np.minimum.accumulate(y,axis)
+ result = np.minimum.accumulate(y, axis)
return self._wrap_array(result, self.axes, copy=False)
def copy(self, deep=True):
@@ -934,6 +935,7 @@ def tz_localize(self, tz, axis=0, copy=True):
# Good for either Series or DataFrame
+
def truncate(self, before=None, after=None, copy=True):
"""Function truncate a sorted DataFrame / Series before and/or after
some particular dates.
@@ -965,4 +967,3 @@ def truncate(self, before=None, after=None, copy=True):
result = result.copy()
return result
-
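Several `generic.py` hunks above respace `**{axis_name : new_axis}` to `**{axis_name: new_axis}` inside `select`, `drop`, and `sort_index`. A sketch of the underlying idiom those call sites rely on — a dict built with a dynamic key, unpacked into keyword arguments (the function and names below are illustrative, not the real `reindex` signature):

```python
# A dict literal keyed by a runtime string lets one code path target either
# axis of a reindex-style call; ** unpacks it into keyword arguments.
def reindex(index=None, columns=None):
    return index, columns

axis_name = 'columns'               # chosen at runtime, as in PandasObject.select
result = reindex(**{axis_name: [1, 2, 3]})
assert result == (None, [1, 2, 3])  # only the 'columns' keyword was supplied
```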
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index f0f6f7b2a8c63..e158e164e4a3b 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -18,12 +18,15 @@
class GroupByError(Exception):
pass
+
class DataError(GroupByError):
pass
+
class SpecificationError(GroupByError):
pass
+
def _groupby_function(name, alias, npfunc, numeric_only=True):
def f(self):
try:
@@ -36,6 +39,7 @@ def f(self):
return f
+
def _first_compat(x, axis=0):
x = np.asarray(x)
x = x[com.notnull(x)]
@@ -43,6 +47,7 @@ def _first_compat(x, axis=0):
return np.nan
return x[0]
+
def _last_compat(x, axis=0):
x = np.asarray(x)
x = x[com.notnull(x)]
@@ -166,7 +171,7 @@ def indices(self):
@property
def name(self):
if self._selection is None:
- return None # 'result'
+ return None # 'result'
else:
return self._selection
@@ -205,6 +210,7 @@ def wrapper(*args, **kwargs):
def curried_with_axis(x):
return f(x, *args, **kwargs_with_axis)
+
def curried(x):
return f(x, *args, **kwargs)
@@ -458,6 +464,7 @@ def _concat_objects(self, keys, values, not_indexed_same=False):
return result
+
def _generate_groups(obj, group_index, ngroups, axis=0):
if isinstance(obj, NDFrame) and not isinstance(obj, DataFrame):
factory = obj._constructor
@@ -468,23 +475,26 @@ def _generate_groups(obj, group_index, ngroups, axis=0):
return generate_groups(obj, group_index, ngroups,
axis=axis, factory=factory)
+
@Appender(GroupBy.__doc__)
def groupby(obj, by, **kwds):
if isinstance(obj, Series):
klass = SeriesGroupBy
elif isinstance(obj, DataFrame):
klass = DataFrameGroupBy
- else: # pragma: no cover
+ else: # pragma: no cover
raise TypeError('invalid type: %s' % type(obj))
return klass(obj, by, **kwds)
+
def _get_axes(group):
if isinstance(group, Series):
return [group.index]
else:
return group.axes
+
def _is_indexed_like(obj, axes):
if isinstance(obj, Series):
if len(axes) > 1:
@@ -495,6 +505,7 @@ def _is_indexed_like(obj, axes):
return False
+
class Grouper(object):
"""
@@ -531,7 +542,7 @@ def get_iterator(self, data, axis=0):
groups = indices.keys()
try:
groups = sorted(groups)
- except Exception: # pragma: no cover
+ except Exception: # pragma: no cover
pass
for name in groups:
@@ -658,29 +669,29 @@ def get_group_levels(self):
# Aggregation functions
_cython_functions = {
- 'add' : lib.group_add,
- 'prod' : lib.group_prod,
- 'min' : lib.group_min,
- 'max' : lib.group_max,
- 'mean' : lib.group_mean,
- 'median' : lib.group_median,
- 'var' : lib.group_var,
- 'std' : lib.group_var,
+ 'add': lib.group_add,
+ 'prod': lib.group_prod,
+ 'min': lib.group_min,
+ 'max': lib.group_max,
+ 'mean': lib.group_mean,
+ 'median': lib.group_median,
+ 'var': lib.group_var,
+ 'std': lib.group_var,
'first': lambda a, b, c, d: lib.group_nth(a, b, c, d, 1),
'last': lib.group_last
}
_cython_object_functions = {
- 'first' : lambda a, b, c, d: lib.group_nth_object(a, b, c, d, 1),
- 'last' : lib.group_last_object
+ 'first': lambda a, b, c, d: lib.group_nth_object(a, b, c, d, 1),
+ 'last': lib.group_last_object
}
_cython_transforms = {
- 'std' : np.sqrt
+ 'std': np.sqrt
}
_cython_arity = {
- 'ohlc' : 4, # OHLC
+ 'ohlc': 4, # OHLC
}
_name_functions = {}
@@ -840,18 +851,17 @@ def generate_bins_generic(values, binner, closed):
if values[0] < binner[0]:
raise ValueError("Values falls before first bin")
- if values[lenidx-1] > binner[lenbin-1]:
+ if values[lenidx - 1] > binner[lenbin - 1]:
raise ValueError("Values falls after last bin")
- bins = np.empty(lenbin - 1, dtype=np.int64)
+ bins = np.empty(lenbin - 1, dtype=np.int64)
- j = 0 # index into values
- bc = 0 # bin count
+ j = 0 # index into values
+ bc = 0 # bin count
- # linear scan, presume nothing about values/binner except that it
- # fits ok
- for i in range(0, lenbin-1):
- r_bin = binner[i+1]
+ # linear scan, presume nothing about values/binner except that it fits ok
+ for i in range(0, lenbin - 1):
+ r_bin = binner[i + 1]
# count values in current bin, advance to next bin
while j < lenidx and (values[j] < r_bin or
@@ -921,25 +931,25 @@ def names(self):
# cython aggregation
_cython_functions = {
- 'add' : lib.group_add_bin,
- 'prod' : lib.group_prod_bin,
- 'mean' : lib.group_mean_bin,
- 'min' : lib.group_min_bin,
- 'max' : lib.group_max_bin,
- 'var' : lib.group_var_bin,
- 'std' : lib.group_var_bin,
- 'ohlc' : lib.group_ohlc,
+ 'add': lib.group_add_bin,
+ 'prod': lib.group_prod_bin,
+ 'mean': lib.group_mean_bin,
+ 'min': lib.group_min_bin,
+ 'max': lib.group_max_bin,
+ 'var': lib.group_var_bin,
+ 'std': lib.group_var_bin,
+ 'ohlc': lib.group_ohlc,
'first': lambda a, b, c, d: lib.group_nth_bin(a, b, c, d, 1),
'last': lib.group_last_bin
}
_cython_object_functions = {
- 'first' : lambda a, b, c, d: lib.group_nth_bin_object(a, b, c, d, 1),
- 'last' : lib.group_last_bin_object
+ 'first': lambda a, b, c, d: lib.group_nth_bin_object(a, b, c, d, 1),
+ 'last': lib.group_last_bin_object
}
_name_functions = {
- 'ohlc' : lambda *args: ['open', 'high', 'low', 'close']
+ 'ohlc': lambda *args: ['open', 'high', 'low', 'close']
}
_filter_empty_groups = True
@@ -1105,6 +1115,7 @@ def _make_labels(self):
self._counts = counts
_groups = None
+
@property
def groups(self):
if self._groups is None:
@@ -1184,9 +1195,11 @@ def _get_grouper(obj, key=None, axis=0, level=None, sort=True):
return grouper, exclusions
+
def _is_label_like(val):
return isinstance(val, basestring) or np.isscalar(val)
+
def _convert_grouper(axis, grouper):
if isinstance(grouper, dict):
return grouper.get
@@ -1201,6 +1214,7 @@ def _convert_grouper(axis, grouper):
else:
return grouper
+
class SeriesGroupBy(GroupBy):
def aggregate(self, func_or_funcs, *args, **kwargs):
@@ -1257,7 +1271,7 @@ def aggregate(self, func_or_funcs, *args, **kwargs):
if isinstance(func_or_funcs, basestring):
return getattr(self, func_or_funcs)(*args, **kwargs)
- if hasattr(func_or_funcs,'__iter__'):
+ if hasattr(func_or_funcs, '__iter__'):
ret = self._aggregate_multiple_funcs(func_or_funcs)
else:
cyfunc = _intercept_cython(func_or_funcs)
@@ -1393,6 +1407,7 @@ def transform(self, func, *args, **kwargs):
return result
+
class NDFrameGroupBy(GroupBy):
def _iterate_slices(self):
@@ -1686,7 +1701,7 @@ def _wrap_applied_output(self, keys, values, not_indexed_same=False):
if (isinstance(values[0], Series) and
not _all_indexes_same([x.index for x in values])):
return self._concat_objects(keys, values,
- not_indexed_same=not_indexed_same)
+ not_indexed_same=not_indexed_same)
if self.axis == 0:
stacked_values = np.vstack([np.asarray(x)
@@ -1743,7 +1758,7 @@ def transform(self, func, *args, **kwargs):
res = group.apply(wrapper, axis=self.axis)
except TypeError:
return self._transform_item_by_item(obj, wrapper)
- except Exception: # pragma: no cover
+ except Exception: # pragma: no cover
res = wrapper(group)
# broadcasting
@@ -1892,6 +1907,7 @@ def _wrap_agged_blocks(self, blocks):
from pandas.tools.plotting import boxplot_frame_groupby
DataFrameGroupBy.boxplot = boxplot_frame_groupby
+
class PanelGroupBy(NDFrameGroupBy):
def _iterate_slices(self):
@@ -2023,6 +2039,7 @@ def _get_slice(slob):
assert(start < end)
yield i, _get_slice(slice(start, end))
+
def get_group_index(label_list, shape):
"""
For the particular label_list, gets the offsets into the hypothetical list
@@ -2036,7 +2053,7 @@ def get_group_index(label_list, shape):
group_index = np.zeros(n, dtype=np.int64)
mask = np.zeros(n, dtype=bool)
for i in xrange(len(shape)):
- stride = np.prod([x for x in shape[i+1:]], dtype=np.int64)
+ stride = np.prod([x for x in shape[i + 1:]], dtype=np.int64)
group_index += com._ensure_int64(label_list[i]) * stride
mask |= label_list[i] < 0
@@ -2051,6 +2068,7 @@ def _int64_overflow_possible(shape):
return the_prod >= _INT64_MAX
+
def decons_group_index(comp_labels, shape):
# reconstruct labels
label_list = []
@@ -2099,6 +2117,7 @@ def _lexsort_indexer(keys):
shape.append(len(rizer.uniques))
return _indexer_from_factorized(labels, shape)
+
class _KeyMapper(object):
"""
Ease my suffering. Map compressed group id -> key tuple
@@ -2139,6 +2158,7 @@ def _get_indices_dict(label_list, keys):
#----------------------------------------------------------------------
# sorting levels...cleverly?
+
def _compress_group_index(group_index, sort=True):
"""
Group_index is offsets into cartesian product of all possible labels. This
@@ -2162,6 +2182,7 @@ def _compress_group_index(group_index, sort=True):
return comp_ids, obs_group_ids
+
def _reorder_by_uniques(uniques, labels):
# sorter is index where elements ought to go
sorter = uniques.argsort()
@@ -2196,15 +2217,19 @@ def _reorder_by_uniques(uniques, labels):
np.var: 'var'
}
+
def _intercept_function(func):
return _func_table.get(func, func)
+
def _intercept_cython(func):
return _cython_table.get(func)
+
def _groupby_indices(values):
return lib.groupby_indices(com._ensure_object(values))
+
def numpy_groupby(data, labels, axis=0):
s = np.argsort(labels)
keys, inv = np.unique(labels, return_inverse=True)
@@ -2222,6 +2247,7 @@ def numpy_groupby(data, labels, axis=0):
from pandas.util import py3compat
import sys
+
def install_ipython_completers(): # pragma: no cover
"""Register the DataFrame type with IPython's tab completion machinery, so
that it knows about accessing column names as attributes."""
@@ -2229,7 +2255,7 @@ def install_ipython_completers(): # pragma: no cover
@complete_object.when_type(DataFrameGroupBy)
def complete_dataframe(obj, prev_completions):
- return prev_completions + [c for c in obj.obj.columns \
+ return prev_completions + [c for c in obj.obj.columns
if isinstance(c, basestring) and py3compat.isidentifier(c)]
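The bulk of the `groupby.py` hunks above reformat dict literals such as `_cython_functions` from `'add' : ...` to `'add': ...` (no space before the colon, one after, per PEP 8). A minimal demonstration that the respacing cannot change behavior — the keys echo the diff, but the values are placeholders:

```python
# Both spellings are whitespace variants of the same dict literal and
# construct identical objects, so these hunks are purely cosmetic.
old_style = {'add' : 1, 'prod' : 2, 'mean' : 3}
new_style = {'add': 1, 'prod': 2, 'mean': 3}
assert old_style == new_style
```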
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 08d1c593d42ca..c94b3baee1f26 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -27,7 +27,7 @@ def wrapper(self, other):
result = func(other)
try:
return result.view(np.ndarray)
- except: # pragma: no cover
+ except: # pragma: no cover
return result
return wrapper
@@ -137,7 +137,7 @@ def __repr__(self):
prepr = com.pprint_thing(self)
else:
prepr = com.pprint_thing_encoded(self)
- return 'Index(%s, dtype=%s)' % (prepr,self.dtype)
+ return 'Index(%s, dtype=%s)' % (prepr, self.dtype)
def astype(self, dtype):
return Index(self.values.astype(dtype), name=self.name,
@@ -570,7 +570,7 @@ def union(self, other):
# contained in
try:
result = np.sort(self.values)
- except TypeError: # pragma: no cover
+ except TypeError: # pragma: no cover
result = self.values
# for subclasses
@@ -1027,7 +1027,7 @@ def _join_monotonic(self, other, how='left', return_indexers=False):
lidx = self._left_indexer_unique(ov, sv)
ridx = None
elif how == 'inner':
- join_index, lidx, ridx = self._inner_indexer(sv,ov)
+ join_index, lidx, ridx = self._inner_indexer(sv, ov)
join_index = self._wrap_joined_index(join_index, other)
elif how == 'outer':
join_index, lidx, ridx = self._outer_indexer(sv, ov)
@@ -1229,8 +1229,6 @@ def _wrap_joined_index(self, joined, other):
return Int64Index(joined, name=name)
-
-
class MultiIndex(Index):
"""
Implements multi-level, a.k.a. hierarchical, index object for pandas
@@ -1291,11 +1289,12 @@ def __new__(cls, levels=None, labels=None, sortorder=None, names=None):
def __array_finalize__(self, obj):
"""
- Update custom MultiIndex attributes when a new array is created by numpy,
- e.g. when calling ndarray.view()
+ Update custom MultiIndex attributes when a new array is created by
+ numpy, e.g. when calling ndarray.view()
"""
if not isinstance(obj, type(self)):
- # Only relevant if this array is being created from an Index instance.
+ # Only relevant if this array is being created from an Index
+ # instance.
return
self.levels = list(getattr(obj, 'levels', []))
@@ -1345,7 +1344,7 @@ def _from_elements(values, labels=None, levels=None, names=None,
index = values.view(MultiIndex)
index.levels = levels
index.labels = labels
- index.names = names
+ index.names = names
index.sortorder = sortorder
return index
@@ -1412,7 +1411,7 @@ def has_duplicates(self):
shape = [len(lev) for lev in self.levels]
group_index = np.zeros(len(self), dtype='i8')
for i in xrange(len(shape)):
- stride = np.prod([x for x in shape[i+1:]], dtype='i8')
+ stride = np.prod([x for x in shape[i + 1:]], dtype='i8')
group_index += self.labels[i] * stride
if len(np.unique(group_index)) < len(group_index):
@@ -1587,7 +1586,7 @@ def from_tuples(cls, tuples, sortorder=None, names=None):
if isinstance(tuples, np.ndarray):
if isinstance(tuples, Index):
- tuples = tuples.values
+ tuples = tuples.values
arrays = list(lib.tuples_to_object_array(tuples).T)
elif isinstance(tuples, list):
@@ -2430,10 +2429,12 @@ def _ensure_index(index_like):
return Index(index_like)
+
def _validate_join_method(method):
if method not in ['left', 'right', 'inner', 'outer']:
raise Exception('do not recognize join method %s' % method)
+
# TODO: handle index names!
def _get_combined_index(indexes, intersect=False):
indexes = _get_distinct_indexes(indexes)
@@ -2507,6 +2508,7 @@ def _sanitize_and_check(indexes):
else:
return indexes, 'array'
+
def _handle_legacy_indexes(indexes):
from pandas.core.daterange import DateRange
from pandas.tseries.index import DatetimeIndex
@@ -2526,6 +2528,7 @@ def _handle_legacy_indexes(indexes):
return converted
+
def _get_consensus_names(indexes):
consensus_name = indexes[0].names
for index in indexes[1:]:
@@ -2534,6 +2537,7 @@ def _get_consensus_names(indexes):
break
return consensus_name
+
def _maybe_box(idx):
from pandas.tseries.api import DatetimeIndex, PeriodIndex
klasses = DatetimeIndex, PeriodIndex
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 13fa0b2af1adc..531431c065082 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -10,6 +10,7 @@
# "null slice"
_NS = slice(None, None)
+
def _is_sequence(x):
try:
iter(x)
@@ -18,6 +19,7 @@ def _is_sequence(x):
except Exception:
return False
+
class IndexingError(Exception):
pass
@@ -587,6 +589,7 @@ def _get_slice_axis(self, slice_obj, axis=0):
# 32-bit floating point machine epsilon
_eps = np.finfo('f4').eps
+
def _is_index_slice(obj):
def _is_valid_index(x):
return (com.is_integer(x) or com.is_float(x)
@@ -599,6 +602,7 @@ def _crit(v):
return not both_none and (_crit(obj.start) and _crit(obj.stop))
+
def _is_int_slice(obj):
def _is_valid_index(x):
return com.is_integer(x)
@@ -610,6 +614,7 @@ def _crit(v):
return not both_none and (_crit(obj.start) and _crit(obj.stop))
+
def _is_float_slice(obj):
def _is_valid_index(x):
return com.is_float(x)
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index c0f2ba7654e80..9a1785b9518af 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -12,11 +12,13 @@
except ImportError: # pragma: no cover
_USE_BOTTLENECK = False
+
def _bottleneck_switch(bn_name, alt, zero_value=None, **kwargs):
try:
bn_func = getattr(bn, bn_name)
except (AttributeError, NameError): # pragma: no cover
bn_func = None
+
def f(values, axis=None, skipna=True, **kwds):
if len(kwargs) > 0:
for k, v in kwargs.iteritems():
@@ -46,6 +48,7 @@ def f(values, axis=None, skipna=True, **kwds):
return f
+
def _has_infs(result):
if isinstance(result, np.ndarray):
if result.dtype == 'f8':
@@ -57,6 +60,7 @@ def _has_infs(result):
else:
return np.isinf(result) or np.isneginf(result)
+
def nanany(values, axis=None, skipna=True):
mask = isnull(values)
@@ -65,6 +69,7 @@ def nanany(values, axis=None, skipna=True):
np.putmask(values, mask, False)
return values.any(axis)
+
def nanall(values, axis=None, skipna=True):
mask = isnull(values)
@@ -73,6 +78,7 @@ def nanall(values, axis=None, skipna=True):
np.putmask(values, mask, True)
return values.all(axis)
+
def _nansum(values, axis=None, skipna=True):
mask = isnull(values)
@@ -85,6 +91,7 @@ def _nansum(values, axis=None, skipna=True):
return the_sum
+
def _nanmean(values, axis=None, skipna=True):
mask = isnull(values)
@@ -104,6 +111,7 @@ def _nanmean(values, axis=None, skipna=True):
the_mean = the_sum / count if count > 0 else np.nan
return the_mean
+
def _nanmedian(values, axis=None, skipna=True):
def get_median(x):
mask = notnull(x)
@@ -119,6 +127,7 @@ def get_median(x):
else:
return get_median(values)
+
def _nanvar(values, axis=None, skipna=True, ddof=1):
mask = isnull(values)
@@ -135,6 +144,7 @@ def _nanvar(values, axis=None, skipna=True, ddof=1):
XX = _ensure_numeric((values ** 2).sum(axis))
return np.fabs((XX - X ** 2 / count) / (count - ddof))
+
def _nanmin(values, axis=None, skipna=True):
mask = isnull(values)
if skipna and not issubclass(values.dtype.type,
@@ -160,6 +170,7 @@ def _nanmin(values, axis=None, skipna=True):
return _maybe_null_out(result, axis, mask)
+
def _nanmax(values, axis=None, skipna=True):
mask = isnull(values)
if skipna and not issubclass(values.dtype.type,
@@ -186,6 +197,7 @@ def _nanmax(values, axis=None, skipna=True):
return _maybe_null_out(result, axis, mask)
+
def nanargmax(values, axis=None, skipna=True):
"""
Returns -1 in the NA case
@@ -198,6 +210,7 @@ def nanargmax(values, axis=None, skipna=True):
result = _maybe_arg_null_out(result, axis, mask, skipna)
return result
+
def nanargmin(values, axis=None, skipna=True):
"""
Returns -1 in the NA case
@@ -217,6 +230,7 @@ def nanargmin(values, axis=None, skipna=True):
nanmin = _bottleneck_switch('nanmin', _nanmin)
nanmax = _bottleneck_switch('nanmax', _nanmax)
+
def nanskew(values, axis=None, skipna=True):
if not isinstance(values.dtype.type, np.floating):
values = values.astype('f8')
@@ -249,6 +263,7 @@ def nanskew(values, axis=None, skipna=True):
return np.nan
return result
+
def nankurt(values, axis=None, skipna=True):
if not isinstance(values.dtype.type, np.floating):
values = values.astype('f8')
@@ -269,8 +284,8 @@ def nankurt(values, axis=None, skipna=True):
C = _zero_out_fperr(C)
D = _zero_out_fperr(D)
- result = (((count*count - 1.)*D / (B*B) - 3*((count-1.)**2)) /
- ((count - 2.)*(count-3.)))
+ result = (((count * count - 1.) * D / (B * B) - 3 * ((count - 1.) ** 2)) /
+ ((count - 2.) * (count - 3.)))
if isinstance(result, np.ndarray):
result = np.where(B == 0, 0, result)
result[count < 4] = np.nan
@@ -281,6 +296,7 @@ def nankurt(values, axis=None, skipna=True):
return np.nan
return result
+
def nanprod(values, axis=None, skipna=True):
mask = isnull(values)
if skipna and not issubclass(values.dtype.type, np.integer):
@@ -289,6 +305,7 @@ def nanprod(values, axis=None, skipna=True):
result = values.prod(axis)
return _maybe_null_out(result, axis, mask)
+
def _maybe_arg_null_out(result, axis, mask, skipna):
# helper function for nanargmin/nanargmax
if axis is None:
@@ -307,6 +324,7 @@ def _maybe_arg_null_out(result, axis, mask, skipna):
result[na_mask] = -1
return result
+
def _get_counts(mask, axis):
if axis is not None:
count = (mask.shape[axis] - mask.sum(axis)).astype(float)
@@ -315,6 +333,7 @@ def _get_counts(mask, axis):
return count
+
def _maybe_null_out(result, axis, mask):
if axis is not None:
null_mask = (mask.shape[axis] - mask.sum(axis)) == 0
@@ -328,12 +347,14 @@ def _maybe_null_out(result, axis, mask):
return result
+
def _zero_out_fperr(arg):
if isinstance(arg, np.ndarray):
return np.where(np.abs(arg) < 1e-14, 0, arg)
else:
return 0 if np.abs(arg) < 1e-14 else arg
+
def nancorr(a, b, method='pearson'):
"""
a, b: ndarrays
@@ -351,27 +372,31 @@ def nancorr(a, b, method='pearson'):
f = get_corr_func(method)
return f(a, b)
+
def get_corr_func(method):
if method in ['kendall', 'spearman']:
from scipy.stats import kendalltau, spearmanr
def _pearson(a, b):
return np.corrcoef(a, b)[0, 1]
+
def _kendall(a, b):
rs = kendalltau(a, b)
if isinstance(rs, tuple):
return rs[0]
return rs
+
def _spearman(a, b):
return spearmanr(a, b)[0]
_cor_methods = {
- 'pearson' : _pearson,
- 'kendall' : _kendall,
- 'spearman' : _spearman
+ 'pearson': _pearson,
+ 'kendall': _kendall,
+ 'spearman': _spearman
}
return _cor_methods[method]
+
def nancov(a, b):
assert(len(a) == len(b))
@@ -385,6 +410,7 @@ def nancov(a, b):
return np.cov(a, b)[0, 1]
+
def _ensure_numeric(x):
if isinstance(x, np.ndarray):
if x.dtype == np.object_:
@@ -401,6 +427,7 @@ def _ensure_numeric(x):
import operator
+
def make_nancomp(op):
def f(x, y):
xmask = isnull(x)
@@ -424,6 +451,7 @@ def f(x, y):
naneq = make_nancomp(operator.eq)
nanne = make_nancomp(operator.ne)
+
def unique1d(values):
"""
Hash table-based unique
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 211434ab07154..0efbb5284d584 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -29,7 +29,7 @@ def _ensure_like_indices(time, panels):
"""
n_time = len(time)
n_panel = len(panels)
- u_panels = np.unique(panels) # this sorts!
+ u_panels = np.unique(panels) # this sorts!
u_time = np.unique(time)
if len(u_time) == n_time:
time = np.tile(u_time, len(u_panels))
@@ -37,6 +37,7 @@ def _ensure_like_indices(time, panels):
panels = np.repeat(u_panels, len(u_time))
return time, panels
+
def panel_index(time, panels, names=['time', 'panel']):
"""
Returns a multi-index suitable for a panel-like DataFrame
@@ -84,9 +85,11 @@ def panel_index(time, panels, names=['time', 'panel']):
levels = [time_factor.levels, panel_factor.levels]
return MultiIndex(levels, labels, sortorder=None, names=names)
+
class PanelError(Exception):
pass
+
def _arith_method(func, name):
# work only for scalars
@@ -99,6 +102,7 @@ def f(self, other):
f.__name__ = name
return f
+
def _panel_arith_method(op, name):
@Substitution(op)
def f(self, other, axis='items'):
@@ -144,20 +148,20 @@ def f(self, other, axis='items'):
class Panel(NDFrame):
_AXIS_NUMBERS = {
- 'items' : 0,
- 'major_axis' : 1,
- 'minor_axis' : 2
+ 'items': 0,
+ 'major_axis': 1,
+ 'minor_axis': 2
}
_AXIS_ALIASES = {
- 'major' : 'major_axis',
- 'minor' : 'minor_axis'
+ 'major': 'major_axis',
+ 'minor': 'minor_axis'
}
_AXIS_NAMES = {
- 0 : 'items',
- 1 : 'major_axis',
- 2 : 'minor_axis'
+ 0: 'items',
+ 1: 'major_axis',
+ 2: 'minor_axis'
}
# major
@@ -223,7 +227,7 @@ def __init__(self, data=None, items=None, major_axis=None, minor_axis=None,
mgr = self._init_matrix(data, passed_axes, dtype=dtype, copy=copy)
copy = False
dtype = None
- else: # pragma: no cover
+ else: # pragma: no cover
raise PandasError('Panel constructor not properly called!')
NDFrame.__init__(self, mgr, axes=axes, copy=copy, dtype=dtype)
@@ -259,7 +263,7 @@ def _init_dict(self, data, axes, dtype=None):
minor = _extract_axis(data, axis=1)
axes = [items, major, minor]
- reshaped_data = data.copy() # shallow
+ reshaped_data = data.copy() # shallow
item_shape = len(major), len(minor)
for item in items:
@@ -364,7 +368,6 @@ def _init_matrix(self, data, axes, dtype=None, copy=False):
block = make_block(values, items, items)
return BlockManager([block], fixed_axes)
-
#----------------------------------------------------------------------
# Array interface
@@ -561,7 +564,8 @@ def set_value(self, item, major, minor, value):
return result.set_value(item, major, minor, value)
def _box_item_values(self, key, values):
- return DataFrame(values, index=self.major_axis, columns=self.minor_axis)
+ return DataFrame(values, index=self.major_axis,
+ columns=self.minor_axis)
def __getattr__(self, name):
"""After regular attribute access, try looking up the name of an item.
@@ -617,13 +621,13 @@ def __setstate__(self, state):
# old Panel pickle
if isinstance(state, BlockManager):
self._data = state
- elif len(state) == 4: # pragma: no cover
+ elif len(state) == 4: # pragma: no cover
self._unpickle_panel_compat(state)
- else: # pragma: no cover
+ else: # pragma: no cover
raise ValueError('unrecognized pickle')
self._item_cache = {}
- def _unpickle_panel_compat(self, state): # pragma: no cover
+ def _unpickle_panel_compat(self, state): # pragma: no cover
"Unpickle the panel"
_unpickle = com._unpickle_array
vals, items, major, minor = state
@@ -999,7 +1003,7 @@ def swapaxes(self, axis1='major', axis2='minor', copy=True):
if i == j:
raise ValueError('Cannot specify the same axis')
- mapping = {i : j, j : i}
+ mapping = {i: j, j: i}
new_axes = (self._get_axis(mapping.get(k, k))
for k in range(3))
@@ -1267,7 +1271,7 @@ def truncate(self, before=None, after=None, axis='major'):
beg_slice, end_slice = index.slice_locs(before, after)
new_index = index[beg_slice:end_slice]
- return self.reindex(**{axis : new_index})
+ return self.reindex(**{axis: new_index})
def join(self, other, how='left', lsuffix='', rsuffix=''):
"""
@@ -1303,8 +1307,8 @@ def join(self, other, how='left', lsuffix='', rsuffix=''):
return self._constructor(merged_data)
else:
if lsuffix or rsuffix:
- raise ValueError('Suffixes not supported when passing multiple '
- 'panels')
+ raise ValueError('Suffixes not supported when passing '
+ 'multiple panels')
if how == 'left':
how = 'outer'
@@ -1364,6 +1368,7 @@ def _get_join_index(self, other, how):
WidePanel = Panel
LongPanel = DataFrame
+
def _prep_ndarray(values, copy=True):
if not isinstance(values, np.ndarray):
values = np.asarray(values)
@@ -1376,6 +1381,7 @@ def _prep_ndarray(values, copy=True):
assert(values.ndim == 3)
return values
+
def _homogenize_dict(frames, intersect=True, dtype=None):
"""
Conform set of DataFrame-like objects to either an intersection
@@ -1446,6 +1452,7 @@ def _extract_axis(data, axis=0, intersect=False):
def _monotonic(arr):
return not (arr[1:] < arr[:-1]).any()
+
def install_ipython_completers(): # pragma: no cover
"""Register the Panel type with IPython's tab completion machinery, so
that it knows about accessing column names as attributes."""
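The `_monotonic` helper touched in the hunk above is a vectorized nondecreasing check. A pure-Python sketch of the same predicate (the name and list-based signature here are illustrative, not the pandas API):

```python
def is_monotonic(seq):
    # Pure-Python analogue of reshape's _monotonic above: True when no
    # element is smaller than its predecessor (nondecreasing order).
    return all(a <= b for a, b in zip(seq, seq[1:]))
```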
@@ -1463,4 +1470,3 @@ def complete_dataframe(obj, prev_completions):
install_ipython_completers()
except Exception:
pass
-
diff --git a/pandas/core/reshape.py b/pandas/core/reshape.py
index d8c9087a34437..044870fc4a6f2 100644
--- a/pandas/core/reshape.py
+++ b/pandas/core/reshape.py
@@ -74,7 +74,7 @@ def __init__(self, values, index, level=-1, value_columns=None):
v = self.level
lshape = self.index.levshape
- self.full_shape = np.prod(lshape[:v] + lshape[v+1:]), lshape[v]
+ self.full_shape = np.prod(lshape[:v] + lshape[v + 1:]), lshape[v]
self._make_sorted_values_labels()
self._make_selectors()
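The `full_shape` hunk above computes the 2-D result shape for unstacking one level of a MultiIndex. A minimal sketch of that arithmetic, using `math.prod` in place of `np.prod` (function name is illustrative):

```python
from math import prod

def unstack_shape(levshape, v):
    # Shape from the _Unstacker hunk above: rows = product of every level
    # size except level v; columns = the size of level v itself.
    return prod(levshape[:v] + levshape[v + 1:]), levshape[v]
```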
@@ -84,8 +84,8 @@ def _make_sorted_values_labels(self):
labs = self.index.labels
levs = self.index.levels
- to_sort = labs[:v] + labs[v+1:] + [labs[v]]
- sizes = [len(x) for x in levs[:v] + levs[v+1:] + [levs[v]]]
+ to_sort = labs[:v] + labs[v + 1:] + [labs[v]]
+ sizes = [len(x) for x in levs[:v] + levs[v + 1:] + [levs[v]]]
group_index = get_group_index(to_sort, sizes)
max_groups = np.prod(sizes)
@@ -93,7 +93,7 @@ def _make_sorted_values_labels(self):
comp_index, obs_ids = _compress_group_index(group_index)
ngroups = len(obs_ids)
else:
- comp_index, ngroups = group_index, max_groups
+ comp_index, ngroups = group_index, max_groups
indexer = lib.groupsort_indexer(comp_index, ngroups)[0]
indexer = _ensure_platform_int(indexer)
@@ -280,6 +280,7 @@ def _unstack_multiple(data, clocs):
return unstacked
+
def pivot(self, index=None, columns=None, values=None):
"""
See DataFrame.pivot
@@ -292,6 +293,7 @@ def pivot(self, index=None, columns=None, values=None):
index=[self[index], self[columns]])
return indexed.unstack(columns)
+
def pivot_simple(index, columns, values):
"""
Produce 'pivot' table based on 3 columns of this DataFrame.
@@ -324,6 +326,7 @@ def pivot_simple(index, columns, values):
series = series.sortlevel(0)
return series.unstack()
+
def _slow_pivot(index, columns, values):
"""
Produce 'pivot' table based on 3 columns of this DataFrame.
@@ -349,6 +352,7 @@ def _slow_pivot(index, columns, values):
return DataFrame(tree)
+
def unstack(obj, level):
if isinstance(level, (tuple, list)):
return _unstack_multiple(obj, level)
@@ -362,11 +366,12 @@ def unstack(obj, level):
unstacker = _Unstacker(obj.values, obj.index, level=level)
return unstacker.get_result()
+
def _unstack_frame(obj, level):
from pandas.core.internals import BlockManager, make_block
if obj._is_mixed_type:
- unstacker = _Unstacker(np.empty(obj.shape, dtype=bool), # dummy
+ unstacker = _Unstacker(np.empty(obj.shape, dtype=bool), # dummy
obj.index, level=level,
value_columns=obj.columns)
new_columns = unstacker.get_new_columns()
@@ -395,6 +400,7 @@ def _unstack_frame(obj, level):
value_columns=obj.columns)
return unstacker.get_result()
+
def stack(frame, level=-1, dropna=True):
"""
Convert DataFrame to Series with multi-level Index. Columns become the
@@ -437,6 +443,7 @@ def stack(frame, level=-1, dropna=True):
new_index = new_index[mask]
return Series(new_values, index=new_index)
+
def _stack_multi_columns(frame, level=-1, dropna=True):
this = frame.copy()
@@ -491,7 +498,7 @@ def _stack_multi_columns(frame, level=-1, dropna=True):
else:
new_levels = [this.index]
new_labels = [np.arange(N).repeat(levsize)]
- new_names = [this.index.name] # something better?
+ new_names = [this.index.name] # something better?
new_levels.append(frame.columns.levels[level])
new_labels.append(np.tile(np.arange(levsize), N))
@@ -704,8 +711,8 @@ def make_axis_dummies(frame, axis='minor', transform=None):
Column names taken from chosen axis
"""
numbers = {
- 'major' : 0,
- 'minor' : 1
+ 'major': 0,
+ 'minor': 1
}
num = numbers.get(axis, axis)
@@ -722,6 +729,7 @@ def make_axis_dummies(frame, axis='minor', transform=None):
return DataFrame(values, columns=items, index=frame.index)
+
def block2d_to_block3d(values, items, shape, major_labels, minor_labels,
ref_items=None):
"""
diff --git a/pandas/core/series.py b/pandas/core/series.py
index eca177c4c543b..7a7fc7159ecb4 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -44,6 +44,7 @@
#----------------------------------------------------------------------
# Wrapper function for Series arithmetic methods
+
def _arith_method(op, name):
"""
Wrapper function for Series arithmetic operations, to avoid
@@ -124,7 +125,7 @@ def wrapper(self, other):
name = _maybe_match_name(self, other)
return Series(na_op(self.values, other.values),
index=self.index, name=name)
- elif isinstance(other, DataFrame): # pragma: no cover
+ elif isinstance(other, DataFrame): # pragma: no cover
return NotImplemented
elif isinstance(other, np.ndarray):
return Series(na_op(self.values, np.asarray(other)),
@@ -160,8 +161,8 @@ def na_op(x, y):
if isinstance(y, np.ndarray):
if (x.dtype == np.bool_ and
- y.dtype == np.bool_): # pragma: no cover
- result = op(x, y) # when would this be hit?
+ y.dtype == np.bool_): # pragma: no cover
+ result = op(x, y) # when would this be hit?
else:
x = com._ensure_object(x)
y = com._ensure_object(y)
@@ -187,7 +188,6 @@ def wrapper(self, other):
return wrapper
-
def _radd_compat(left, right):
radd = lambda x, y: y + x
# GH #353, NumPy 1.5.1 workaround
@@ -196,7 +196,7 @@ def _radd_compat(left, right):
except TypeError:
cond = (_np_version_under1p6 and
left.dtype == np.object_)
- if cond: # pragma: no cover
+ if cond: # pragma: no cover
output = np.empty_like(left)
output.flat[:] = [radd(x, right) for x in left.flat]
else:
@@ -239,6 +239,7 @@ def f(self, other, level=None, fill_value=None):
f.__name__ = name
return f
+
def _unbox(func):
@Appender(func.__doc__)
def f(self, *args, **kwargs):
@@ -288,9 +289,10 @@ def f(self, axis=0, dtype=None, out=None, skipna=True, level=None):
#----------------------------------------------------------------------
# Series class
+
class Series(np.ndarray, generic.PandasObject):
_AXIS_NUMBERS = {
- 'index' : 0
+ 'index': 0
}
_AXIS_NAMES = dict((v, k) for k, v in _AXIS_NUMBERS.iteritems())
@@ -322,7 +324,8 @@ def __new__(cls, data=None, index=None, dtype=None, name=None,
elif isinstance(index, PeriodIndex):
data = [data.get(i, nan) for i in index]
else:
- data = lib.fast_multiget(data, index.values, default=np.nan)
+ data = lib.fast_multiget(data, index.values,
+ default=np.nan)
except TypeError:
data = [data.get(i, nan) for i in index]
elif isinstance(data, types.GeneratorType):
@@ -830,7 +833,7 @@ def reset_index(self, level=None, drop=False, name=None, inplace=False):
if name is None:
df = DataFrame(self)
else:
- df = DataFrame({name : self})
+ df = DataFrame({name: self})
return df.reset_index(level=level, drop=drop)
@@ -1153,7 +1156,7 @@ def max(self, axis=None, out=None, skipna=True, level=None):
@Substitution(name='standard deviation', shortname='stdev',
na_action=_doc_exclude_na, extras='')
- @Appender(_stat_doc +
+ @Appender(_stat_doc +
"""
Normalized by N-1 (unbiased estimator).
""")
@@ -1166,7 +1169,7 @@ def std(self, axis=None, dtype=None, out=None, ddof=1, skipna=True,
@Substitution(name='variance', shortname='var',
na_action=_doc_exclude_na, extras='')
- @Appender(_stat_doc +
+ @Appender(_stat_doc +
"""
Normalized by N-1 (unbiased estimator).
""")
@@ -1432,7 +1435,7 @@ def describe(self, percentile_width=50):
lib.Timestamp(top), freq]
else:
- lb = .5 * (1. - percentile_width/100.)
+ lb = .5 * (1. - percentile_width / 100.)
ub = 1. - lb
def pretty_name(x):
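The `describe()` hunk above derives quantile bounds from `percentile_width`: a central band of that width leaves `(100 - width) / 2` percent per tail. A small sketch of just that arithmetic (helper name is illustrative):

```python
def percentile_bounds(percentile_width):
    # Quantile bounds from the describe() hunk above: e.g. a width of 50
    # yields the quartiles, since each tail holds 25 percent.
    lb = 0.5 * (1.0 - percentile_width / 100.0)
    return lb, 1.0 - lb
```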
@@ -1567,7 +1570,7 @@ def clip_lower(self, threshold):
"""
return np.where(self < threshold, threshold, self)
-#-------------------------------------------------------------------------------
+#------------------------------------------------------------------------------
# Combination
def append(self, to_append, verify_integrity=False):
@@ -1993,6 +1996,7 @@ def map(self, arg, na_action=None):
if na_action == 'ignore':
mask = isnull(values)
+
def map_f(values, f):
return lib.map_infer_mask(values, f, mask.view(np.uint8))
else:
@@ -2245,7 +2249,6 @@ def fillna(self, value=None, method='pad', inplace=False,
return result
-
def replace(self, to_replace, value=None, method='pad', inplace=False,
limit=None):
"""
@@ -2282,15 +2285,15 @@ def replace(self, to_replace, value=None, method='pad', inplace=False,
"""
result = self.copy() if not inplace else self
- def _rep_one(s, to_rep, v): # replace single value
+ def _rep_one(s, to_rep, v): # replace single value
mask = com.mask_missing(s.values, to_rep)
np.putmask(s.values, mask, v)
return s
- def _rep_dict(rs, to_rep): # replace {[src] -> dest}
+ def _rep_dict(rs, to_rep): # replace {[src] -> dest}
all_src = set()
- dd = {} # group by unique destination value
+ dd = {} # group by unique destination value
for s, d in to_rep.iteritems():
dd.setdefault(d, []).append(s)
all_src.add(s)
@@ -2298,12 +2301,12 @@ def _rep_dict(rs, to_rep): # replace {[src] -> dest}
if any(d in all_src for d in dd.keys()):
# don't clobber each other at the cost of temporaries
masks = {}
- for d, sset in dd.iteritems(): # now replace by each dest
+ for d, sset in dd.iteritems(): # now replace by each dest
masks[d] = com.mask_missing(rs.values, sset)
for d, m in masks.iteritems():
np.putmask(rs.values, m, d)
- else: # if no risk of clobbering then simple
+ else: # if no risk of clobbering then simple
for d, sset in dd.iteritems():
_rep_one(rs, sset, d)
return rs
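The `_rep_dict` hunks above group `{src: dest}` pairs by destination and precompute masks so one replacement cannot feed into another. A list-based sketch of that clobber-avoidance idea, standing in for the `com.mask_missing` / `np.putmask` calls (names are illustrative):

```python
def replace_values(values, mapping):
    # Replace elements per `mapping` without chaining: {1: 2, 2: 3}
    # must not turn a 1 into a 3 via an intermediate 2.
    by_dest = {}  # group sources by destination, like dd in the hunk
    for src, dest in mapping.items():
        by_dest.setdefault(dest, []).append(src)

    # precompute boolean masks first, so later writes can't clobber
    # positions that an earlier destination already matched
    masks = {dest: [v in srcs for v in values]
             for dest, srcs in by_dest.items()}

    out = list(values)
    for dest, mask in masks.items():
        for i, hit in enumerate(mask):
            if hit:
                out[i] = dest
    return out
```

With chained mappings the intermediate value is never re-replaced: `replace_values([1, 2, 3], {1: 2, 2: 3})` gives `[2, 3, 3]`, and a swap `{'a': 'b', 'b': 'a'}` works symmetrically.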
@@ -2316,17 +2319,17 @@ def _rep_dict(rs, to_rep): # replace {[src] -> dest}
if isinstance(to_replace, (list, np.ndarray)):
- if isinstance(value, (list, np.ndarray)): # check same length
+ if isinstance(value, (list, np.ndarray)): # check same length
vl, rl = len(value), len(to_replace)
if vl == rl:
return _rep_dict(result, dict(zip(to_replace, value)))
raise ValueError('Got %d to replace but %d values' % (rl, vl))
- elif value is not None: # otherwise all replaced with same value
+ elif value is not None: # otherwise all replaced with same value
return _rep_one(result, to_replace, value)
- else: # method
+ else: # method
if method is None: # pragma: no cover
raise ValueError('must specify a fill method')
fill_f = _get_fill_func(method)
@@ -2339,7 +2342,6 @@ def _rep_dict(rs, to_rep): # replace {[src] -> dest}
name=self.name)
return result
-
raise ValueError('Unrecognized to_replace type %s' %
type(to_replace))
@@ -2746,9 +2748,10 @@ def str(self):
_INDEX_TYPES = ndarray, Index, list, tuple
-#-------------------------------------------------------------------------------
+#------------------------------------------------------------------------------
# Supplementary functions
+
def remove_na(arr):
"""
Return array containing only true/non-NaN values, possibly empty.
@@ -2769,7 +2772,7 @@ def _try_cast(arr):
except (ValueError, TypeError):
if dtype is not None and raise_cast_failure:
raise
- else: # pragma: no cover
+ else: # pragma: no cover
subarr = np.array(data, dtype=object, copy=copy)
return subarr
@@ -2801,7 +2804,7 @@ def _try_cast(arr):
try:
subarr = _try_cast(data)
except Exception:
- if raise_cast_failure: # pragma: no cover
+ if raise_cast_failure: # pragma: no cover
raise
subarr = np.array(data, dtype=object, copy=copy)
subarr = lib.maybe_convert_objects(subarr)
@@ -2846,6 +2849,7 @@ def _try_cast(arr):
return subarr
+
def _dtype_from_scalar(val):
if isinstance(val, np.datetime64):
# ugly hacklet
@@ -2853,6 +2857,7 @@ def _dtype_from_scalar(val):
return val, np.dtype('M8[ns]')
return val, type(val)
+
def _get_rename_function(mapper):
if isinstance(mapper, (dict, Series)):
def f(x):
@@ -2865,9 +2870,8 @@ def f(x):
return f
-def _resolve_offset(freq, kwds):
- from pandas.core.datetools import getOffset
+def _resolve_offset(freq, kwds):
if 'timeRule' in kwds or 'offset' in kwds:
offset = kwds.get('offset', None)
offset = kwds.get('timeRule', offset)
@@ -2886,6 +2890,7 @@ def _resolve_offset(freq, kwds):
return offset
+
def _get_fill_func(method):
method = com._clean_fill_method(method)
if method == 'pad':
@@ -2904,6 +2909,7 @@ def _get_fill_func(method):
# Put here, otherwise monkey-patching in methods fails
+
class TimeSeries(Series):
def _repr_footer(self):
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 3172c5a395548..cdbeffbbafdd1 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -9,6 +9,7 @@
import pandas.core.common as com
import operator
+
class repeat(object):
def __init__(self, obj):
self.obj = obj
@@ -16,6 +17,7 @@ def __init__(self, obj):
def __getitem__(self, i):
return self.obj
+
class azip(object):
def __init__(self, *args):
self.cols = []
@@ -328,6 +330,7 @@ def f(x):
return _na_map(f, arr)
+
def str_repeat(arr, repeats):
"""
Duplicate each string in the array by indicated number of times
@@ -358,6 +361,7 @@ def rep(x, r):
result = lib.vec_binop(arr, repeats, rep)
return result
+
def str_match(arr, pat, flags=0):
"""
Find groups in each string (from beginning) using passed regular expression
@@ -374,6 +378,7 @@ def str_match(arr, pat, flags=0):
matches : array
"""
regex = re.compile(pat, flags=flags)
+
def f(x):
m = regex.match(x)
if m:
@@ -384,7 +389,6 @@ def f(x):
return _na_map(f, arr)
-
def str_join(arr, sep):
"""
Join lists contained as elements in array, a la str.join
@@ -412,7 +416,6 @@ def str_len(arr):
return _na_map(len, arr)
-
def str_findall(arr, pat, flags=0):
"""
Find all occurrences of pattern or regular expression
@@ -582,6 +585,7 @@ def str_wrap(arr, width=80):
"""
raise NotImplementedError
+
def str_get(arr, i):
"""
Extract element from lists, tuples, or strings in each element in the array
@@ -598,6 +602,7 @@ def str_get(arr, i):
f = lambda x: x[i]
return _na_map(f, arr)
+
def str_decode(arr, encoding):
"""
Decode character string to unicode using indicated encoding
@@ -613,6 +618,7 @@ def str_decode(arr, encoding):
f = lambda x: x.decode(encoding)
return _na_map(f, arr)
+
def str_encode(arr, encoding):
"""
Encode character string to unicode using indicated encoding
@@ -628,6 +634,7 @@ def str_encode(arr, encoding):
f = lambda x: x.encode(encoding)
return _na_map(f, arr)
+
def _noarg_wrapper(f):
def wrapper(self):
result = f(self.series)
@@ -661,6 +668,7 @@ def wrapper3(self, pat, na=np.nan):
return wrapper
+
def copy(source):
"Copy a docstring from another source function (if present)"
def do_copy(target):
diff --git a/pandas/io/data.py b/pandas/io/data.py
index 8753d1dabfba2..e4c1ae9a9817a 100644
--- a/pandas/io/data.py
+++ b/pandas/io/data.py
@@ -82,13 +82,14 @@ def get_quote_yahoo(symbols):
if not isinstance(symbols, list):
raise TypeError, "symbols must be a list"
# for codes see: http://www.gummy-stuff.org/Yahoo-data.htm
- codes = {'symbol':'s','last':'l1','change_pct':'p2','PE':'r','time':'t1','short_ratio':'s7'}
- request = str.join('',codes.values()) # code request string
+ codes = {'symbol': 's', 'last': 'l1', 'change_pct': 'p2', 'PE': 'r',
+ 'time': 't1', 'short_ratio': 's7'}
+ request = str.join('',codes.values()) # code request string
header = codes.keys()
data = dict(zip(codes.keys(), [[] for i in range(len(codes))]))
- urlStr = 'http://finance.yahoo.com/d/quotes.csv?s=%s&f=%s' % (str.join('+',symbols), request)
+ urlStr = 'http://finance.yahoo.com/d/quotes.csv?s=%s&f=%s' % (str.join('+', symbols), request)
try:
lines = urllib2.urlopen(urlStr).readlines()
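The `get_quote_yahoo` hunk above concatenates the field codes into a request string and `'+'`-joins the symbols. A sketch of just the string assembly (the endpoint is historical and the helper name is illustrative; no network call is made):

```python
def build_quote_url(symbols, codes):
    # Request string is the concatenation of the field codes; tickers
    # are joined with '+', as in the get_quote_yahoo hunk above.
    request = ''.join(codes.values())
    return ('http://finance.yahoo.com/d/quotes.csv?s=%s&f=%s'
            % ('+'.join(symbols), request))
```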
@@ -178,8 +179,8 @@ def get_data_fred(name=None, start=dt.datetime(2010, 1, 1),
url = fred_URL + '%s' % name + \
'/downloaddata/%s' % name + '.csv'
- data = read_csv(urllib.urlopen(url), index_col=0, parse_dates=True, header=None,
- skiprows=1, names=["DATE", name])
+ data = read_csv(urllib.urlopen(url), index_col=0, parse_dates=True,
+ header=None, skiprows=1, names=["DATE", name])
return data.truncate(start, end)
@@ -197,10 +198,10 @@ def get_data_famafrench(name, start=None, end=None):
datasets = {}
for i in range(len(file_edges) - 1):
- dataset = [d.split() for d in data[(file_edges[i] + 1):file_edges[i+1]]]
+ dataset = [d.split() for d in data[(file_edges[i] + 1):file_edges[i + 1]]]
if(len(dataset) > 10):
ncol = np.median(np.array([len(d) for d in dataset]))
- header_index = np.where(np.array([len(d) for d in dataset]) == (ncol-1))[0][-1]
+ header_index = np.where(np.array([len(d) for d in dataset]) == (ncol - 1))[0][-1]
header = dataset[header_index]
# to ensure the header is unique
header = [str(j + 1) + " " + header[j] for j in range(len(header))]
diff --git a/pandas/io/date_converters.py b/pandas/io/date_converters.py
index b9325b97b30ce..ce670eec7032f 100644
--- a/pandas/io/date_converters.py
+++ b/pandas/io/date_converters.py
@@ -2,17 +2,20 @@
import numpy as np
import pandas.lib as lib
+
def parse_date_time(date_col, time_col):
date_col = _maybe_cast(date_col)
time_col = _maybe_cast(time_col)
return lib.try_parse_date_and_time(date_col, time_col)
+
def parse_date_fields(year_col, month_col, day_col):
year_col = _maybe_cast(year_col)
month_col = _maybe_cast(month_col)
day_col = _maybe_cast(day_col)
return lib.try_parse_year_month_day(year_col, month_col, day_col)
+
def parse_all_fields(year_col, month_col, day_col, hour_col, minute_col,
second_col):
year_col = _maybe_cast(year_col)
@@ -24,6 +27,7 @@ def parse_all_fields(year_col, month_col, day_col, hour_col, minute_col,
return lib.try_parse_datetime_components(year_col, month_col, day_col,
hour_col, minute_col, second_col)
+
def generic_parser(parse_func, *cols):
N = _check_columns(cols)
results = np.empty(N, dtype=object)
@@ -34,11 +38,13 @@ def generic_parser(parse_func, *cols):
return results
+
def _maybe_cast(arr):
if not arr.dtype.type == np.object_:
arr = np.array(arr, dtype=object)
return arr
+
def _check_columns(cols):
assert(len(cols) > 0)
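The `date_converters` hunks above center on `generic_parser`, which checks that all columns share one length and applies a parse function row-wise. A list-based sketch of that driver, without the ndarray result buffer:

```python
def generic_parser(parse_func, *cols):
    # Every column must have the same length; then parse_func is applied
    # to one value from each column per row, as in the hunks above.
    n = len(cols[0])
    assert all(len(c) == n for c in cols)
    return [parse_func(*row) for row in zip(*cols)]
```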
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index db8c4a132d25b..76002917e8e47 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -26,6 +26,7 @@ def next(x):
from pandas.util.decorators import Appender
+
class DateConversionError(Exception):
pass
@@ -146,11 +147,12 @@ def _is_url(url):
Very naive check to see if url is an http(s), ftp, or file location.
"""
parsed_url = urlparse(url)
- if parsed_url.scheme in ['http','file', 'ftp', 'https']:
+ if parsed_url.scheme in ['http', 'file', 'ftp', 'https']:
return True
else:
return False
+
def _read(cls, filepath_or_buffer, kwds):
"Generic reader of line files."
encoding = kwds.get('encoding', None)
@@ -176,7 +178,7 @@ def _read(cls, filepath_or_buffer, kwds):
try:
# universal newline mode
f = com._get_handle(filepath_or_buffer, 'U', encoding=encoding)
- except Exception: # pragma: no cover
+ except Exception: # pragma: no cover
f = com._get_handle(filepath_or_buffer, 'r', encoding=encoding)
if kwds.get('date_parser', None) is not None:
@@ -199,6 +201,7 @@ def _read(cls, filepath_or_buffer, kwds):
return parser.get_chunk()
+
@Appender(_read_csv_doc)
def read_csv(filepath_or_buffer,
sep=',',
@@ -249,6 +252,7 @@ def read_csv(filepath_or_buffer,
return _read(TextParser, filepath_or_buffer, kdict)
+
@Appender(_read_table_doc)
def read_table(filepath_or_buffer,
sep='\t',
@@ -299,6 +303,7 @@ def read_table(filepath_or_buffer,
return _read(TextParser, filepath_or_buffer, kdict)
+
@Appender(_read_fwf_doc)
def read_fwf(filepath_or_buffer,
colspecs=None,
@@ -353,13 +358,14 @@ def read_fwf(filepath_or_buffer,
if widths is not None:
colspecs, col = [], 0
for w in widths:
- colspecs.append( (col, col+w) )
+ colspecs.append((col, col+w))
col += w
kdict['colspecs'] = colspecs
kdict['thousands'] = thousands
return _read(FixedWidthFieldParser, filepath_or_buffer, kdict)
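The `read_fwf` hunk above converts a list of field `widths` into cumulative `(start, end)` column specs. That loop, extracted as a standalone sketch (helper name is illustrative):

```python
def widths_to_colspecs(widths):
    # Cumulative (start, end) offsets, one pair per fixed-width field,
    # exactly as built in the read_fwf hunk above.
    colspecs, col = [], 0
    for w in widths:
        colspecs.append((col, col + w))
        col += w
    return colspecs
```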
+
def read_clipboard(**kwargs): # pragma: no cover
"""
Read text from clipboard and pass to read_table. See read_table for the
@@ -373,7 +379,8 @@ def read_clipboard(**kwargs): # pragma: no cover
text = clipboard_get()
return read_table(StringIO(text), **kwargs)
-def to_clipboard(obj): # pragma: no cover
+
+def to_clipboard(obj): # pragma: no cover
"""
Attempt to write text representation of object to the system clipboard
@@ -387,6 +394,7 @@ def to_clipboard(obj): # pragma: no cover
from pandas.util.clipboard import clipboard_set
clipboard_set(str(obj))
+
class BufferedReader(object):
"""
For handling different kinds of files, e.g. zip files where reading out a
@@ -394,7 +402,8 @@ class BufferedReader(object):
"""
def __init__(self, fh, delimiter=','):
- pass # pragma: no coverage
+ pass # pragma: no coverage
+
class BufferedCSVReader(BufferedReader):
pass
@@ -817,7 +826,7 @@ def get_chunk(self, rows=None):
self._first_chunk = False
columns = list(self.orig_columns)
- if len(content) == 0: # pragma: no cover
+ if len(content) == 0: # pragma: no cover
if self.index_col is not None:
if np.isscalar(self.index_col):
index = Index([], name=self.index_name)
@@ -903,7 +912,7 @@ def ix(col):
index = data.pop(i)
if not self._implicit_index:
columns.pop(i)
- else: # given a list of index
+ else: # given a list of index
to_remove = []
index = []
for idx in self.index_col:
@@ -938,7 +947,7 @@ def _get_name(icol):
name = _get_name(self.index_col)
index = data.pop(name)
col_names.remove(name)
- else: # given a list of index
+ else: # given a list of index
to_remove = []
index = []
for idx in self.index_col:
@@ -1085,7 +1094,7 @@ def _get_lines(self, rows=None):
lines.extend(source[self.pos:])
self.pos = len(source)
else:
- lines.extend(source[self.pos:self.pos+rows])
+ lines.extend(source[self.pos:self.pos + rows])
self.pos += rows
else:
new_rows = []
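The `_get_lines` hunk above consumes a line buffer either whole (`rows=None`) or in chunks of `rows`, advancing a position cursor. A small functional sketch of that bookkeeping (the tuple-returning API is illustrative; the real method mutates `self.pos`):

```python
def take_rows(source, pos, rows=None):
    # rows=None drains everything from pos onward; otherwise take the
    # next `rows` lines. Returns (lines, new_pos).
    if rows is None:
        return source[pos:], len(source)
    return source[pos:pos + rows], pos + rows
```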
@@ -1121,6 +1130,7 @@ def _get_lines(self, rows=None):
lines = self._check_comments(lines)
return self._check_thousands(lines)
+
def _get_na_values(col, na_values):
if isinstance(na_values, dict):
if col in na_values:
@@ -1130,6 +1140,7 @@ def _get_na_values(col, na_values):
else:
return na_values
+
def _convert_to_ndarrays(dct, na_values, verbose=False):
result = {}
for c, values in dct.iteritems():
@@ -1140,6 +1151,7 @@ def _convert_to_ndarrays(dct, na_values, verbose=False):
print 'Filled %d NA values in column %s' % (na_count, str(c))
return result
+
def _convert_types(values, na_values):
na_count = 0
if issubclass(values.dtype.type, (np.number, np.bool_)):
@@ -1162,6 +1174,7 @@ def _convert_types(values, na_values):
return result, na_count
+
def _try_convert_dates(parser, colspec, data_dict, columns):
colspec = _get_col_names(colspec, columns)
new_name = '_'.join([str(x) for x in colspec])
@@ -1173,6 +1186,7 @@ def _try_convert_dates(parser, colspec, data_dict, columns):
new_col = parser(_concat_date_cols(to_parse))
return new_name, new_col, colspec
+
def _get_col_names(colspec, columns):
colset = set(columns)
colnames = []
@@ -1201,7 +1215,7 @@ class FixedWidthReader(object):
def __init__(self, f, colspecs, filler, thousands=None):
self.f = f
self.colspecs = colspecs
- self.filler = filler # Empty characters between fields.
+ self.filler = filler # Empty characters between fields.
self.thousands = thousands
assert isinstance(colspecs, (tuple, list))
@@ -1323,8 +1337,8 @@ def parse(self, sheetname, header=0, skiprows=None, skip_footer=0,
if skipfooter is not None:
skip_footer = skipfooter
- choose = {True:self._parse_xlsx,
- False:self._parse_xls}
+ choose = {True: self._parse_xlsx,
+ False: self._parse_xls}
return choose[self.use_xlsx](sheetname, header=header,
skiprows=skiprows, index_col=index_col,
parse_cols=parse_cols,
@@ -1399,7 +1413,7 @@ def _parse_xls(self, sheetname, header=0, skiprows=None,
if typ == XL_CELL_DATE:
dt = xldate_as_tuple(value, datemode)
# how to produce this first case?
- if dt[0] < MINYEAR: # pragma: no cover
+ if dt[0] < MINYEAR: # pragma: no cover
value = time(*dt[3:])
else:
value = datetime(*dt)
@@ -1436,6 +1450,7 @@ def _trim_excel_header(row):
row = row[1:]
return row
+
class ExcelWriter(object):
"""
Class for writing DataFrame objects into excel sheets, uses xlwt for xls,
@@ -1456,7 +1471,7 @@ def __init__(self, path):
self.fm_date = xlwt.easyxf(num_format_str='YYYY-MM-DD')
else:
from openpyxl.workbook import Workbook
- self.book = Workbook(optimized_write = True)
+ self.book = Workbook(optimized_write=True)
self.path = path
self.sheets = {}
self.cur_sheet = None
@@ -1498,15 +1513,15 @@ def _writerow_xls(self, row, sheet_name):
for i, val in enumerate(row):
if isinstance(val, (datetime.datetime, datetime.date)):
if isinstance(val, datetime.datetime):
- sheetrow.write(i,val, self.fm_datetime)
+ sheetrow.write(i, val, self.fm_datetime)
else:
- sheetrow.write(i,val, self.fm_date)
+ sheetrow.write(i, val, self.fm_date)
elif isinstance(val, np.int64):
- sheetrow.write(i,int(val))
+ sheetrow.write(i, int(val))
elif isinstance(val, np.bool8):
- sheetrow.write(i,bool(val))
+ sheetrow.write(i, bool(val))
else:
- sheetrow.write(i,val)
+ sheetrow.write(i, val)
row_idx += 1
if row_idx == 1000:
sheet.flush_row_data()
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index c82dab08a3a1c..af480b5a6457f 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -29,44 +29,45 @@
# reading and writing the full object in one go
_TYPE_MAP = {
- Series : 'series',
- SparseSeries : 'sparse_series',
- TimeSeries : 'series',
- DataFrame : 'frame',
- SparseDataFrame : 'sparse_frame',
- Panel : 'wide',
- SparsePanel : 'sparse_panel'
+ Series: 'series',
+ SparseSeries: 'sparse_series',
+ TimeSeries: 'series',
+ DataFrame: 'frame',
+ SparseDataFrame: 'sparse_frame',
+ Panel: 'wide',
+ SparsePanel: 'sparse_panel'
}
_NAME_MAP = {
- 'series' : 'Series',
- 'time_series' : 'TimeSeries',
- 'sparse_series' : 'SparseSeries',
- 'frame' : 'DataFrame',
- 'sparse_frame' : 'SparseDataFrame',
- 'frame_table' : 'DataFrame (Table)',
- 'wide' : 'Panel',
- 'sparse_panel' : 'SparsePanel',
- 'wide_table' : 'Panel (Table)',
- 'long' : 'LongPanel',
+ 'series': 'Series',
+ 'time_series': 'TimeSeries',
+ 'sparse_series': 'SparseSeries',
+ 'frame': 'DataFrame',
+ 'sparse_frame': 'SparseDataFrame',
+ 'frame_table': 'DataFrame (Table)',
+ 'wide': 'Panel',
+ 'sparse_panel': 'SparsePanel',
+ 'wide_table': 'Panel (Table)',
+ 'long': 'LongPanel',
# legacy h5 files
- 'Series' : 'Series',
- 'TimeSeries' : 'TimeSeries',
- 'DataFrame' : 'DataFrame',
- 'DataMatrix' : 'DataMatrix'
+ 'Series': 'Series',
+ 'TimeSeries': 'TimeSeries',
+ 'DataFrame': 'DataFrame',
+ 'DataMatrix': 'DataMatrix'
}
# legacy handlers
_LEGACY_MAP = {
- 'Series' : 'legacy_series',
- 'TimeSeries' : 'legacy_series',
- 'DataFrame' : 'legacy_frame',
- 'DataMatrix' : 'legacy_frame',
- 'WidePanel' : 'wide_table',
+ 'Series': 'legacy_series',
+ 'TimeSeries': 'legacy_series',
+ 'DataFrame': 'legacy_frame',
+ 'DataMatrix': 'legacy_frame',
+ 'WidePanel': 'wide_table',
}
# oh the troubles to reduce import time
_table_mod = None
+
def _tables():
global _table_mod
if _table_mod is None:
@@ -74,6 +75,7 @@ def _tables():
_table_mod = tables
return _table_mod
+
@contextmanager
def get_store(path, mode='a', complevel=None, complib=None,
fletcher32=False):
@@ -120,6 +122,7 @@ def get_store(path, mode='a', complevel=None, complib=None,
if store is not None:
store.close()
+
class HDFStore(object):
"""
dict-like IO interface for storing pandas objects in PyTables
@@ -167,7 +170,7 @@ def __init__(self, path, mode='a', complevel=None, complib=None,
fletcher32=False):
try:
import tables as _
- except ImportError: # pragma: no cover
+ except ImportError: # pragma: no cover
raise Exception('HDFStore requires PyTables')
self.path = path
@@ -226,7 +229,7 @@ def open(self, mode='a', warn=True):
See HDFStore docstring or tables.openFile for info about modes
"""
self.mode = mode
- if warn and mode == 'w': # pragma: no cover
+ if warn and mode == 'w': # pragma: no cover
while True:
response = raw_input("Re-opening as mode='w' will delete the "
"current file. Continue (y/n)?")
@@ -328,8 +331,8 @@ def put(self, key, value, table=False, append=False,
value : {Series, DataFrame, Panel}
table : boolean, default False
Write as a PyTables Table structure which may perform worse but
- allow more flexible operations like searching / selecting subsets of
- the data
+ allow more flexible operations like searching / selecting subsets
+ of the data
append : boolean, default False
For table data structures, append the input data to the existing
table
@@ -342,7 +345,7 @@ def put(self, key, value, table=False, append=False,
comp=compression)
def _get_handler(self, op, kind):
- return getattr(self,'_%s_%s' % (op, kind))
+ return getattr(self, '_%s_%s' % (op, kind))
def remove(self, key, where=None):
"""
@@ -666,7 +669,8 @@ def _read_index_node(self, node):
if 'name' in node._v_attrs:
name = node._v_attrs.name
- index_class = _alias_to_class(getattr(node._v_attrs, 'index_class', ''))
+ index_class = _alias_to_class(getattr(node._v_attrs,
+ 'index_class', ''))
factory = _get_index_factory(index_class)
kwargs = {}
@@ -714,7 +718,6 @@ def _write_array(self, group, key, value):
getattr(group, key)._v_attrs.transposed = transposed
return
-
if value.dtype.type == np.object_:
vlarr = self.handle.createVLArray(group, key,
_tables().ObjectAtom())
@@ -749,12 +752,12 @@ def _write_table(self, group, items=None, index=None, columns=None,
if 'table' not in group:
# create the table
- desc = {'index' : index_t,
- 'column' : col_t,
- 'values' : _tables().FloatCol(shape=(len(values)))}
+ desc = {'index': index_t,
+ 'column': col_t,
+ 'values': _tables().FloatCol(shape=(len(values)))}
- options = {'name' : 'table',
- 'description' : desc}
+ options = {'name': 'table',
+ 'description': desc}
if compression:
complevel = self.complevel
@@ -783,7 +786,7 @@ def _write_table(self, group, items=None, index=None, columns=None,
table._v_attrs.index_kind = index_kind
table._v_attrs.columns_kind = cols_kind
if append:
- existing_fields = getattr(table._v_attrs,'fields',None)
+ existing_fields = getattr(table._v_attrs, 'fields', None)
if (existing_fields is not None and
existing_fields != list(items)):
raise Exception("appended items do not match existing items"
@@ -809,7 +812,7 @@ def _write_table(self, group, items=None, index=None, columns=None,
row['values'] = v
row.append()
self.handle.flush()
- except (ValueError), detail: # pragma: no cover
+ except (ValueError), detail: # pragma: no cover
print "value_error in _write_table -> %s" % str(detail)
try:
self.handle.flush()
@@ -918,6 +921,7 @@ def _read_panel_table(self, group, where=None):
wp = wp.reindex(minor=new_minor)
return wp
+
def _delete_from_table(self, group, where = None):
table = getattr(group, 'table')
@@ -933,6 +937,7 @@ def _delete_from_table(self, group, where = None):
self.handle.flush()
return len(s.values)
+
def _convert_index(index):
if isinstance(index, DatetimeIndex):
converted = index.asi8
@@ -960,7 +965,7 @@ def _convert_index(index):
converted = np.array([time.mktime(v.timetuple()) for v in values],
dtype=np.int32)
return converted, 'date', _tables().Time32Col()
- elif inferred_type =='string':
+ elif inferred_type == 'string':
converted = np.array(list(values), dtype=np.str_)
itemsize = converted.dtype.itemsize
return converted, 'string', _tables().StringCol(itemsize)
@@ -974,10 +979,11 @@ def _convert_index(index):
elif inferred_type == 'floating':
atom = _tables().Float64Col()
return np.asarray(values, dtype=np.float64), 'float', atom
- else: # pragma: no cover
+ else: # pragma: no cover
atom = _tables().ObjectAtom()
return np.asarray(values, dtype='O'), 'object', atom
+
def _read_array(group, key):
import tables
node = getattr(group, key)
@@ -1006,6 +1012,7 @@ def _read_array(group, key):
else:
return ret
+
def _unconvert_index(data, kind):
if kind == 'datetime64':
index = DatetimeIndex(data)
@@ -1018,19 +1025,21 @@ def _unconvert_index(data, kind):
index = np.array(data)
elif kind == 'object':
index = np.array(data[0])
- else: # pragma: no cover
+ else: # pragma: no cover
raise ValueError('unrecognized index type %s' % kind)
return index
+
def _unconvert_index_legacy(data, kind, legacy=False):
if kind == 'datetime':
index = lib.time64_to_datetime(data)
elif kind in ('string', 'integer'):
index = np.array(data, dtype=object)
- else: # pragma: no cover
+ else: # pragma: no cover
raise ValueError('unrecognized index type %s' % kind)
return index
+
def _maybe_convert(values, val_kind):
if _need_convert(val_kind):
conv = _get_converter(val_kind)
@@ -1038,19 +1047,22 @@ def _maybe_convert(values, val_kind):
values = conv(values)
return values
+
def _get_converter(kind):
if kind == 'datetime64':
return lambda x: np.array(x, dtype='M8[ns]')
if kind == 'datetime':
return lib.convert_timestamps
- else: # pragma: no cover
+ else: # pragma: no cover
raise ValueError('invalid kind %s' % kind)
+
def _need_convert(kind):
if kind in ('datetime', 'datetime64'):
return True
return False
+
def _is_table_type(group):
try:
return 'table' in group._v_attrs.pandas_type
@@ -1058,21 +1070,24 @@ def _is_table_type(group):
# new node, e.g.
return False
-_index_type_map = {DatetimeIndex : 'datetime',
- PeriodIndex : 'period'}
+_index_type_map = {DatetimeIndex: 'datetime',
+ PeriodIndex: 'period'}
_reverse_index_map = {}
for k, v in _index_type_map.iteritems():
_reverse_index_map[v] = k
+
def _class_to_alias(cls):
return _index_type_map.get(cls, '')
+
def _alias_to_class(alias):
- if isinstance(alias, type): # pragma: no cover
- return alias # compat: for a short period of time master stored types
+ if isinstance(alias, type): # pragma: no cover
+ return alias # compat: for a short period of time master stored types
return _reverse_index_map.get(alias, Index)
+
class Selection(object):
"""
Carries out a selection operation on a tables.Table object.
@@ -1109,18 +1124,18 @@ def __init__(self, table, where=None, index_kind=None):
def generate(self, where):
# and condictions
for c in where:
- op = c.get('op',None)
+ op = c.get('op', None)
value = c['value']
field = c['field']
if field == 'index' and self.index_kind == 'datetime64':
val = lib.Timestamp(value).value
- self.conditions.append('(%s %s %s)' % (field,op,val))
+ self.conditions.append('(%s %s %s)' % (field, op, val))
elif field == 'index' and isinstance(value, datetime):
value = time.mktime(value.timetuple())
- self.conditions.append('(%s %s %s)' % (field,op,value))
+ self.conditions.append('(%s %s %s)' % (field, op, value))
else:
- self.generate_multiple_conditions(op,value,field)
+ self.generate_multiple_conditions(op, value, field)
if len(self.conditions):
self.the_condition = '(' + ' & '.join(self.conditions) + ')'
@@ -1129,15 +1144,15 @@ def generate_multiple_conditions(self, op, value, field):
if op and op == 'in' or isinstance(value, (list, np.ndarray)):
if len(value) <= 61:
- l = '(' + ' | '.join([ "(%s == '%s')" % (field,v)
- for v in value ]) + ')'
+ l = '(' + ' | '.join([ "(%s == '%s')" % (field, v)
+ for v in value]) + ')'
self.conditions.append(l)
else:
self.column_filter = set(value)
else:
if op is None:
op = '=='
- self.conditions.append('(%s %s "%s")' % (field,op,value))
+ self.conditions.append('(%s %s "%s")' % (field, op, value))
def select(self):
"""
@@ -1155,6 +1170,7 @@ def select_coords(self):
"""
self.values = self.table.getWhereList(self.the_condition)
+
def _get_index_factory(klass):
if klass == DatetimeIndex:
def f(values, freq=None, tz=None):
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 6d1628fd8b21f..021f80c065e75 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -10,9 +10,10 @@
from pandas.core.datetools import format as date_format
from pandas.core.api import DataFrame, isnull
-#-------------------------------------------------------------------------------
+#------------------------------------------------------------------------------
# Helper execution function
+
def execute(sql, con, retry=True, cur=None, params=None):
"""
Execute the given SQL query using the provided connection object.
@@ -44,17 +45,19 @@ def execute(sql, con, retry=True, cur=None, params=None):
print 'Error on sql %s' % sql
raise
+
def _safe_fetch(cur):
try:
result = cur.fetchall()
if not isinstance(result, list):
result = list(result)
return result
- except Exception, e: # pragma: no cover
+ except Exception, e: # pragma: no cover
excName = e.__class__.__name__
if excName == 'OperationalError':
return []
+
def tquery(sql, con=None, cur=None, retry=True):
"""
Returns list of tuples corresponding to each row in given sql
@@ -98,6 +101,7 @@ def tquery(sql, con=None, cur=None, retry=True):
return result
+
def uquery(sql, con=None, cur=None, retry=True, params=()):
"""
Does the same thing as tquery, but instead of returning results, it
@@ -119,6 +123,7 @@ def uquery(sql, con=None, cur=None, retry=True, params=()):
return uquery(sql, con, retry=False)
return result
+
def read_frame(sql, con, index_col=None, coerce_float=True):
"""
Returns a DataFrame corresponding to the result set of the query
@@ -152,6 +157,7 @@ def read_frame(sql, con, index_col=None, coerce_float=True):
frame_query = read_frame
+
def write_frame(frame, name=None, con=None, flavor='sqlite', append=False):
"""
Write records stored in a DataFrame to SQLite. The index will currently be
@@ -170,11 +176,13 @@ def write_frame(frame, name=None, con=None, flavor='sqlite', append=False):
data = [tuple(x) for x in frame.values]
con.executemany(insert_sql, data)
+
def has_table(name, con):
sqlstr = "SELECT name FROM sqlite_master WHERE type='table' AND name='%s'" % name
rs = tquery(sqlstr, con)
return len(rs) > 0
+
def get_sqlite_schema(frame, name, dtypes=None, keys=None):
template = """
CREATE TABLE %(name)s (
@@ -206,26 +214,25 @@ def get_sqlite_schema(frame, name, dtypes=None, keys=None):
if isinstance(keys, basestring):
keys = (keys,)
keystr = ', PRIMARY KEY (%s)' % ','.join(keys)
- return template % {'name' : name, 'columns' : columns, 'keystr' : keystr}
-
+ return template % {'name': name, 'columns': columns, 'keystr': keystr}
-
-#-------------------------------------------------------------------------------
+#------------------------------------------------------------------------------
# Query formatting
_formatters = {
- datetime : lambda dt: "'%s'" % date_format(dt),
- str : lambda x: "'%s'" % x,
- np.str_ : lambda x: "'%s'" % x,
- unicode : lambda x: "'%s'" % x,
- float : lambda x: "%.8f" % x,
- int : lambda x: "%s" % x,
- type(None) : lambda x: "NULL",
- np.float64 : lambda x: "%.10f" % x,
- bool : lambda x: "'%s'" % x,
+ datetime: lambda dt: "'%s'" % date_format(dt),
+ str: lambda x: "'%s'" % x,
+ np.str_: lambda x: "'%s'" % x,
+ unicode: lambda x: "'%s'" % x,
+ float: lambda x: "%.8f" % x,
+ int: lambda x: "%s" % x,
+ type(None): lambda x: "NULL",
+ np.float64: lambda x: "%.10f" % x,
+ bool: lambda x: "'%s'" % x,
}
+
def format_query(sql, *args):
"""
diff --git a/pandas/rpy/base.py b/pandas/rpy/base.py
index 070d457edd21d..0c80448684697 100644
--- a/pandas/rpy/base.py
+++ b/pandas/rpy/base.py
@@ -1,5 +1,6 @@
import pandas.rpy.util as util
+
class lm(object):
"""
Examples
@@ -10,4 +11,3 @@ class lm(object):
def __init__(self, formula, data):
pass
-
diff --git a/pandas/rpy/common.py b/pandas/rpy/common.py
index f81ec7ef369ae..481714b94386c 100644
--- a/pandas/rpy/common.py
+++ b/pandas/rpy/common.py
@@ -16,6 +16,7 @@
__all__ = ['convert_robj', 'load_data', 'convert_to_r_dataframe',
'convert_to_r_matrix']
+
def load_data(name, package=None, convert=True):
if package:
importr(package)
@@ -29,15 +30,18 @@ def load_data(name, package=None, convert=True):
else:
return robj
+
def _rclass(obj):
"""
Return R class name for input object
"""
return r['class'](obj)[0]
+
def _is_null(obj):
return _rclass(obj) == 'NULL'
+
def _convert_list(obj):
"""
Convert named Vector to dict
@@ -45,6 +49,7 @@ def _convert_list(obj):
values = [convert_robj(x) for x in obj]
return dict(zip(obj.names, values))
+
def _convert_array(obj):
"""
Convert Array to ndarray
@@ -59,7 +64,6 @@ def _convert_array(obj):
if len(dim) == 3:
arr = values.reshape(dim[-1:] + dim[:-1]).swapaxes(1, 2)
-
if obj.names is not None:
name_list = [list(x) for x in obj.names]
if len(dim) == 2:
@@ -73,6 +77,7 @@ def _convert_array(obj):
else:
return arr
+
def _convert_vector(obj):
if isinstance(obj, robj.IntVector):
return _convert_int_vector(obj)
@@ -83,6 +88,7 @@ def _convert_vector(obj):
NA_INTEGER = -2147483648
+
def _convert_int_vector(obj):
arr = np.asarray(obj)
mask = arr == NA_INTEGER
@@ -91,6 +97,7 @@ def _convert_int_vector(obj):
arr[mask] = np.nan
return arr
+
def _convert_str_vector(obj):
arr = np.asarray(obj, dtype=object)
mask = arr == robj.NA_Character
@@ -98,6 +105,7 @@ def _convert_str_vector(obj):
arr[mask] = np.nan
return arr
+
def _convert_DataFrame(rdf):
columns = list(rdf.colnames)
rows = np.array(rdf.rownames)
@@ -125,6 +133,7 @@ def _convert_DataFrame(rdf):
return pd.DataFrame(data, index=_check_int(rows), columns=columns)
+
def _convert_Matrix(mat):
columns = mat.colnames
rows = mat.rownames
@@ -135,6 +144,7 @@ def _convert_Matrix(mat):
return pd.DataFrame(np.array(mat), index=_check_int(index),
columns=columns)
+
def _check_int(vec):
try:
# R observation numbers come through as strings
@@ -145,8 +155,8 @@ def _check_int(vec):
return vec
_pandas_converters = [
- (robj.DataFrame , _convert_DataFrame),
- (robj.Matrix , _convert_Matrix),
+ (robj.DataFrame, _convert_DataFrame),
+ (robj.Matrix, _convert_Matrix),
(robj.StrVector, _convert_vector),
(robj.FloatVector, _convert_vector),
(robj.Array, _convert_array),
@@ -154,8 +164,8 @@ def _check_int(vec):
]
_converters = [
- (robj.DataFrame , lambda x: _convert_DataFrame(x).toRecords(index=False)),
- (robj.Matrix , lambda x: _convert_Matrix(x).toRecords(index=False)),
+ (robj.DataFrame, lambda x: _convert_DataFrame(x).toRecords(index=False)),
+ (robj.Matrix, lambda x: _convert_Matrix(x).toRecords(index=False)),
(robj.IntVector, _convert_vector),
(robj.StrVector, _convert_vector),
(robj.FloatVector, _convert_vector),
@@ -163,6 +173,7 @@ def _check_int(vec):
(robj.Vector, _convert_list),
]
+
def convert_robj(obj, use_pandas=True):
"""
Convert rpy2 object to a pandas-friendly form
@@ -206,6 +217,7 @@ def convert_robj(obj, use_pandas=True):
np.str: robj.NA_Character,
np.bool: robj.NA_Logical}
+
def convert_to_r_dataframe(df, strings_as_factors=False):
"""
Convert a pandas DataFrame to a R data.frame.
@@ -270,7 +282,6 @@ def convert_to_r_matrix(df, strings_as_factors=False):
raise TypeError("Conversion to matrix only possible with non-mixed "
"type DataFrames")
-
r_dataframe = convert_to_r_dataframe(df, strings_as_factors)
as_matrix = robj.baseenv.get("as.matrix")
r_matrix = as_matrix(r_dataframe)
@@ -282,18 +293,20 @@ def test_convert_list():
obj = r('list(a=1, b=2, c=3)')
converted = convert_robj(obj)
- expected = {'a' : [1], 'b' : [2], 'c' : [3]}
+ expected = {'a': [1], 'b': [2], 'c': [3]}
_test.assert_dict_equal(converted, expected)
+
def test_convert_nested_list():
obj = r('list(a=list(foo=1, bar=2))')
converted = convert_robj(obj)
- expected = {'a' : {'foo' : [1], 'bar' : [2]}}
+ expected = {'a': {'foo': [1], 'bar': [2]}}
_test.assert_dict_equal(converted, expected)
+
def test_convert_frame():
# built-in dataset
df = r['faithful']
@@ -303,6 +316,7 @@ def test_convert_frame():
assert np.array_equal(converted.columns, ['eruptions', 'waiting'])
assert np.array_equal(converted.index, np.arange(1, 273))
+
def _test_matrix():
r('mat <- matrix(rnorm(9), ncol=3)')
r('colnames(mat) <- c("one", "two", "three")')
@@ -310,6 +324,7 @@ def _test_matrix():
return r['mat']
+
def test_convert_matrix():
mat = _test_matrix()
@@ -318,6 +333,7 @@ def test_convert_matrix():
assert np.array_equal(converted.index, ['a', 'b', 'c'])
assert np.array_equal(converted.columns, ['one', 'two', 'three'])
+
def test_convert_r_dataframe():
is_na = robj.baseenv.get("is.na")
@@ -350,6 +366,7 @@ def test_convert_r_dataframe():
else:
assert original == converted
+
def test_convert_r_matrix():
is_na = robj.baseenv.get("is.na")
diff --git a/pandas/rpy/mass.py b/pandas/rpy/mass.py
index 1a663e5729b5f..12fbbdfa4dc98 100644
--- a/pandas/rpy/mass.py
+++ b/pandas/rpy/mass.py
@@ -1,4 +1,2 @@
-
class rlm(object):
pass
-
diff --git a/pandas/rpy/vars.py b/pandas/rpy/vars.py
index 3993423b338ee..4756b2779224c 100644
--- a/pandas/rpy/vars.py
+++ b/pandas/rpy/vars.py
@@ -1,5 +1,6 @@
import pandas.rpy.util as util
+
class VAR(object):
"""
@@ -17,4 +18,3 @@ class VAR(object):
def __init__(y, p=1, type="none", season=None, exogen=None,
lag_max=None, ic=None):
pass
-
diff --git a/pandas/sparse/array.py b/pandas/sparse/array.py
index f0f02d317f26a..c7e783dee910d 100644
--- a/pandas/sparse/array.py
+++ b/pandas/sparse/array.py
@@ -35,12 +35,13 @@ def wrapper(self, other):
return SparseArray(op(self.sp_values, other),
sparse_index=self.sp_index,
fill_value=new_fill_value)
- else: # pragma: no cover
+ else: # pragma: no cover
raise TypeError('operation with %s not supported' % type(other))
wrapper.__name__ = name
return wrapper
+
def _sparse_array_op(left, right, op, name):
if np.isnan(left.fill_value):
sparse_op = lambda a, b: _sparse_nanop(a, b, name)
@@ -61,6 +62,7 @@ def _sparse_array_op(left, right, op, name):
return SparseArray(result, sparse_index=result_index,
fill_value=fill_value)
+
def _sparse_nanop(this, other, name):
sparse_op = getattr(splib, 'sparse_nan%s' % name)
result, result_index = sparse_op(this.sp_values,
@@ -70,6 +72,7 @@ def _sparse_nanop(this, other, name):
return result, result_index
+
def _sparse_fillop(this, other, name):
sparse_op = getattr(splib, 'sparse_%s' % name)
result, result_index = sparse_op(this.sp_values,
@@ -399,6 +402,7 @@ def mean(self, axis=None, dtype=None, out=None):
nsparse = self.sp_index.ngaps
return (sp_sum + self.fill_value * nsparse) / (ct + nsparse)
+
def make_sparse(arr, kind='block', fill_value=nan):
"""
Convert ndarray to sparse format
@@ -428,7 +432,7 @@ def make_sparse(arr, kind='block', fill_value=nan):
index = BlockIndex(length, locs, lens)
elif kind == 'integer':
index = IntIndex(length, indices)
- else: # pragma: no cover
+ else: # pragma: no cover
raise ValueError('must be block or integer type')
sparsified_values = arr[mask]
diff --git a/pandas/sparse/frame.py b/pandas/sparse/frame.py
index c26a37852ea42..0df726fcb40fd 100644
--- a/pandas/sparse/frame.py
+++ b/pandas/sparse/frame.py
@@ -1,6 +1,6 @@
"""
-Data structures for sparse float data. Life is made simpler by dealing only with
-float64 data
+Data structures for sparse float data. Life is made simpler by dealing only
+with float64 data
"""
# pylint: disable=E1101,E1103,W0231,E0202
@@ -42,6 +42,7 @@ def shape(self):
def axes(self):
return [self.sp_frame.columns, self.sp_frame.index]
+
class SparseDataFrame(DataFrame):
"""
DataFrame containing sparse floating point data in the form of SparseSeries
@@ -291,10 +292,11 @@ def _delete_column_index(self, loc):
new_columns = self.columns[:loc]
else:
new_columns = Index(np.concatenate((self.columns[:loc],
- self.columns[loc+1:])))
+ self.columns[loc + 1:])))
self.columns = new_columns
_index = None
+
def _set_index(self, index):
self._index = _ensure_index(index)
for v in self._series.values():
@@ -337,7 +339,7 @@ def __getitem__(self, key):
if com._is_bool_indexer(key):
key = np.asarray(key, dtype=bool)
return self._getitem_array(key)
- else: # pragma: no cover
+ else: # pragma: no cover
raise
@Appender(DataFrame.get_value.__doc__, indents=0)
@@ -575,7 +577,7 @@ def _rename_columns_inplace(self, mapper):
for col in self.columns:
new_col = mapper(col)
- if new_col in new_series: # pragma: no cover
+ if new_col in new_series: # pragma: no cover
raise Exception('Non-unique mapping!')
new_series[new_col] = self[col]
new_columns.append(new_col)
@@ -626,7 +628,7 @@ def _join_compat(self, other, on=None, how='left', lsuffix='', rsuffix='',
def _join_index(self, other, how, lsuffix, rsuffix):
if isinstance(other, Series):
assert(other.name is not None)
- other = SparseDataFrame({other.name : other},
+ other = SparseDataFrame({other.name: other},
default_fill_value=self.default_fill_value)
join_index = self.index.join(other.index, how=how)
@@ -786,6 +788,7 @@ def fillna(self, value=None, method='pad', inplace=False, limit=None):
return self._constructor(new_series, index=self.index,
columns=self.columns)
+
def stack_sparse_frame(frame):
"""
Only makes sense when fill_value is NaN
diff --git a/pandas/sparse/list.py b/pandas/sparse/list.py
index 62c9d096d8dfa..9f59b9108a6b0 100644
--- a/pandas/sparse/list.py
+++ b/pandas/sparse/list.py
@@ -3,6 +3,7 @@
from pandas.sparse.array import SparseArray
import pandas._sparse as splib
+
class SparseList(object):
"""
Data structure for accumulating data to be converted into a
diff --git a/pandas/sparse/panel.py b/pandas/sparse/panel.py
index b843b653ab439..bd5a2785aba2b 100644
--- a/pandas/sparse/panel.py
+++ b/pandas/sparse/panel.py
@@ -1,6 +1,6 @@
"""
-Data structures for sparse float data. Life is made simpler by dealing only with
-float64 data
+Data structures for sparse float data. Life is made simpler by dealing only
+with float64 data
"""
# pylint: disable=E1101,E1103,W0231
@@ -15,6 +15,7 @@
import pandas.core.common as com
+
class SparsePanelAxis(object):
def __init__(self, cache_field, frame_attr):
@@ -97,7 +98,7 @@ def __init__(self, frames, items=None, major_axis=None, minor_axis=None,
self.major_axis = major_axis
self.minor_axis = minor_axis
- def _consolidate_inplace(self): # pragma: no cover
+ def _consolidate_inplace(self): # pragma: no cover
# do nothing when DataFrame calls this method
pass
@@ -135,6 +136,7 @@ def values(self):
# need a special property for items to make the field assignable
_items = None
+
def _get_items(self):
return self._items
@@ -262,7 +264,7 @@ def to_frame(self, filter_observations=True):
# values are stacked column-major
indexer = minor * N + major
- counts.put(indexer, counts.take(indexer) + 1) # cuteness
+ counts.put(indexer, counts.take(indexer) + 1) # cuteness
d_values[item] = values
d_indexer[item] = indexer
@@ -445,6 +447,7 @@ def minor_xs(self, key):
SparseWidePanel = SparsePanel
+
def _convert_frames(frames, index, columns, fill_value=np.nan, kind='block'):
from pandas.core.panel import _get_combined_index
output = {}
diff --git a/pandas/sparse/series.py b/pandas/sparse/series.py
index dfe78a81c6a59..70d35607573c2 100644
--- a/pandas/sparse/series.py
+++ b/pandas/sparse/series.py
@@ -1,6 +1,6 @@
"""
-Data structures for sparse float data. Life is made simpler by dealing only with
-float64 data
+Data structures for sparse float data. Life is made simpler by dealing only
+with float64 data
"""
# pylint: disable=E1101,E1103,W0231
@@ -25,9 +25,10 @@
from pandas.util.decorators import Appender
-#-------------------------------------------------------------------------------
+#------------------------------------------------------------------------------
# Wrapper function for Series arithmetic methods
+
def _sparse_op_wrap(op, name):
"""
Wrapper function for Series arithmetic operations, to avoid
@@ -49,12 +50,13 @@ def wrapper(self, other):
sparse_index=self.sp_index,
fill_value=new_fill_value,
name=self.name)
- else: # pragma: no cover
+ else: # pragma: no cover
raise TypeError('operation with %s not supported' % type(other))
wrapper.__name__ = name
return wrapper
+
def _sparse_series_op(left, right, op, name):
left, right = left.align(right, join='outer', copy=False)
new_index = left.index
@@ -67,6 +69,7 @@ def _sparse_series_op(left, right, op, name):
return result
+
class SparseSeries(SparseArray, Series):
__array_priority__ = 15
@@ -98,7 +101,7 @@ def __new__(cls, data, index=None, sparse_index=None, kind='block',
data = Series(data)
values, sparse_index = make_sparse(data, kind=kind,
fill_value=fill_value)
- elif np.isscalar(data): # pragma: no cover
+ elif np.isscalar(data): # pragma: no cover
if index is None:
raise Exception('must pass index!')
@@ -200,7 +203,6 @@ def __setstate__(self, state):
nd_state, own_state = state
ndarray.__setstate__(self, nd_state)
-
index, fill_value, sp_index = own_state[:3]
name = None
if len(own_state) > 3:
@@ -540,5 +542,6 @@ def combine_first(self, other):
dense_combined = self.to_dense().combine_first(other)
return dense_combined.to_sparse(fill_value=self.fill_value)
+
class SparseTimeSeries(SparseSeries, TimeSeries):
pass
diff --git a/pandas/stats/common.py b/pandas/stats/common.py
index 492a7a7673397..c3034dbc390bf 100644
--- a/pandas/stats/common.py
+++ b/pandas/stats/common.py
@@ -13,14 +13,14 @@ def _get_cluster_type(cluster_type):
raise Exception('Unrecognized cluster type: %s' % cluster_type)
_CLUSTER_TYPES = {
- 0 : 'time',
- 1 : 'entity'
+ 0: 'time',
+ 1: 'entity'
}
_WINDOW_TYPES = {
- 0 : 'full_sample',
- 1 : 'rolling',
- 2 : 'expanding'
+ 0: 'full_sample',
+ 1: 'rolling',
+ 2: 'expanding'
}
@@ -37,6 +37,7 @@ def _get_window_type(window_type):
else: # pragma: no cover
raise Exception('Unrecognized window type: %s' % window_type)
+
def banner(text, width=80):
"""
diff --git a/pandas/stats/fama_macbeth.py b/pandas/stats/fama_macbeth.py
index 586642f813a91..2c8a3a65bd5ac 100644
--- a/pandas/stats/fama_macbeth.py
+++ b/pandas/stats/fama_macbeth.py
@@ -6,6 +6,7 @@
import pandas.stats.common as common
from pandas.util.decorators import cache_readonly
+
def fama_macbeth(**kwargs):
"""Runs Fama-MacBeth regression.
@@ -24,6 +25,7 @@ def fama_macbeth(**kwargs):
return klass(**kwargs)
+
class FamaMacBeth(object):
def __init__(self, y, x, intercept=True, nw_lags=None,
nw_lags_beta=None,
@@ -79,16 +81,16 @@ def t_stat(self):
@cache_readonly
def _results(self):
return {
- 'mean_beta' : self._mean_beta_raw,
- 'std_beta' : self._std_beta_raw,
- 't_stat' : self._t_stat_raw,
+ 'mean_beta': self._mean_beta_raw,
+ 'std_beta': self._std_beta_raw,
+ 't_stat': self._t_stat_raw,
}
@cache_readonly
def _coef_table(self):
buffer = StringIO()
buffer.write('%13s %13s %13s %13s %13s %13s\n' %
- ('Variable','Beta', 'Std Err','t-stat','CI 2.5%','CI 97.5%'))
+ ('Variable', 'Beta', 'Std Err', 't-stat', 'CI 2.5%', 'CI 97.5%'))
template = '%13s %13.4f %13.4f %13.2f %13.4f %13.4f\n'
for i, name in enumerate(self._cols):
@@ -128,13 +130,14 @@ def summary(self):
--------------------------------End of Summary---------------------------------
"""
params = {
- 'formulaRHS' : ' + '.join(self._cols),
- 'nu' : len(self._beta_raw),
- 'coefTable' : self._coef_table,
+ 'formulaRHS': ' + '.join(self._cols),
+ 'nu': len(self._beta_raw),
+ 'coefTable': self._coef_table,
}
return template % params
+
class MovingFamaMacBeth(FamaMacBeth):
def __init__(self, y, x, window_type='rolling', window=10,
intercept=True, nw_lags=None, nw_lags_beta=None,
@@ -197,11 +200,12 @@ def _result_index(self):
@cache_readonly
def _results(self):
return {
- 'mean_beta' : self._mean_beta_raw[-1],
- 'std_beta' : self._std_beta_raw[-1],
- 't_stat' : self._t_stat_raw[-1],
+ 'mean_beta': self._mean_beta_raw[-1],
+ 'std_beta': self._std_beta_raw[-1],
+ 't_stat': self._t_stat_raw[-1],
}
+
def _calc_t_stat(beta, nw_lags_beta):
N = len(beta)
B = beta - beta.mean(0)
diff --git a/pandas/stats/interface.py b/pandas/stats/interface.py
index 603d3b8289226..ff87aa1c9af26 100644
--- a/pandas/stats/interface.py
+++ b/pandas/stats/interface.py
@@ -3,6 +3,7 @@
from pandas.stats.plm import PanelOLS, MovingPanelOLS, NonPooledPanelOLS
import pandas.stats.common as common
+
def ols(**kwargs):
"""Returns the appropriate OLS object depending on whether you need
simple or panel OLS, and a full-sample or rolling/expanding OLS.
diff --git a/pandas/stats/math.py b/pandas/stats/math.py
index c048435493c13..1b926fa5ee7c0 100644
--- a/pandas/stats/math.py
+++ b/pandas/stats/math.py
@@ -6,6 +6,7 @@
import numpy as np
import numpy.linalg as linalg
+
def rank(X, cond=1.0e-12):
"""
Return the rank of a matrix X based on its generalized inverse,
@@ -20,6 +21,7 @@ def rank(X, cond=1.0e-12):
else:
return int(not np.alltrue(np.equal(X, 0.)))
+
def solve(a, b):
"""Returns the solution of A X = B."""
try:
@@ -27,6 +29,7 @@ def solve(a, b):
except linalg.LinAlgError:
return np.dot(linalg.pinv(a), b)
+
def inv(a):
"""Returns the inverse of A."""
try:
@@ -34,10 +37,12 @@ def inv(a):
except linalg.LinAlgError:
return np.linalg.pinv(a)
+
def is_psd(m):
eigvals = linalg.eigvals(m)
return np.isreal(eigvals).all() and (eigvals >= 0).all()
+
def newey_west(m, max_lags, nobs, df, nw_overlap=False):
"""
Compute Newey-West adjusted covariance matrix, taking into account
@@ -84,6 +89,7 @@ def newey_west(m, max_lags, nobs, df, nw_overlap=False):
return Xeps
+
def calc_F(R, r, beta, var_beta, nobs, df):
"""
Computes the standard F-test statistic for linear restriction
@@ -120,4 +126,3 @@ def calc_F(R, r, beta, var_beta, nobs, df):
p_value = 1 - f.cdf(F, q, nobs - df)
return F, (q, nobs - df), p_value
-
diff --git a/pandas/stats/misc.py b/pandas/stats/misc.py
index 7fc2892ad3c2f..e81319cb79c94 100644
--- a/pandas/stats/misc.py
+++ b/pandas/stats/misc.py
@@ -6,7 +6,7 @@
def zscore(series):
- return (series - series.mean()) / np.std(series, ddof = 0)
+ return (series - series.mean()) / np.std(series, ddof=0)
def correl_ts(frame1, frame2):
@@ -36,6 +36,7 @@ def correl_ts(frame1, frame2):
return Series(results)
+
def correl_xs(frame1, frame2):
return correl_ts(frame1.T, frame2.T)
@@ -124,6 +125,7 @@ def bucket(series, k, by=None):
return DataFrame(mat, index=series.index, columns=np.arange(k) + 1)
+
def _split_quantile(arr, k):
arr = np.asarray(arr)
mask = np.isfinite(arr)
@@ -132,6 +134,7 @@ def _split_quantile(arr, k):
return np.array_split(np.arange(n)[mask].take(order), k)
+
def bucketcat(series, cats):
"""
Produce DataFrame representing quantiles of a Series
@@ -162,6 +165,7 @@ def bucketcat(series, cats):
return DataFrame(data, columns=unique_labels)
+
def bucketpanel(series, bins=None, by=None, cat=None):
"""
Bucket data by two Series to create summary panel
@@ -198,7 +202,9 @@ def bucketpanel(series, bins=None, by=None, cat=None):
xcat, ycat = cat
return _bucketpanel_cat(series, xcat, ycat)
else:
- raise Exception('must specify either values or categories to bucket by')
+ raise Exception('must specify either values or categories '
+ 'to bucket by')
+
def _bucketpanel_by(series, xby, yby, xbins, ybins):
xby = xby.reindex(series.index)
@@ -229,6 +235,7 @@ def relabel(key):
return bucketed.rename(columns=relabel)
+
def _bucketpanel_cat(series, xcat, ycat):
xlabels, xmapping = _intern(xcat)
ylabels, ymapping = _intern(ycat)
@@ -256,6 +263,7 @@ def _bucketpanel_cat(series, xcat, ycat):
return result
+
def _intern(values):
# assumed no NaN values
values = np.asarray(values)
@@ -273,6 +281,7 @@ def _uniquify(xlabels, ylabels, xbins, ybins):
return _xpiece + _ypiece
+
def _bucket_labels(series, k):
arr = np.asarray(series)
mask = np.isfinite(arr)
diff --git a/pandas/stats/moments.py b/pandas/stats/moments.py
index cfdc4aa8a23ab..b805a9dca128c 100644
--- a/pandas/stats/moments.py
+++ b/pandas/stats/moments.py
@@ -26,7 +26,7 @@
'expanding_skew', 'expanding_kurt', 'expanding_quantile',
'expanding_median', 'expanding_apply', 'expanding_corr_pairwise']
-#-------------------------------------------------------------------------------
+#------------------------------------------------------------------------------
# Docs
_doc_template = """
@@ -73,8 +73,8 @@
Either center of mass or span must be specified
EWMA is sometimes specified using a "span" parameter s, we have have that the
-decay parameter \alpha is related to the span as :math:`\alpha = 1 - 2 / (s + 1)
-= c / (1 + c)`
+decay parameter \alpha is related to the span as
+:math:`\alpha = 1 - 2 / (s + 1) = c / (1 + c)`
where c is the center of mass. Given a span, the associated center of mass is
:math:`c = (s - 1) / 2`
@@ -122,6 +122,8 @@
_bias_doc = r"""bias : boolean, default False
Use a standard estimation bias correction
"""
+
+
def rolling_count(arg, window, freq=None, time_rule=None):
"""
Rolling count of number of non-NaN observations inside provided window.
@@ -151,6 +153,7 @@ def rolling_count(arg, window, freq=None, time_rule=None):
return return_hook(result)
+
@Substitution("Unbiased moving covariance", _binary_arg_flex, _flex_retval)
@Appender(_doc_template)
def rolling_cov(arg1, arg2, window, min_periods=None, time_rule=None):
@@ -161,16 +164,18 @@ def _get_cov(X, Y):
return (mean(X * Y) - mean(X) * mean(Y)) * bias_adj
return _flex_binary_moment(arg1, arg2, _get_cov)
+
@Substitution("Moving sample correlation", _binary_arg_flex, _flex_retval)
@Appender(_doc_template)
def rolling_corr(arg1, arg2, window, min_periods=None, time_rule=None):
def _get_corr(a, b):
num = rolling_cov(a, b, window, min_periods, time_rule)
- den = (rolling_std(a, window, min_periods, time_rule) *
+ den = (rolling_std(a, window, min_periods, time_rule) *
rolling_std(b, window, min_periods, time_rule))
return num / den
return _flex_binary_moment(arg1, arg2, _get_corr)
+
def _flex_binary_moment(arg1, arg2, f):
if isinstance(arg1, np.ndarray) and isinstance(arg2, np.ndarray):
X, Y = _prep_binary(arg1, arg2)
@@ -197,6 +202,7 @@ def _flex_binary_moment(arg1, arg2, f):
else:
return _flex_binary_moment(arg2, arg1, f)
+
def rolling_corr_pairwise(df, window, min_periods=None):
"""
Computes pairwise rolling correlation matrices as Panel whose items are
@@ -226,6 +232,7 @@ def rolling_corr_pairwise(df, window, min_periods=None):
return Panel.from_dict(all_results).swapaxes('items', 'major')
+
def _rolling_moment(arg, window, func, minp, axis=0, freq=None,
time_rule=None, **kwargs):
"""
@@ -255,6 +262,7 @@ def _rolling_moment(arg, window, func, minp, axis=0, freq=None,
return return_hook(result)
+
def _process_data_structure(arg, kill_inf=True):
if isinstance(arg, DataFrame):
return_hook = lambda v: type(arg)(v, index=arg.index,
@@ -276,9 +284,10 @@ def _process_data_structure(arg, kill_inf=True):
return return_hook, values
-#-------------------------------------------------------------------------------
+#------------------------------------------------------------------------------
# Exponential moving moments
+
def _get_center_of_mass(com, span):
if span is not None:
if com is not None:
@@ -292,6 +301,7 @@ def _get_center_of_mass(com, span):
return float(com)
+
@Substitution("Exponentially-weighted moving average", _unary_arg, "")
@Appender(_ewm_doc)
def ewma(arg, com=None, span=None, min_periods=0, freq=None, time_rule=None,
@@ -309,10 +319,12 @@ def _ewma(v):
output = np.apply_along_axis(_ewma, 0, values)
return return_hook(output)
+
def _first_valid_index(arr):
# argmax scans from left
return notnull(arr).argmax() if len(arr) else 0
+
@Substitution("Exponentially-weighted moving variance", _unary_arg, _bias_doc)
@Appender(_ewm_doc)
def ewmvar(arg, com=None, span=None, min_periods=0, bias=False,
@@ -328,6 +340,7 @@ def ewmvar(arg, com=None, span=None, min_periods=0, bias=False,
return result
+
@Substitution("Exponentially-weighted moving std", _unary_arg, _bias_doc)
@Appender(_ewm_doc)
def ewmstd(arg, com=None, span=None, min_periods=0, bias=False,
@@ -338,6 +351,7 @@ def ewmstd(arg, com=None, span=None, min_periods=0, bias=False,
ewmvol = ewmstd
+
@Substitution("Exponentially-weighted moving covariance", _binary_arg, "")
@Appender(_ewm_doc)
def ewmcov(arg1, arg2, com=None, span=None, min_periods=0, bias=False,
@@ -349,13 +363,14 @@ def ewmcov(arg1, arg2, com=None, span=None, min_periods=0, bias=False,
mean = lambda x: ewma(x, com=com, span=span, min_periods=min_periods)
- result = (mean(X*Y) - mean(X) * mean(Y))
+ result = (mean(X * Y) - mean(X) * mean(Y))
com = _get_center_of_mass(com, span)
if not bias:
result *= (1.0 + 2.0 * com) / (2.0 * com)
return result
+
@Substitution("Exponentially-weighted moving " "correlation", _binary_arg, "")
@Appender(_ewm_doc)
def ewmcorr(arg1, arg2, com=None, span=None, min_periods=0,
@@ -368,7 +383,7 @@ def ewmcorr(arg1, arg2, com=None, span=None, min_periods=0,
mean = lambda x: ewma(x, com=com, span=span, min_periods=min_periods)
var = lambda x: ewmvar(x, com=com, span=span, min_periods=min_periods,
bias=True)
- return (mean(X*Y) - mean(X)*mean(Y)) / _zsqrt(var(X) * var(Y))
+ return (mean(X * Y) - mean(X) * mean(Y)) / _zsqrt(var(X) * var(Y))
def _zsqrt(x):
@@ -384,6 +399,7 @@ def _zsqrt(x):
return result
+
def _prep_binary(arg1, arg2):
if not isinstance(arg2, type(arg1)):
raise Exception('Input arrays must be of the same type!')
@@ -397,6 +413,7 @@ def _prep_binary(arg1, arg2):
#----------------------------------------------------------------------
# Python interface to Cython functions
+
def _conv_timerule(arg, freq, time_rule):
if time_rule is not None:
import warnings
@@ -412,6 +429,7 @@ def _conv_timerule(arg, freq, time_rule):
return arg
+
def _require_min_periods(p):
def _check_func(minp, window):
if minp is None:
@@ -420,12 +438,14 @@ def _check_func(minp, window):
return max(p, minp)
return _check_func
+
def _use_window(minp, window):
if minp is None:
return window
else:
return minp
+
def _rolling_func(func, desc, check_minp=_use_window):
@Substitution(desc, _unary_arg, _type_of_input)
@Appender(_doc_template)
@@ -455,6 +475,7 @@ def call_cython(arg, window, minp, **kwds):
rolling_kurt = _rolling_func(lib.roll_kurt, 'Unbiased moving kurtosis',
check_minp=_require_min_periods(4))
+
def rolling_quantile(arg, window, quantile, min_periods=None, freq=None,
time_rule=None):
"""Moving quantile
@@ -480,6 +501,7 @@ def call_cython(arg, window, minp):
return _rolling_moment(arg, window, call_cython, min_periods,
freq=freq, time_rule=time_rule)
+
def rolling_apply(arg, window, func, min_periods=None, freq=None,
time_rule=None):
"""Generic moving function application
diff --git a/pandas/stats/ols.py b/pandas/stats/ols.py
index 0192dced6371a..d19898990022d 100644
--- a/pandas/stats/ols.py
+++ b/pandas/stats/ols.py
@@ -21,6 +21,7 @@
_FP_ERR = 1e-8
+
class OLS(object):
"""
Runs a full sample ordinary least squares regression.
@@ -221,7 +222,7 @@ def f_test(self, hypothesis):
eqs = hypothesis.split(',')
elif isinstance(hypothesis, list):
eqs = hypothesis
- else: # pragma: no cover
+ else: # pragma: no cover
raise Exception('hypothesis must be either string or list')
for equation in eqs:
row = np.zeros(len(x_names))
@@ -438,7 +439,7 @@ def predict(self, beta=None, x=None, fill_value=None,
else:
x = x.fillna(value=fill_value, method=fill_method, axis=axis)
if isinstance(x, Series):
- x = DataFrame({'x' : x})
+ x = DataFrame({'x': x})
if self._intercept:
x['intercept'] = 1.
@@ -500,10 +501,10 @@ def summary_as_matrix(self):
"""Returns the formatted results of the OLS as a DataFrame."""
results = self._results
beta = results['beta']
- data = {'beta' : results['beta'],
- 't-stat' : results['t_stat'],
- 'p-value' : results['p_value'],
- 'std err' : results['std_err']}
+ data = {'beta': results['beta'],
+ 't-stat': results['t_stat'],
+ 'p-value': results['p_value'],
+ 'std err': results['std_err']}
return DataFrame(data, beta.index).T
@cache_readonly
@@ -538,7 +539,7 @@ def summary(self):
f_stat = results['f_stat']
- bracketed = ['<%s>' %str(c) for c in results['beta'].index]
+ bracketed = ['<%s>' % str(c) for c in results['beta'].index]
formula = StringIO()
formula.write(bracketed[0])
@@ -554,21 +555,21 @@ def summary(self):
formula.write(' + ' + coef)
params = {
- 'bannerTop' : scom.banner('Summary of Regression Analysis'),
- 'bannerCoef' : scom.banner('Summary of Estimated Coefficients'),
- 'bannerEnd' : scom.banner('End of Summary'),
- 'formula' : formula.getvalue(),
- 'r2' : results['r2'],
- 'r2_adj' : results['r2_adj'],
- 'nobs' : results['nobs'],
- 'df' : results['df'],
- 'df_model' : results['df_model'],
- 'df_resid' : results['df_resid'],
- 'coef_table' : coef_table,
- 'rmse' : results['rmse'],
- 'f_stat' : f_stat['f-stat'],
- 'f_stat_shape' : '(%d, %d)' % (f_stat['DF X'], f_stat['DF Resid']),
- 'f_stat_p_value' : f_stat['p-value'],
+ 'bannerTop': scom.banner('Summary of Regression Analysis'),
+ 'bannerCoef': scom.banner('Summary of Estimated Coefficients'),
+ 'bannerEnd': scom.banner('End of Summary'),
+ 'formula': formula.getvalue(),
+ 'r2': results['r2'],
+ 'r2_adj': results['r2_adj'],
+ 'nobs': results['nobs'],
+ 'df': results['df'],
+ 'df_model': results['df_model'],
+ 'df_resid': results['df_resid'],
+ 'coef_table': coef_table,
+ 'rmse': results['rmse'],
+ 'f_stat': f_stat['f-stat'],
+ 'f_stat_shape': '(%d, %d)' % (f_stat['DF X'], f_stat['DF Resid']),
+ 'f_stat_p_value': f_stat['p-value'],
}
return template % params
@@ -576,7 +577,6 @@ def summary(self):
def __repr__(self):
return self.summary
-
@cache_readonly
def _time_obs_count(self):
# XXX
@@ -630,7 +630,7 @@ def _set_window(self, window_type, window, min_periods):
self._window = int(window)
self._min_periods = min_periods
-#-------------------------------------------------------------------------------
+#------------------------------------------------------------------------------
# "Public" results
@cache_readonly
@@ -745,7 +745,7 @@ def y_predict(self):
return Series(self._y_predict_raw[self._valid_obs_labels],
index=self._result_index)
-#-------------------------------------------------------------------------------
+#------------------------------------------------------------------------------
# "raw" attributes, calculations
@property
@@ -833,9 +833,10 @@ def _cum_xx(self, x):
slicer = lambda df, dt: df.truncate(dt, dt).values
if not self._panel_model:
_get_index = x.index.get_loc
+
def slicer(df, dt):
i = _get_index(dt)
- return df.values[i:i+1, :]
+ return df.values[i:i + 1, :]
last = np.zeros((K, K))
@@ -858,9 +859,10 @@ def _cum_xy(self, x, y):
x_slicer = lambda df, dt: df.truncate(dt, dt).values
if not self._panel_model:
_get_index = x.index.get_loc
+
def x_slicer(df, dt):
i = _get_index(dt)
- return df.values[i:i+1]
+ return df.values[i:i + 1]
_y_get_index = y.index.get_loc
_values = y.values
@@ -871,7 +873,7 @@ def y_slicer(df, dt):
else:
def y_slicer(df, dt):
i = _y_get_index(dt)
- return _values[i:i+1]
+ return _values[i:i + 1]
last = np.zeros(len(x.columns))
for i, date in enumerate(dates):
@@ -996,7 +998,7 @@ def _resid_stats(self):
after=date))
weights_slice = weights.truncate(prior_date, date)
demeaned = Y_slice - np.average(Y_slice, weights=weights_slice)
- SS_total = (weights_slice*demeaned**2).sum()
+ SS_total = (weights_slice * demeaned ** 2).sum()
else:
SS_total = ((Y_slice - Y_slice.mean()) ** 2).sum()
@@ -1008,9 +1010,9 @@ def _resid_stats(self):
uncentered_sst.append(SST_uncentered)
return {
- 'sse' : np.array(sse),
- 'centered_tss' : np.array(sst),
- 'uncentered_tss' : np.array(uncentered_sst),
+ 'sse': np.array(sse),
+ 'centered_tss': np.array(sst),
+ 'uncentered_tss': np.array(uncentered_sst),
}
@cache_readonly
@@ -1166,7 +1168,7 @@ def _results(self):
value = value[self.beta.index[-1]]
elif isinstance(value, DataFrame):
value = value.xs(self.beta.index[-1])
- else: # pragma: no cover
+ else: # pragma: no cover
raise Exception('Problem retrieving %s' % result)
results[result] = value
@@ -1226,6 +1228,7 @@ def _enough_obs(self):
return self._nobs_raw >= max(self._min_periods,
len(self._x.columns) + 1)
+
def _safe_update(d, other):
"""
Combine dictionaries with non-overlapping keys
@@ -1236,6 +1239,7 @@ def _safe_update(d, other):
d[k] = v
+
def _filter_data(lhs, rhs, weights=None):
"""
Cleans the input for single OLS.
@@ -1257,7 +1261,7 @@ def _filter_data(lhs, rhs, weights=None):
lhs = Series(lhs, index=rhs.index)
rhs = _combine_rhs(rhs)
- lhs = DataFrame({'__y__' : lhs}, dtype=float)
+ lhs = DataFrame({'__y__': lhs}, dtype=float)
pre_filt_rhs = rhs.dropna(how='any')
combined = rhs.join(lhs, how='outer')
@@ -1294,12 +1298,12 @@ def _combine_rhs(rhs):
elif isinstance(rhs, dict):
for name, value in rhs.iteritems():
if isinstance(value, Series):
- _safe_update(series, {name : value})
+ _safe_update(series, {name: value})
elif isinstance(value, (dict, DataFrame)):
_safe_update(series, value)
- else: # pragma: no cover
+ else: # pragma: no cover
raise Exception('Invalid RHS data type: %s' % type(value))
- else: # pragma: no cover
+ else: # pragma: no cover
raise Exception('Invalid RHS type: %s' % type(rhs))
if not isinstance(series, DataFrame):
@@ -1311,7 +1315,7 @@ def _combine_rhs(rhs):
# MovingOLS and MovingPanelOLS
def _y_converter(y):
y = y.values.squeeze()
- if y.ndim == 0: # pragma: no cover
+ if y.ndim == 0: # pragma: no cover
return np.array([y])
else:
return y
@@ -1327,4 +1331,3 @@ def f_stat_to_dict(result):
result['p-value'] = p_value
return result
-
diff --git a/pandas/stats/plm.py b/pandas/stats/plm.py
index 7b6f85b12b5db..7dde37822c02b 100644
--- a/pandas/stats/plm.py
+++ b/pandas/stats/plm.py
@@ -20,6 +20,7 @@
import pandas.stats.math as math
from pandas.util.decorators import cache_readonly
+
class PanelOLS(OLS):
"""Implements panel OLS.
@@ -54,7 +55,7 @@ def __init__(self, y, x, weights=None, intercept=True, nw_lags=None,
self._T = len(self._index)
def log(self, msg):
- if self._verbose: # pragma: no cover
+ if self._verbose: # pragma: no cover
print msg
def _prepare_data(self):
@@ -268,7 +269,7 @@ def _add_categorical_dummies(self, panel, cat_mappings):
else:
to_exclude = mapped_name = dummies.columns[0]
- if mapped_name not in dummies.columns: # pragma: no cover
+ if mapped_name not in dummies.columns: # pragma: no cover
raise Exception('%s not in %s' % (to_exclude,
dummies.columns))
@@ -337,7 +338,7 @@ def _r2_raw(self):
if self._use_centered_tss:
SST = ((Y - np.mean(Y)) ** 2).sum()
else:
- SST = (Y**2).sum()
+ SST = (Y ** 2).sum()
return 1 - SSE / SST
@@ -427,6 +428,7 @@ def _time_has_obs(self):
def _nobs(self):
return len(self._y)
+
def _convertDummies(dummies, mapping):
# cleans up the names of the generated dummies
new_items = []
@@ -446,6 +448,7 @@ def _convertDummies(dummies, mapping):
return dummies
+
def _is_numeric(df):
for col in df:
if df[col].dtype.name == 'object':
@@ -453,6 +456,7 @@ def _is_numeric(df):
return True
+
def add_intercept(panel, name='intercept'):
"""
Add column of ones to input panel
@@ -471,6 +475,7 @@ def add_intercept(panel, name='intercept'):
return panel.consolidate()
+
class MovingPanelOLS(MovingOLS, PanelOLS):
"""Implements rolling/expanding panel OLS.
@@ -648,13 +653,14 @@ def _enough_obs(self):
# TODO: write unit tests for this
rank_threshold = len(self._x.columns) + 1
- if self._min_obs < rank_threshold: # pragma: no cover
+ if self._min_obs < rank_threshold: # pragma: no cover
warnings.warn('min_obs is smaller than rank of X matrix')
enough_observations = self._nobs_raw >= self._min_obs
enough_time_periods = self._window_time_obs >= self._min_periods
return enough_time_periods & enough_observations
+
def create_ols_dict(attr):
def attr_getter(self):
d = {}
@@ -666,9 +672,11 @@ def attr_getter(self):
return attr_getter
+
def create_ols_attr(attr):
return property(create_ols_dict(attr))
+
class NonPooledPanelOLS(object):
"""Implements non-pooled panel OLS.
@@ -774,6 +782,7 @@ def _var_beta_panel(y, x, beta, xx, rmse, cluster_axis,
return np.dot(xx_inv, np.dot(xox, xx_inv))
+
def _xx_time_effects(x, y):
"""
Returns X'X - (X'T) (T'T)^-1 (T'X)
@@ -790,5 +799,3 @@ def _xx_time_effects(x, y):
count = count[selector]
return xx - np.dot(xt.T / count, xt)
-
-
diff --git a/pandas/stats/var.py b/pandas/stats/var.py
index e2b5a2ce7c466..a4eb8920a3b40 100644
--- a/pandas/stats/var.py
+++ b/pandas/stats/var.py
@@ -10,6 +10,7 @@
from pandas.stats.math import inv
from pandas.stats.ols import _combine_rhs
+
class VAR(object):
"""
Estimates VAR(p) regression on multivariate time series data
@@ -164,8 +165,8 @@ def granger_causality(self):
p_value_mat = DataFrame(p_value_dict)
return {
- 'f-stat' : f_stat_mat,
- 'p-value' : p_value_mat,
+ 'f-stat': f_stat_mat,
+ 'p-value': p_value_mat,
}
@cache_readonly
@@ -226,13 +227,13 @@ def summary(self):
%(banner_end)s
"""
params = {
- 'banner_top' : common.banner('Summary of VAR'),
- 'banner_coef' : common.banner('Summary of Estimated Coefficients'),
- 'banner_end' : common.banner('End of Summary'),
- 'coef_table' : self.beta,
- 'aic' : self.aic,
- 'bic' : self.bic,
- 'nobs' : self._nobs,
+ 'banner_top': common.banner('Summary of VAR'),
+ 'banner_coef': common.banner('Summary of Estimated Coefficients'),
+ 'banner_end': common.banner('End of Summary'),
+ 'coef_table': self.beta,
+ 'aic': self.aic,
+ 'bic': self.bic,
+ 'nobs': self._nobs,
}
return template % params
@@ -410,8 +411,8 @@ def _ic(self):
k = self._p * (self._k * self._p + 1)
n = self._nobs * self._k
- return {'aic' : 2 * k + n * np.log(RSS / n),
- 'bic' : n * np.log(RSS / n) + k * np.log(n)}
+ return {'aic': 2 * k + n * np.log(RSS / n),
+ 'bic': n * np.log(RSS / n) + k * np.log(n)}
@cache_readonly
def _k(self):
@@ -478,6 +479,7 @@ def _sigma(self):
def __repr__(self):
return self.summary
+
def lag_select(data, max_lags=5, ic=None):
"""
Select number of lags based on a variety of information criteria
@@ -496,6 +498,7 @@ def lag_select(data, max_lags=5, ic=None):
"""
pass
+
class PanelVAR(VAR):
"""
Performs Vector Autoregression on panel data.
@@ -567,14 +570,17 @@ def _prep_panel_data(data):
return Panel.fromDict(data)
+
def _drop_incomplete_rows(array):
mask = np.isfinite(array).all(1)
indices = np.arange(len(array))[mask]
return array.take(indices, 0)
+
def _make_param_name(lag, name):
return 'L%d.%s' % (lag, name)
+
def chain_dot(*matrices):
"""
Returns the dot product of the given matrices.
diff --git a/pandas/tools/describe.py b/pandas/tools/describe.py
index 43e3051da468c..eca5a800b3c6c 100644
--- a/pandas/tools/describe.py
+++ b/pandas/tools/describe.py
@@ -1,5 +1,6 @@
from pandas.core.series import Series
+
def value_range(df):
"""
Return the minimum and maximum of a dataframe in a series object
diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py
index 4a50016c39927..d92ed1cb01c42 100644
--- a/pandas/tools/merge.py
+++ b/pandas/tools/merge.py
@@ -21,6 +21,7 @@
import pandas.lib as lib
+
@Substitution('\nleft : DataFrame')
@Appender(_merge_doc, indents=0)
def merge(left, right, how='inner', on=None, left_on=None, right_on=None,
@@ -145,6 +146,7 @@ def _merger(x, y):
# TODO: transformations??
# TODO: only copy DataFrames when modification necessary
+
class _MergeOperation(object):
"""
Perform a database (SQL) merge operation between two DataFrame objects
@@ -182,7 +184,8 @@ def get_result(self):
# this is a bit kludgy
ldata, rdata = self._get_merge_data()
- # TODO: more efficiently handle group keys to avoid extra consolidation!
+ # TODO: more efficiently handle group keys to avoid extra
+ # consolidation!
join_op = _BlockJoinOperation([ldata, rdata], join_index,
[left_indexer, right_indexer], axis=1,
copy=self.copy)
@@ -427,7 +430,7 @@ def _get_join_indexers(left_keys, right_keys, sort=False, how='inner'):
for x in group_sizes:
max_groups *= long(x)
- if max_groups > 2**63: # pragma: no cover
+ if max_groups > 2 ** 63: # pragma: no cover
raise MergeError('Combinatorial explosion! (boom)')
left_group_key, right_group_key, max_groups = \
@@ -437,7 +440,6 @@ def _get_join_indexers(left_keys, right_keys, sort=False, how='inner'):
return join_func(left_group_key, right_group_key, max_groups)
-
class _OrderedMerge(_MergeOperation):
def __init__(self, left, right, on=None, by=None, left_on=None,
@@ -452,10 +454,9 @@ def __init__(self, left, right, on=None, by=None, left_on=None,
left_index=left_index,
right_index=right_index,
how='outer', suffixes=suffixes,
- sort=True # sorts when factorizing
+ sort=True # sorts when factorizing
)
-
def get_result(self):
join_index, left_indexer, right_indexer = self._get_join_info()
@@ -503,6 +504,7 @@ def _get_multiindex_indexer(join_keys, index, sort=False):
return left_indexer, right_indexer
+
def _get_single_indexer(join_key, index, sort=False):
left_key, right_key, count = _factorize_keys(join_key, index, sort=sort)
@@ -513,6 +515,7 @@ def _get_single_indexer(join_key, index, sort=False):
return left_indexer, right_indexer
+
def _left_join_on_index(left_ax, right_ax, join_keys, sort=False):
join_index = left_ax
left_indexer = None
@@ -544,10 +547,10 @@ def _right_outer_join(x, y, max_groups):
return left_indexer, right_indexer
_join_functions = {
- 'inner' : lib.inner_join,
- 'left' : lib.left_outer_join,
- 'right' : _right_outer_join,
- 'outer' : lib.full_outer_join,
+ 'inner': lib.inner_join,
+ 'left': lib.left_outer_join,
+ 'right': _right_outer_join,
+ 'outer': lib.full_outer_join,
}
@@ -584,6 +587,7 @@ def _factorize_keys(lk, rk, sort=True):
return llab, rlab, count
+
def _sort_labels(uniques, left, right):
if not isinstance(uniques, np.ndarray):
# tuplesafe
@@ -602,6 +606,7 @@ def _sort_labels(uniques, left, right):
return new_left, new_right
+
class _BlockJoinOperation(object):
"""
BlockJoinOperation made generic for N DataFrames
@@ -713,7 +718,6 @@ def _merge_blocks(self, merge_chunks):
return make_block(out, new_block_items, self.result_items)
-
class _JoinUnit(object):
"""
Blocks plus indexer
@@ -762,6 +766,7 @@ def reindex_block(self, block, axis, ref_items, copy=True):
result.ref_items = ref_items
return result
+
def _may_need_upcasting(blocks):
for block in blocks:
if isinstance(block, (IntBlock, BoolBlock)):
@@ -788,18 +793,21 @@ def _upcast_blocks(blocks):
# use any ref_items
return _consolidate(new_blocks, newb.ref_items)
+
def _get_all_block_kinds(blockmaps):
kinds = set()
for mapping in blockmaps:
kinds |= set(mapping)
return kinds
+
def _get_merge_block_kinds(blockmaps):
kinds = set()
for _, mapping in blockmaps:
kinds |= set(mapping)
return kinds
+
def _get_block_dtype(blocks):
if len(blocks) == 0:
return object
@@ -816,6 +824,7 @@ def _get_block_dtype(blocks):
#----------------------------------------------------------------------
# Concatenate DataFrame objects
+
def concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False,
keys=None, levels=None, names=None, verify_integrity=False):
"""
@@ -884,8 +893,8 @@ def __init__(self, objs, axis=0, join='outer', join_axes=None,
elif join == 'inner':
self.intersect = True
else: # pragma: no cover
- raise ValueError('Only can inner (intersect) or outer (union) join '
- 'the other axis')
+ raise ValueError('Only can inner (intersect) or outer (union) '
+ 'join the other axis')
if isinstance(objs, dict):
if keys is None:
@@ -1149,6 +1158,7 @@ def _maybe_check_integrity(self, concat_index):
def _concat_indexes(indexes):
return indexes[0].append(indexes[1:])
+
def _make_concat_multiindex(indexes, keys, levels=None, names=None):
if ((levels is None and isinstance(keys[0], tuple)) or
(levels is not None and len(levels) > 1)):
@@ -1250,6 +1260,5 @@ def _should_fill(lname, rname):
return lname == rname
-
def _any(x):
return x is not None and len(x) > 0 and any([y is not None for y in x])
diff --git a/pandas/tools/pivot.py b/pandas/tools/pivot.py
index 146cba82788e9..bed1fe2212746 100644
--- a/pandas/tools/pivot.py
+++ b/pandas/tools/pivot.py
@@ -126,6 +126,7 @@ def pivot_table(data, values=None, rows=None, cols=None, aggfunc='mean',
DataFrame.pivot_table = pivot_table
+
def _add_margins(table, data, values, rows=None, cols=None, aggfunc=np.mean):
grand_margin = {}
for k, v in data[values].iteritems():
@@ -142,7 +143,6 @@ def _add_margins(table, data, values, rows=None, cols=None, aggfunc=np.mean):
table_pieces = []
margin_keys = []
-
def _all_key(key):
return (key, 'All') + ('',) * (len(cols) - 1)
@@ -199,6 +199,7 @@ def _all_key(key):
return result
+
def _convert_by(by):
if by is None:
by = []
@@ -209,6 +210,7 @@ def _convert_by(by):
by = list(by)
return by
+
def crosstab(rows, cols, values=None, rownames=None, colnames=None,
aggfunc=None, margins=False):
"""
@@ -284,6 +286,7 @@ def crosstab(rows, cols, values=None, rownames=None, colnames=None,
aggfunc=aggfunc, margins=margins)
return table
+
def _get_names(arrs, names, prefix='row'):
if names is None:
names = []
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index 34754a23ba5b4..a2cc21e23c47b 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -15,14 +15,15 @@
from pandas.tseries.frequencies import get_period_alias, get_base_alias
from pandas.tseries.offsets import DateOffset
-try: # mpl optional
+try: # mpl optional
import pandas.tseries.converter as conv
conv.register()
except ImportError:
pass
+
def _get_standard_kind(kind):
- return {'density' : 'kde'}.get(kind, kind)
+ return {'density': 'kde'}.get(kind, kind)
def scatter_matrix(frame, alpha=0.5, figsize=None, ax=None, grid=False,
@@ -120,8 +121,8 @@ def scatter_matrix(frame, alpha=0.5, figsize=None, ax=None, grid=False,
# ax.grid(b=grid)
axes[0, 0].yaxis.set_visible(False)
- axes[n-1, n-1].xaxis.set_visible(False)
- axes[n-1, n-1].yaxis.set_visible(False)
+ axes[n - 1, n - 1].xaxis.set_visible(False)
+ axes[n - 1, n - 1].yaxis.set_visible(False)
axes[0, n - 1].yaxis.tick_right()
for ax in axes.flat:
@@ -135,10 +136,12 @@ def _gca():
import matplotlib.pyplot as plt
return plt.gca()
+
def _gcf():
import matplotlib.pyplot as plt
return plt.gcf()
+
def _get_marker_compat(marker):
import matplotlib.lines as mlines
import matplotlib as mpl
@@ -148,6 +151,7 @@ def _get_marker_compat(marker):
return 'o'
return marker
+
def radviz(frame, class_column, ax=None, **kwds):
"""RadViz - a multivariate data visualization algorithm
@@ -232,6 +236,7 @@ def normalize(series):
ax.axis('equal')
return ax
+
def andrews_curves(data, class_column, ax=None, samples=200):
"""
Parameters:
@@ -243,6 +248,7 @@ def andrews_curves(data, class_column, ax=None, samples=200):
from math import sqrt, pi, sin, cos
import matplotlib.pyplot as plt
import random
+
def function(amplitudes):
def f(x):
x1 = amplitudes[0]
@@ -256,6 +262,7 @@ def f(x):
result += amplitudes[-1] * sin(harmonic * x)
return result
return f
+
def random_color(column):
random.seed(column)
return [random.random() for _ in range(3)]
@@ -280,6 +287,7 @@ def random_color(column):
ax.grid()
return ax
+
def bootstrap_plot(series, fig=None, size=50, samples=500, **kwds):
"""Bootstrap plot.
@@ -289,7 +297,8 @@ def bootstrap_plot(series, fig=None, size=50, samples=500, **kwds):
fig: matplotlib figure object, optional
size: number of data points to consider during each sampling
samples: number of times the bootstrap procedure is performed
- kwds: optional keyword arguments for plotting commands, must be accepted by both hist and plot
+ kwds: optional keyword arguments for plotting commands, must be accepted
+ by both hist and plot
Returns:
--------
@@ -336,6 +345,7 @@ def bootstrap_plot(series, fig=None, size=50, samples=500, **kwds):
plt.setp(axis.get_yticklabels(), fontsize=8)
return fig
+
def parallel_coordinates(data, class_column, cols=None, ax=None, **kwds):
"""Parallel coordinates plotting.
@@ -353,6 +363,7 @@ def parallel_coordinates(data, class_column, cols=None, ax=None, **kwds):
"""
import matplotlib.pyplot as plt
import random
+
def random_color(column):
random.seed(column)
return [random.random() for _ in range(3)]
@@ -392,6 +403,7 @@ def random_color(column):
ax.grid()
return ax
+
def lag_plot(series, ax=None, **kwds):
"""Lag plot for time series.
@@ -416,6 +428,7 @@ def lag_plot(series, ax=None, **kwds):
ax.scatter(y1, y2, **kwds)
return ax
+
def autocorrelation_plot(series, ax=None):
"""Autocorrelation plot for time series.
@@ -435,23 +448,25 @@ def autocorrelation_plot(series, ax=None):
ax = plt.gca(xlim=(1, n), ylim=(-1.0, 1.0))
mean = np.mean(data)
c0 = np.sum((data - mean) ** 2) / float(n)
+
def r(h):
return ((data[:n - h] - mean) * (data[h:] - mean)).sum() / float(n) / c0
x = np.arange(n) + 1
y = map(r, x)
z95 = 1.959963984540054
z99 = 2.5758293035489004
- ax.axhline(y=z99/np.sqrt(n), linestyle='--', color='grey')
- ax.axhline(y=z95/np.sqrt(n), color='grey')
+ ax.axhline(y=z99 / np.sqrt(n), linestyle='--', color='grey')
+ ax.axhline(y=z95 / np.sqrt(n), color='grey')
ax.axhline(y=0.0, color='black')
- ax.axhline(y=-z95/np.sqrt(n), color='grey')
- ax.axhline(y=-z99/np.sqrt(n), linestyle='--', color='grey')
+ ax.axhline(y=-z95 / np.sqrt(n), color='grey')
+ ax.axhline(y=-z99 / np.sqrt(n), linestyle='--', color='grey')
ax.set_xlabel("Lag")
ax.set_ylabel("Autocorrelation")
ax.plot(x, y)
ax.grid()
return ax
+
def grouped_hist(data, column=None, by=None, ax=None, bins=50, log=False,
figsize=None, layout=None, sharex=False, sharey=False,
rot=90):
@@ -590,7 +605,7 @@ def _maybe_right_yaxis(self, ax):
orig_ax, new_ax = ax, ax.twinx()
orig_ax.right_ax, new_ax.left_ax = new_ax, orig_ax
- if len(orig_ax.get_lines()) == 0: # no data on left y
+ if len(orig_ax.get_lines()) == 0: # no data on left y
orig_ax.get_yaxis().set_visible(False)
if len(new_ax.get_lines()) == 0:
@@ -795,6 +810,7 @@ def _get_style(self, i, col_name):
return style or None
+
class KdePlot(MPLPlot):
def __init__(self, data, **kwargs):
MPLPlot.__init__(self, data, **kwargs)
@@ -830,6 +846,7 @@ def _post_plot_logic(self):
for ax in self.axes:
ax.legend(loc='best')
+
class LinePlot(MPLPlot):
def __init__(self, data, **kwargs):
@@ -861,9 +878,9 @@ def _use_dynamic_x(self):
ax = self._get_ax(0)
ax_freq = getattr(ax, 'freq', None)
- if freq is None: # convert irregular if axes has freq info
+ if freq is None: # convert irregular if axes has freq info
freq = ax_freq
- else: # do not use tsplot if irregular was plotted first
+ else: # do not use tsplot if irregular was plotted first
if (ax_freq is None) and (len(ax.get_lines()) > 0):
return False
@@ -887,6 +904,7 @@ def _make_plot(self):
x = self._get_xticks(convert_period=True)
has_colors, colors = self._get_colors()
+
def _maybe_add_color(kwargs, style, i):
if (not has_colors and
(style is None or re.match('[a-z]+', style) is None)
@@ -945,14 +963,14 @@ def to_leg_label(label, i):
return label
if isinstance(data, Series):
- ax = self._get_ax(0) #self.axes[0]
+ ax = self._get_ax(0) # self.axes[0]
style = self.style or ''
label = com.pprint_thing(self.label)
kwds = kwargs.copy()
_maybe_add_color(kwds, style, 0)
- newlines = tsplot(data, plotf, ax=ax, label=label, style=self.style,
- **kwds)
+ newlines = tsplot(data, plotf, ax=ax, label=label,
+ style=self.style, **kwds)
ax.grid(self.grid)
lines.append(newlines[0])
leg_label = to_leg_label(label, 0)
@@ -1059,7 +1077,7 @@ def _post_plot_logic(self):
class BarPlot(MPLPlot):
- _default_rot = {'bar' : 90, 'barh' : 0}
+ _default_rot = {'bar': 90, 'barh': 0}
def __init__(self, data, **kwargs):
self.stacked = kwargs.pop('stacked', False)
@@ -1088,7 +1106,7 @@ def _make_plot(self):
rects = []
labels = []
- ax = self._get_ax(0) #self.axes[0]
+ ax = self._get_ax(0) # self.axes[0]
bar_f = self.bar_f
@@ -1102,7 +1120,7 @@ def _make_plot(self):
kwds['color'] = colors[i % len(colors)]
if self.subplots:
- ax = self._get_ax(i) #self.axes[i]
+ ax = self._get_ax(i) # self.axes[i]
rect = bar_f(ax, self.ax_pos, y, 0.5, start=pos_prior, **kwds)
ax.set_title(label)
elif self.stacked:
@@ -1149,6 +1167,7 @@ def _post_plot_logic(self):
#if self.subplots and self.legend:
# self.axes[0].legend(loc='best')
+
class BoxPlot(MPLPlot):
pass
@@ -1159,9 +1178,10 @@ class HistPlot(MPLPlot):
def plot_frame(frame=None, x=None, y=None, subplots=False, sharex=True,
sharey=False, use_index=True, figsize=None, grid=False,
- legend=True, rot=None, ax=None, style=None, title=None, xlim=None,
- ylim=None, logy=False, xticks=None, yticks=None, kind='line',
- sort_columns=False, fontsize=None, secondary_y=False, **kwds):
+ legend=True, rot=None, ax=None, style=None, title=None,
+ xlim=None, ylim=None, logy=False, xticks=None, yticks=None,
+ kind='line', sort_columns=False, fontsize=None,
+ secondary_y=False, **kwds):
"""
Make line or bar plot of DataFrame's series with the index on the x-axis
@@ -1255,6 +1275,7 @@ def plot_frame(frame=None, x=None, y=None, subplots=False, sharex=True,
else:
return plot_obj.axes[0]
+
def plot_series(series, label=None, kind='line', use_index=True, rot=None,
xticks=None, yticks=None, xlim=None, ylim=None,
ax=None, style=None, grid=None, logy=False, secondary_y=False,
@@ -1330,6 +1351,7 @@ def plot_series(series, label=None, kind='line', use_index=True, rot=None,
return plot_obj.ax
+
def boxplot(data, column=None, by=None, ax=None, fontsize=None,
rot=0, grid=True, figsize=None, **kwds):
"""
@@ -1354,7 +1376,7 @@ def boxplot(data, column=None, by=None, ax=None, fontsize=None,
"""
from pandas import Series, DataFrame
if isinstance(data, Series):
- data = DataFrame({'x' : data})
+ data = DataFrame({'x': data})
column = 'x'
def plot_group(grouped, ax):
@@ -1411,6 +1433,7 @@ def plot_group(grouped, ax):
fig.subplots_adjust(bottom=0.15, top=0.9, left=0.1, right=0.9, wspace=0.2)
return ret
+
def format_date_labels(ax, rot):
# mini version of autofmt_xdate
try:
@@ -1419,7 +1442,7 @@ def format_date_labels(ax, rot):
label.set_rotation(rot)
fig = ax.get_figure()
fig.subplots_adjust(bottom=0.2)
- except Exception: # pragma: no cover
+ except Exception: # pragma: no cover
pass
@@ -1515,6 +1538,7 @@ def hist_frame(data, grid=True, xlabelsize=None, xrot=None,
return axes
+
def hist_series(self, ax=None, grid=True, xlabelsize=None, xrot=None,
ylabelsize=None, yrot=None, **kwds):
"""
@@ -1563,6 +1587,7 @@ def hist_series(self, ax=None, grid=True, xlabelsize=None, xrot=None,
return ax
+
def boxplot_frame_groupby(grouped, subplots=True, column=None, fontsize=None,
rot=0, grid=True, figsize=None, **kwds):
"""
@@ -1628,6 +1653,7 @@ def boxplot_frame_groupby(grouped, subplots=True, column=None, fontsize=None,
grid=grid, figsize=figsize, **kwds)
return ret
+
def _grouped_plot(plotf, data, column=None, by=None, numeric_only=True,
figsize=None, sharex=True, sharey=True, layout=None,
rot=0, ax=None):
@@ -1672,6 +1698,7 @@ def _grouped_plot(plotf, data, column=None, by=None, numeric_only=True,
return fig, axes
+
def _grouped_plot_by_column(plotf, data, columns=None, by=None,
numeric_only=True, grid=False,
figsize=None, ax=None):
@@ -1710,6 +1737,7 @@ def _grouped_plot_by_column(plotf, data, columns=None, by=None,
return fig, axes
+
def _get_layout(nplots):
if nplots == 1:
return (1, 1)
@@ -1729,6 +1757,7 @@ def _get_layout(nplots):
# copied from matplotlib/pyplot.py for compatibility with matplotlib < 1.0
+
def _subplots(nrows=1, ncols=1, sharex=False, sharey=False, squeeze=True,
subplot_kw=None, ax=None, secondary_y=False, data=None,
**fig_kw):
@@ -1817,7 +1846,7 @@ def _subplots(nrows=1, ncols=1, sharex=False, sharey=False, squeeze=True,
# Create empty object array to hold all axes. It's easiest to make it 1-d
# so we can just append subplots upon creation, and then
- nplots = nrows*ncols
+ nplots = nrows * ncols
axarr = np.empty(nplots, dtype=object)
def on_right(i):
@@ -1854,18 +1883,18 @@ def on_right(i):
if nplots > 1:
if sharex and nrows > 1:
for i, ax in enumerate(axarr):
- if np.ceil(float(i + 1) / ncols) < nrows: # only last row
+ if np.ceil(float(i + 1) / ncols) < nrows: # only last row
[label.set_visible(False) for label in ax.get_xticklabels()]
if sharey and ncols > 1:
for i, ax in enumerate(axarr):
- if (i % ncols) != 0: # only first column
+ if (i % ncols) != 0: # only first column
[label.set_visible(False) for label in ax.get_yticklabels()]
if squeeze:
# Reshape the array to have the final desired dimension (nrow,ncol),
# though discarding unneeded dimensions that equal 1. If we only have
# one subplot, just return it instead of a 1-element array.
- if nplots==1:
+ if nplots == 1:
axes = axarr[0]
else:
axes = axarr.reshape(nrows, ncols).squeeze()
diff --git a/pandas/tools/tile.py b/pandas/tools/tile.py
index 7a2101a967942..cc4c4192cb737 100644
--- a/pandas/tools/tile.py
+++ b/pandas/tools/tile.py
@@ -66,7 +66,7 @@ def cut(x, bins, right=True, labels=None, retbins=False, precision=3,
if not np.iterable(bins):
if np.isscalar(bins) and bins < 1:
raise ValueError("`bins` should be a positive integer.")
- try: # for array-like
+ try: # for array-like
sz = x.size
except AttributeError:
x = np.asarray(x)
@@ -79,13 +79,13 @@ def cut(x, bins, right=True, labels=None, retbins=False, precision=3,
rng = (nanops.nanmin(x), nanops.nanmax(x))
mn, mx = [mi + 0.0 for mi in rng]
- if mn == mx: # adjust end points before binning
+ if mn == mx: # adjust end points before binning
mn -= .001 * mn
mx += .001 * mx
- bins = np.linspace(mn, mx, bins+1, endpoint=True)
- else: # adjust end points after binning
- bins = np.linspace(mn, mx, bins+1, endpoint=True)
- adj = (mx - mn) * 0.001 # 0.1% of the range
+ bins = np.linspace(mn, mx, bins + 1, endpoint=True)
+ else: # adjust end points after binning
+ bins = np.linspace(mn, mx, bins + 1, endpoint=True)
+ adj = (mx - mn) * 0.001 # 0.1% of the range
if right:
bins[0] -= adj
else:
@@ -101,7 +101,6 @@ def cut(x, bins, right=True, labels=None, retbins=False, precision=3,
include_lowest=include_lowest)
-
def qcut(x, q, labels=None, retbins=False, precision=3):
"""
Quantile-based discretization function. Discretize variable into
@@ -210,6 +209,7 @@ def _format_label(x, precision=3):
else:
return str(x)
+
def _trim_zeros(x):
while len(x) > 1 and x[-1] == '0':
x = x[:-1]
diff --git a/pandas/tseries/converter.py b/pandas/tseries/converter.py
index e3a49d03f72ad..78455a5e46259 100644
--- a/pandas/tseries/converter.py
+++ b/pandas/tseries/converter.py
@@ -20,6 +20,7 @@
from pandas.tseries.frequencies import FreqGroup
from pandas.tseries.period import Period, PeriodIndex
+
def register():
units.registry[pydt.time] = TimeConverter()
units.registry[lib.Timestamp] = DatetimeConverter()
@@ -27,11 +28,13 @@ def register():
units.registry[pydt.datetime] = DatetimeConverter()
units.registry[Period] = PeriodConverter()
+
def _to_ordinalf(tm):
tot_sec = (tm.hour * 3600 + tm.minute * 60 + tm.second +
float(tm.microsecond / 1e6))
return tot_sec
+
def time2num(d):
if isinstance(d, basestring):
parsed = tools.to_datetime(d)
@@ -42,6 +45,7 @@ def time2num(d):
return _to_ordinalf(d)
return d
+
class TimeConverter(units.ConversionInterface):
@staticmethod
@@ -69,6 +73,7 @@ def axisinfo(unit, axis):
def default_units(x, axis):
return 'time'
+
### time formatter
class TimeFormatter(Formatter):
@@ -90,6 +95,7 @@ def __call__(self, x, pos=0):
### Period Conversion
+
class PeriodConverter(dates.DateConverter):
@staticmethod
@@ -106,6 +112,7 @@ def convert(values, units, axis):
return [get_datevalue(x, axis.freq) for x in values]
return values
+
def get_datevalue(date, freq):
if isinstance(date, Period):
return date.asfreq(freq).ordinal
@@ -119,9 +126,10 @@ def get_datevalue(date, freq):
raise ValueError("Unrecognizable date '%s'" % date)
HOURS_PER_DAY = 24.
-MINUTES_PER_DAY = 60.*HOURS_PER_DAY
-SECONDS_PER_DAY = 60.*MINUTES_PER_DAY
-MUSECONDS_PER_DAY = 1e6*SECONDS_PER_DAY
+MINUTES_PER_DAY = 60. * HOURS_PER_DAY
+SECONDS_PER_DAY = 60. * MINUTES_PER_DAY
+MUSECONDS_PER_DAY = 1e6 * SECONDS_PER_DAY
+
def _dt_to_float_ordinal(dt):
"""
@@ -132,6 +140,7 @@ def _dt_to_float_ordinal(dt):
base = dates.date2num(dt)
return base
+
### Datetime Conversion
class DatetimeConverter(dates.DateConverter):
@@ -184,8 +193,8 @@ def axisinfo(unit, axis):
datemin = pydt.date(2000, 1, 1)
datemax = pydt.date(2010, 1, 1)
- return units.AxisInfo( majloc=majloc, majfmt=majfmt, label='',
- default_limits=(datemin, datemax))
+ return units.AxisInfo(majloc=majloc, majfmt=majfmt, label='',
+ default_limits=(datemin, datemax))
class PandasAutoDateFormatter(dates.AutoDateFormatter):
@@ -196,23 +205,23 @@ def __init__(self, locator, tz=None, defaultfmt='%Y-%m-%d'):
if self._tz is dates.UTC:
self._tz._utcoffset = self._tz.utcoffset(None)
self.scaled = {
- 365.0 : '%Y',
- 30. : '%b %Y',
- 1.0 : '%b %d %Y',
- 1. / 24. : '%H:%M:%S',
- 1. / 24. / 3600. / 1000. : '%H:%M:%S.%f'
+ 365.0: '%Y',
+ 30.: '%b %Y',
+ 1.0: '%b %d %Y',
+ 1. / 24.: '%H:%M:%S',
+ 1. / 24. / 3600. / 1000.: '%H:%M:%S.%f'
}
def _get_fmt(self, x):
- scale = float( self._locator._get_unit() )
+ scale = float(self._locator._get_unit())
fmt = self.defaultfmt
for k in sorted(self.scaled):
- if k >= scale:
- fmt = self.scaled[k]
- break
+ if k >= scale:
+ fmt = self.scaled[k]
+ break
return fmt
@@ -221,6 +230,7 @@ def __call__(self, x, pos=0):
self._formatter = dates.DateFormatter(fmt, self._tz)
return self._formatter(x, pos)
+
class PandasAutoDateLocator(dates.AutoDateLocator):
def get_locator(self, dmin, dmax):
@@ -245,6 +255,7 @@ def get_locator(self, dmin, dmax):
def _get_unit(self):
return MilliSecondLocator.get_unit_generic(self._freq)
+
class MilliSecondLocator(dates.DateLocator):
UNIT = 1. / (24 * 3600 * 1000)
@@ -265,10 +276,12 @@ def get_unit_generic(freq):
def __call__(self):
# if no data have been set, this will tank with a ValueError
- try: dmin, dmax = self.viewlim_to_dt()
- except ValueError: return []
+ try:
+ dmin, dmax = self.viewlim_to_dt()
+ except ValueError:
+ return []
- if dmin>dmax:
+ if dmin > dmax:
dmax, dmin = dmin, dmax
delta = relativedelta(dmax, dmin)
@@ -276,13 +289,13 @@ def __call__(self):
try:
start = dmin - delta
except ValueError:
- start = _from_ordinal( 1.0 )
+ start = _from_ordinal(1.0)
try:
stop = dmax + delta
except ValueError:
# The magic number!
- stop = _from_ordinal( 3652059.9999999 )
+ stop = _from_ordinal(3652059.9999999)
nmax, nmin = dates.date2num((dmax, dmin))
@@ -306,7 +319,7 @@ def __call__(self):
freq = '%dL' % self._get_interval()
tz = self.tz.tzname(None)
- st = _from_ordinal(dates.date2num(dmin)) # strip tz
+ st = _from_ordinal(dates.date2num(dmin)) # strip tz
ed = _from_ordinal(dates.date2num(dmax))
all_dates = date_range(start=st, end=ed, freq=freq, tz=tz).asobject
@@ -328,7 +341,7 @@ def autoscale(self):
Set the view limits to include the data range.
"""
dmin, dmax = self.datalim_to_dt()
- if dmin>dmax:
+ if dmin > dmax:
dmax, dmin = dmin, dmax
delta = relativedelta(dmax, dmin)
@@ -343,7 +356,7 @@ def autoscale(self):
stop = dmax + delta
except ValueError:
# The magic number!
- stop = _from_ordinal( 3652059.9999999 )
+ stop = _from_ordinal(3652059.9999999)
dmin, dmax = self.datalim_to_dt()
@@ -357,18 +370,19 @@ def _from_ordinal(x, tz=None):
ix = int(x)
dt = datetime.fromordinal(ix)
remainder = float(x) - ix
- hour, remainder = divmod(24*remainder, 1)
- minute, remainder = divmod(60*remainder, 1)
- second, remainder = divmod(60*remainder, 1)
- microsecond = int(1e6*remainder)
- if microsecond<10: microsecond=0 # compensate for rounding errors
+ hour, remainder = divmod(24 * remainder, 1)
+ minute, remainder = divmod(60 * remainder, 1)
+ second, remainder = divmod(60 * remainder, 1)
+ microsecond = int(1e6 * remainder)
+ if microsecond < 10:
+ microsecond = 0 # compensate for rounding errors
dt = datetime(dt.year, dt.month, dt.day, int(hour), int(minute),
int(second), microsecond)
if tz is not None:
dt = dt.astimezone(tz)
if microsecond > 999990: # compensate for rounding errors
- dt += timedelta(microseconds = 1e6 - microsecond)
+ dt += timedelta(microseconds=1e6 - microsecond)
return dt
@@ -378,6 +392,7 @@ def _from_ordinal(x, tz=None):
#---- --- Locators ---
##### -------------------------------------------------------------------------
+
def _get_default_annual_spacing(nyears):
"""
Returns a default spacing between consecutive ticks for annual data.
@@ -399,6 +414,7 @@ def _get_default_annual_spacing(nyears):
(min_spacing, maj_spacing) = (factor * 20, factor * 100)
return (min_spacing, maj_spacing)
+
def period_break(dates, period):
"""
Returns the indices where the given period changes.
@@ -414,6 +430,7 @@ def period_break(dates, period):
previous = getattr(dates - 1, period)
return (current - previous).nonzero()[0]
+
def has_level_label(label_flags, vmin):
"""
Returns true if the ``label_flags`` indicate there is at least one label
@@ -429,6 +446,7 @@ def has_level_label(label_flags, vmin):
else:
return True
+
def _daily_finder(vmin, vmax, freq):
periodsperday = -1
@@ -439,7 +457,7 @@ def _daily_finder(vmin, vmax, freq):
periodsperday = 24 * 60
elif freq == FreqGroup.FR_HR:
periodsperday = 24
- else: # pragma: no cover
+ else: # pragma: no cover
raise ValueError("unexpected frequency: %s" % freq)
periodsperyear = 365 * periodsperday
periodspermonth = 28 * periodsperday
@@ -453,7 +471,7 @@ def _daily_finder(vmin, vmax, freq):
elif frequencies.get_freq_group(freq) == FreqGroup.FR_WK:
periodsperyear = 52
periodspermonth = 3
- else: # pragma: no cover
+ else: # pragma: no cover
raise ValueError("unexpected frequency")
# save this for later usage
@@ -489,7 +507,7 @@ def first_label(label_flags):
def _hour_finder(label_interval, force_year_start):
_hour = dates_.hour
- _prev_hour = (dates_-1).hour
+ _prev_hour = (dates_ - 1).hour
hour_start = (_hour - _prev_hour) != 0
info_maj[day_start] = True
info_min[hour_start & (_hour % label_interval == 0)] = True
@@ -503,7 +521,7 @@ def _hour_finder(label_interval, force_year_start):
def _minute_finder(label_interval):
hour_start = period_break(dates_, 'hour')
_minute = dates_.minute
- _prev_minute = (dates_-1).minute
+ _prev_minute = (dates_ - 1).minute
minute_start = (_minute - _prev_minute) != 0
info_maj[hour_start] = True
info_min[minute_start & (_minute % label_interval == 0)] = True
@@ -516,7 +534,7 @@ def _minute_finder(label_interval):
def _second_finder(label_interval):
minute_start = period_break(dates_, 'minute')
_second = dates_.second
- _prev_second = (dates_-1).second
+ _prev_second = (dates_ - 1).second
second_start = (_second - _prev_second) != 0
info['maj'][minute_start] = True
info['min'][second_start & (_second % label_interval == 0)] = True
@@ -748,6 +766,7 @@ def _quarterly_finder(vmin, vmax, freq):
#..............
return info
+
def _annual_finder(vmin, vmax, freq):
(vmin, vmax) = (int(vmin), int(vmax + 1))
span = vmax - vmin + 1
@@ -767,6 +786,7 @@ def _annual_finder(vmin, vmax, freq):
#..............
return info
+
def get_finder(freq):
if isinstance(freq, basestring):
freq = frequencies.get_freq(freq)
@@ -776,14 +796,15 @@ def get_finder(freq):
return _annual_finder
elif fgroup == FreqGroup.FR_QTR:
return _quarterly_finder
- elif freq ==FreqGroup.FR_MTH:
+ elif freq == FreqGroup.FR_MTH:
return _monthly_finder
elif ((freq >= FreqGroup.FR_BUS) or fgroup == FreqGroup.FR_WK):
return _daily_finder
- else: # pragma: no cover
+ else: # pragma: no cover
errmsg = "Unsupported frequency: %s" % (freq)
raise NotImplementedError(errmsg)
+
class TimeSeries_DateLocator(Locator):
"""
Locates the ticks along an axis controlled by a :class:`Series`.
@@ -839,7 +860,7 @@ def __call__(self):
vmin, vmax = vmax, vmin
if self.isdynamic:
locs = self._get_default_locs(vmin, vmax)
- else: # pragma: no cover
+ else: # pragma: no cover
base = self.base
(d, m) = divmod(vmin, base)
vmin = (d + 1) * base
@@ -921,11 +942,10 @@ def set_locs(self, locs):
if vmax < vmin:
(vmin, vmax) = (vmax, vmin)
self._set_default_format(vmin, vmax)
- #
+
def __call__(self, x, pos=0):
if self.formatdict is None:
return ''
else:
fmt = self.formatdict.pop(x, '')
return Period(ordinal=int(x), freq=self.freq).strftime(fmt)
-
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index 77f03dc4d1279..b9b2d28e1595a 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -9,6 +9,7 @@
import pandas.core.common as com
import pandas.lib as lib
+
class FreqGroup(object):
FR_ANN = 1000
FR_QTR = 2000
@@ -20,6 +21,7 @@ class FreqGroup(object):
FR_MIN = 8000
FR_SEC = 9000
+
def get_to_timestamp_base(base):
if base <= FreqGroup.FR_WK:
return FreqGroup.FR_DAY
@@ -27,18 +29,21 @@ def get_to_timestamp_base(base):
return FreqGroup.FR_SEC
return base
+
def get_freq_group(freq):
if isinstance(freq, basestring):
base, mult = get_freq_code(freq)
freq = base
return (freq // 1000) * 1000
+
def get_freq(freq):
if isinstance(freq, basestring):
base, mult = get_freq_code(freq)
freq = base
return freq
+
def get_freq_code(freqstr):
"""
@@ -93,171 +98,171 @@ def _get_freq_str(base, mult=1):
QuarterEnd, BQuarterBegin, BQuarterEnd)
_offset_map = {
- 'D' : Day(),
- 'B' : BDay(),
- 'H' : Hour(),
- 'T' : Minute(),
- 'S' : Second(),
- 'L' : Milli(),
- 'U' : Micro(),
- None : None,
+ 'D': Day(),
+ 'B': BDay(),
+ 'H': Hour(),
+ 'T': Minute(),
+ 'S': Second(),
+ 'L': Milli(),
+ 'U': Micro(),
+ None: None,
# Monthly - Calendar
- 'M' : MonthEnd(),
- 'MS' : MonthBegin(),
+ 'M': MonthEnd(),
+ 'MS': MonthBegin(),
# Monthly - Business
- 'BM' : BMonthEnd(),
- 'BMS' : BMonthBegin(),
+ 'BM': BMonthEnd(),
+ 'BMS': BMonthBegin(),
# Annual - Calendar
- 'A-JAN' : YearEnd(month=1),
- 'A-FEB' : YearEnd(month=2),
- 'A-MAR' : YearEnd(month=3),
- 'A-APR' : YearEnd(month=4),
- 'A-MAY' : YearEnd(month=5),
- 'A-JUN' : YearEnd(month=6),
- 'A-JUL' : YearEnd(month=7),
- 'A-AUG' : YearEnd(month=8),
- 'A-SEP' : YearEnd(month=9),
- 'A-OCT' : YearEnd(month=10),
- 'A-NOV' : YearEnd(month=11),
- 'A-DEC' : YearEnd(month=12),
+ 'A-JAN': YearEnd(month=1),
+ 'A-FEB': YearEnd(month=2),
+ 'A-MAR': YearEnd(month=3),
+ 'A-APR': YearEnd(month=4),
+ 'A-MAY': YearEnd(month=5),
+ 'A-JUN': YearEnd(month=6),
+ 'A-JUL': YearEnd(month=7),
+ 'A-AUG': YearEnd(month=8),
+ 'A-SEP': YearEnd(month=9),
+ 'A-OCT': YearEnd(month=10),
+ 'A-NOV': YearEnd(month=11),
+ 'A-DEC': YearEnd(month=12),
# Annual - Calendar (start)
- 'AS-JAN' : YearBegin(month=1),
- 'AS-FEB' : YearBegin(month=2),
- 'AS-MAR' : YearBegin(month=3),
- 'AS-APR' : YearBegin(month=4),
- 'AS-MAY' : YearBegin(month=5),
- 'AS-JUN' : YearBegin(month=6),
- 'AS-JUL' : YearBegin(month=7),
- 'AS-AUG' : YearBegin(month=8),
- 'AS-SEP' : YearBegin(month=9),
- 'AS-OCT' : YearBegin(month=10),
- 'AS-NOV' : YearBegin(month=11),
- 'AS-DEC' : YearBegin(month=12),
+ 'AS-JAN': YearBegin(month=1),
+ 'AS-FEB': YearBegin(month=2),
+ 'AS-MAR': YearBegin(month=3),
+ 'AS-APR': YearBegin(month=4),
+ 'AS-MAY': YearBegin(month=5),
+ 'AS-JUN': YearBegin(month=6),
+ 'AS-JUL': YearBegin(month=7),
+ 'AS-AUG': YearBegin(month=8),
+ 'AS-SEP': YearBegin(month=9),
+ 'AS-OCT': YearBegin(month=10),
+ 'AS-NOV': YearBegin(month=11),
+ 'AS-DEC': YearBegin(month=12),
# Annual - Business
- 'BA-JAN' : BYearEnd(month=1),
- 'BA-FEB' : BYearEnd(month=2),
- 'BA-MAR' : BYearEnd(month=3),
- 'BA-APR' : BYearEnd(month=4),
- 'BA-MAY' : BYearEnd(month=5),
- 'BA-JUN' : BYearEnd(month=6),
- 'BA-JUL' : BYearEnd(month=7),
- 'BA-AUG' : BYearEnd(month=8),
- 'BA-SEP' : BYearEnd(month=9),
- 'BA-OCT' : BYearEnd(month=10),
- 'BA-NOV' : BYearEnd(month=11),
- 'BA-DEC' : BYearEnd(month=12),
+ 'BA-JAN': BYearEnd(month=1),
+ 'BA-FEB': BYearEnd(month=2),
+ 'BA-MAR': BYearEnd(month=3),
+ 'BA-APR': BYearEnd(month=4),
+ 'BA-MAY': BYearEnd(month=5),
+ 'BA-JUN': BYearEnd(month=6),
+ 'BA-JUL': BYearEnd(month=7),
+ 'BA-AUG': BYearEnd(month=8),
+ 'BA-SEP': BYearEnd(month=9),
+ 'BA-OCT': BYearEnd(month=10),
+ 'BA-NOV': BYearEnd(month=11),
+ 'BA-DEC': BYearEnd(month=12),
# Annual - Business (Start)
- 'BAS-JAN' : BYearBegin(month=1),
- 'BAS-FEB' : BYearBegin(month=2),
- 'BAS-MAR' : BYearBegin(month=3),
- 'BAS-APR' : BYearBegin(month=4),
- 'BAS-MAY' : BYearBegin(month=5),
- 'BAS-JUN' : BYearBegin(month=6),
- 'BAS-JUL' : BYearBegin(month=7),
- 'BAS-AUG' : BYearBegin(month=8),
- 'BAS-SEP' : BYearBegin(month=9),
- 'BAS-OCT' : BYearBegin(month=10),
- 'BAS-NOV' : BYearBegin(month=11),
- 'BAS-DEC' : BYearBegin(month=12),
+ 'BAS-JAN': BYearBegin(month=1),
+ 'BAS-FEB': BYearBegin(month=2),
+ 'BAS-MAR': BYearBegin(month=3),
+ 'BAS-APR': BYearBegin(month=4),
+ 'BAS-MAY': BYearBegin(month=5),
+ 'BAS-JUN': BYearBegin(month=6),
+ 'BAS-JUL': BYearBegin(month=7),
+ 'BAS-AUG': BYearBegin(month=8),
+ 'BAS-SEP': BYearBegin(month=9),
+ 'BAS-OCT': BYearBegin(month=10),
+ 'BAS-NOV': BYearBegin(month=11),
+ 'BAS-DEC': BYearBegin(month=12),
# Quarterly - Calendar
# 'Q' : QuarterEnd(startingMonth=3),
- 'Q-JAN' : QuarterEnd(startingMonth=1),
- 'Q-FEB' : QuarterEnd(startingMonth=2),
- 'Q-MAR' : QuarterEnd(startingMonth=3),
- 'Q-APR' : QuarterEnd(startingMonth=4),
- 'Q-MAY' : QuarterEnd(startingMonth=5),
- 'Q-JUN' : QuarterEnd(startingMonth=6),
- 'Q-JUL' : QuarterEnd(startingMonth=7),
- 'Q-AUG' : QuarterEnd(startingMonth=8),
- 'Q-SEP' : QuarterEnd(startingMonth=9),
- 'Q-OCT' : QuarterEnd(startingMonth=10),
- 'Q-NOV' : QuarterEnd(startingMonth=11),
- 'Q-DEC' : QuarterEnd(startingMonth=12),
+ 'Q-JAN': QuarterEnd(startingMonth=1),
+ 'Q-FEB': QuarterEnd(startingMonth=2),
+ 'Q-MAR': QuarterEnd(startingMonth=3),
+ 'Q-APR': QuarterEnd(startingMonth=4),
+ 'Q-MAY': QuarterEnd(startingMonth=5),
+ 'Q-JUN': QuarterEnd(startingMonth=6),
+ 'Q-JUL': QuarterEnd(startingMonth=7),
+ 'Q-AUG': QuarterEnd(startingMonth=8),
+ 'Q-SEP': QuarterEnd(startingMonth=9),
+ 'Q-OCT': QuarterEnd(startingMonth=10),
+ 'Q-NOV': QuarterEnd(startingMonth=11),
+ 'Q-DEC': QuarterEnd(startingMonth=12),
# Quarterly - Calendar (Start)
- 'QS' : QuarterBegin(startingMonth=1),
- 'QS-JAN' : QuarterBegin(startingMonth=1),
- 'QS-FEB' : QuarterBegin(startingMonth=2),
- 'QS-MAR' : QuarterBegin(startingMonth=3),
- 'QS-APR' : QuarterBegin(startingMonth=4),
- 'QS-MAY' : QuarterBegin(startingMonth=5),
- 'QS-JUN' : QuarterBegin(startingMonth=6),
- 'QS-JUL' : QuarterBegin(startingMonth=7),
- 'QS-AUG' : QuarterBegin(startingMonth=8),
- 'QS-SEP' : QuarterBegin(startingMonth=9),
- 'QS-OCT' : QuarterBegin(startingMonth=10),
- 'QS-NOV' : QuarterBegin(startingMonth=11),
- 'QS-DEC' : QuarterBegin(startingMonth=12),
+ 'QS': QuarterBegin(startingMonth=1),
+ 'QS-JAN': QuarterBegin(startingMonth=1),
+ 'QS-FEB': QuarterBegin(startingMonth=2),
+ 'QS-MAR': QuarterBegin(startingMonth=3),
+ 'QS-APR': QuarterBegin(startingMonth=4),
+ 'QS-MAY': QuarterBegin(startingMonth=5),
+ 'QS-JUN': QuarterBegin(startingMonth=6),
+ 'QS-JUL': QuarterBegin(startingMonth=7),
+ 'QS-AUG': QuarterBegin(startingMonth=8),
+ 'QS-SEP': QuarterBegin(startingMonth=9),
+ 'QS-OCT': QuarterBegin(startingMonth=10),
+ 'QS-NOV': QuarterBegin(startingMonth=11),
+ 'QS-DEC': QuarterBegin(startingMonth=12),
# Quarterly - Business
- 'BQ-JAN' : BQuarterEnd(startingMonth=1),
- 'BQ-FEB' : BQuarterEnd(startingMonth=2),
- 'BQ-MAR' : BQuarterEnd(startingMonth=3),
-
- 'BQ' : BQuarterEnd(startingMonth=12),
- 'BQ-APR' : BQuarterEnd(startingMonth=4),
- 'BQ-MAY' : BQuarterEnd(startingMonth=5),
- 'BQ-JUN' : BQuarterEnd(startingMonth=6),
- 'BQ-JUL' : BQuarterEnd(startingMonth=7),
- 'BQ-AUG' : BQuarterEnd(startingMonth=8),
- 'BQ-SEP' : BQuarterEnd(startingMonth=9),
- 'BQ-OCT' : BQuarterEnd(startingMonth=10),
- 'BQ-NOV' : BQuarterEnd(startingMonth=11),
- 'BQ-DEC' : BQuarterEnd(startingMonth=12),
+ 'BQ-JAN': BQuarterEnd(startingMonth=1),
+ 'BQ-FEB': BQuarterEnd(startingMonth=2),
+ 'BQ-MAR': BQuarterEnd(startingMonth=3),
+
+ 'BQ': BQuarterEnd(startingMonth=12),
+ 'BQ-APR': BQuarterEnd(startingMonth=4),
+ 'BQ-MAY': BQuarterEnd(startingMonth=5),
+ 'BQ-JUN': BQuarterEnd(startingMonth=6),
+ 'BQ-JUL': BQuarterEnd(startingMonth=7),
+ 'BQ-AUG': BQuarterEnd(startingMonth=8),
+ 'BQ-SEP': BQuarterEnd(startingMonth=9),
+ 'BQ-OCT': BQuarterEnd(startingMonth=10),
+ 'BQ-NOV': BQuarterEnd(startingMonth=11),
+ 'BQ-DEC': BQuarterEnd(startingMonth=12),
# Quarterly - Business (Start)
- 'BQS-JAN' : BQuarterBegin(startingMonth=1),
- 'BQS' : BQuarterBegin(startingMonth=1),
- 'BQS-FEB' : BQuarterBegin(startingMonth=2),
- 'BQS-MAR' : BQuarterBegin(startingMonth=3),
- 'BQS-APR' : BQuarterBegin(startingMonth=4),
- 'BQS-MAY' : BQuarterBegin(startingMonth=5),
- 'BQS-JUN' : BQuarterBegin(startingMonth=6),
- 'BQS-JUL' : BQuarterBegin(startingMonth=7),
- 'BQS-AUG' : BQuarterBegin(startingMonth=8),
- 'BQS-SEP' : BQuarterBegin(startingMonth=9),
- 'BQS-OCT' : BQuarterBegin(startingMonth=10),
- 'BQS-NOV' : BQuarterBegin(startingMonth=11),
- 'BQS-DEC' : BQuarterBegin(startingMonth=12),
+ 'BQS-JAN': BQuarterBegin(startingMonth=1),
+ 'BQS': BQuarterBegin(startingMonth=1),
+ 'BQS-FEB': BQuarterBegin(startingMonth=2),
+ 'BQS-MAR': BQuarterBegin(startingMonth=3),
+ 'BQS-APR': BQuarterBegin(startingMonth=4),
+ 'BQS-MAY': BQuarterBegin(startingMonth=5),
+ 'BQS-JUN': BQuarterBegin(startingMonth=6),
+ 'BQS-JUL': BQuarterBegin(startingMonth=7),
+ 'BQS-AUG': BQuarterBegin(startingMonth=8),
+ 'BQS-SEP': BQuarterBegin(startingMonth=9),
+ 'BQS-OCT': BQuarterBegin(startingMonth=10),
+ 'BQS-NOV': BQuarterBegin(startingMonth=11),
+ 'BQS-DEC': BQuarterBegin(startingMonth=12),
# Weekly
- 'W-MON' : Week(weekday=0),
- 'W-TUE' : Week(weekday=1),
- 'W-WED' : Week(weekday=2),
- 'W-THU' : Week(weekday=3),
- 'W-FRI' : Week(weekday=4),
- 'W-SAT' : Week(weekday=5),
- 'W-SUN' : Week(weekday=6),
+ 'W-MON': Week(weekday=0),
+ 'W-TUE': Week(weekday=1),
+ 'W-WED': Week(weekday=2),
+ 'W-THU': Week(weekday=3),
+ 'W-FRI': Week(weekday=4),
+ 'W-SAT': Week(weekday=5),
+ 'W-SUN': Week(weekday=6),
}
_offset_to_period_map = {
- 'WEEKDAY' : 'D',
- 'EOM' : 'M',
- 'BM' : 'M',
- 'BQS' : 'Q',
- 'QS' : 'Q',
- 'BQ' : 'Q',
- 'BA' : 'A',
- 'AS' : 'A',
- 'BAS' : 'A',
- 'MS' : 'M',
- 'D' : 'D',
- 'B' : 'B',
- 'T' : 'T',
- 'S' : 'S',
- 'H' : 'H',
- 'Q' : 'Q',
- 'A' : 'A',
- 'W' : 'W',
- 'M' : 'M'
+ 'WEEKDAY': 'D',
+ 'EOM': 'M',
+ 'BM': 'M',
+ 'BQS': 'Q',
+ 'QS': 'Q',
+ 'BQ': 'Q',
+ 'BA': 'A',
+ 'AS': 'A',
+ 'BAS': 'A',
+ 'MS': 'M',
+ 'D': 'D',
+ 'B': 'B',
+ 'T': 'T',
+ 'S': 'S',
+ 'H': 'H',
+ 'Q': 'Q',
+ 'A': 'A',
+ 'W': 'W',
+ 'M': 'M'
}
need_suffix = ['QS', 'BQ', 'BQS', 'AS', 'BA', 'BAS']
@@ -276,6 +281,7 @@ def _get_freq_str(base, mult=1):
for _d in _days:
_offset_to_period_map['W-%s' % _d] = 'W-%s' % _d
+
def get_period_alias(offset_str):
""" alias to closest period strings BQ->Q etc"""
return _offset_to_period_map.get(offset_str, None)
@@ -299,25 +305,25 @@ def get_period_alias(offset_str):
'Q@JAN': 'BQ-JAN',
'Q@FEB': 'BQ-FEB',
'Q@MAR': 'BQ-MAR',
- 'Q' : 'Q-DEC',
-
- 'A' : 'A-DEC', # YearEnd(month=12),
- 'AS' : 'AS-JAN', # YearBegin(month=1),
- 'BA' : 'BA-DEC', # BYearEnd(month=12),
- 'BAS' : 'BAS-JAN', # BYearBegin(month=1),
-
- 'A@JAN' : 'BA-JAN',
- 'A@FEB' : 'BA-FEB',
- 'A@MAR' : 'BA-MAR',
- 'A@APR' : 'BA-APR',
- 'A@MAY' : 'BA-MAY',
- 'A@JUN' : 'BA-JUN',
- 'A@JUL' : 'BA-JUL',
- 'A@AUG' : 'BA-AUG',
- 'A@SEP' : 'BA-SEP',
- 'A@OCT' : 'BA-OCT',
- 'A@NOV' : 'BA-NOV',
- 'A@DEC' : 'BA-DEC',
+ 'Q': 'Q-DEC',
+
+ 'A': 'A-DEC', # YearEnd(month=12),
+ 'AS': 'AS-JAN', # YearBegin(month=1),
+ 'BA': 'BA-DEC', # BYearEnd(month=12),
+ 'BAS': 'BAS-JAN', # BYearBegin(month=1),
+
+ 'A@JAN': 'BA-JAN',
+ 'A@FEB': 'BA-FEB',
+ 'A@MAR': 'BA-MAR',
+ 'A@APR': 'BA-APR',
+ 'A@MAY': 'BA-MAY',
+ 'A@JUN': 'BA-JUN',
+ 'A@JUL': 'BA-JUL',
+ 'A@AUG': 'BA-AUG',
+ 'A@SEP': 'BA-SEP',
+ 'A@OCT': 'BA-OCT',
+ 'A@NOV': 'BA-NOV',
+ 'A@DEC': 'BA-DEC',
# lite aliases
'Min': 'T',
@@ -407,6 +413,7 @@ def to_offset(freqstr):
# hack to handle WOM-1MON
opattern = re.compile(r'([\-]?\d*)\s*([A-Za-z]+([\-@]\d*[A-Za-z]+)?)')
+
def _base_and_stride(freqstr):
"""
Return base freq and stride info from string representation
@@ -431,6 +438,7 @@ def _base_and_stride(freqstr):
return (base, stride)
+
def get_base_alias(freqstr):
"""
Returns the base frequency alias, e.g., '5D' -> 'D'
@@ -473,6 +481,7 @@ def get_offset(name):
def hasOffsetName(offset):
return offset in _offset_names
+
def get_offset_name(offset):
"""
Return rule name associated with a DateOffset object
@@ -488,6 +497,7 @@ def get_offset_name(offset):
else:
raise Exception('Bad rule given: %s!' % offset)
+
def get_legacy_offset_name(offset):
"""
Return the pre pandas 0.8.0 name for the date offset
@@ -497,6 +507,7 @@ def get_legacy_offset_name(offset):
get_offset_name = get_offset_name
+
def get_standard_freq(freq):
"""
Return the standardized frequency string
@@ -518,49 +529,49 @@ def get_standard_freq(freq):
_period_code_map = {
# Annual freqs with various fiscal year ends.
# eg, 2005 for A-FEB runs Mar 1, 2004 to Feb 28, 2005
- "A-DEC" : 1000, # Annual - December year end
- "A-JAN" : 1001, # Annual - January year end
- "A-FEB" : 1002, # Annual - February year end
- "A-MAR" : 1003, # Annual - March year end
- "A-APR" : 1004, # Annual - April year end
- "A-MAY" : 1005, # Annual - May year end
- "A-JUN" : 1006, # Annual - June year end
- "A-JUL" : 1007, # Annual - July year end
- "A-AUG" : 1008, # Annual - August year end
- "A-SEP" : 1009, # Annual - September year end
- "A-OCT" : 1010, # Annual - October year end
- "A-NOV" : 1011, # Annual - November year end
+ "A-DEC": 1000, # Annual - December year end
+ "A-JAN": 1001, # Annual - January year end
+ "A-FEB": 1002, # Annual - February year end
+ "A-MAR": 1003, # Annual - March year end
+ "A-APR": 1004, # Annual - April year end
+ "A-MAY": 1005, # Annual - May year end
+ "A-JUN": 1006, # Annual - June year end
+ "A-JUL": 1007, # Annual - July year end
+ "A-AUG": 1008, # Annual - August year end
+ "A-SEP": 1009, # Annual - September year end
+ "A-OCT": 1010, # Annual - October year end
+ "A-NOV": 1011, # Annual - November year end
# Quarterly frequencies with various fiscal year ends.
# eg, Q42005 for Q-OCT runs Aug 1, 2005 to Oct 31, 2005
- "Q-DEC" : 2000 , # Quarterly - December year end
- "Q-JAN" : 2001, # Quarterly - January year end
- "Q-FEB" : 2002, # Quarterly - February year end
- "Q-MAR" : 2003, # Quarterly - March year end
- "Q-APR" : 2004, # Quarterly - April year end
- "Q-MAY" : 2005, # Quarterly - May year end
- "Q-JUN" : 2006, # Quarterly - June year end
- "Q-JUL" : 2007, # Quarterly - July year end
- "Q-AUG" : 2008, # Quarterly - August year end
- "Q-SEP" : 2009, # Quarterly - September year end
- "Q-OCT" : 2010, # Quarterly - October year end
- "Q-NOV" : 2011, # Quarterly - November year end
-
- "M" : 3000, # Monthly
-
- "W-SUN" : 4000, # Weekly - Sunday end of week
- "W-MON" : 4001, # Weekly - Monday end of week
- "W-TUE" : 4002, # Weekly - Tuesday end of week
- "W-WED" : 4003, # Weekly - Wednesday end of week
- "W-THU" : 4004, # Weekly - Thursday end of week
- "W-FRI" : 4005, # Weekly - Friday end of week
- "W-SAT" : 4006, # Weekly - Saturday end of week
-
- "B" : 5000, # Business days
- "D" : 6000, # Daily
- "H" : 7000, # Hourly
- "T" : 8000, # Minutely
- "S" : 9000, # Secondly
+ "Q-DEC": 2000, # Quarterly - December year end
+ "Q-JAN": 2001, # Quarterly - January year end
+ "Q-FEB": 2002, # Quarterly - February year end
+ "Q-MAR": 2003, # Quarterly - March year end
+ "Q-APR": 2004, # Quarterly - April year end
+ "Q-MAY": 2005, # Quarterly - May year end
+ "Q-JUN": 2006, # Quarterly - June year end
+ "Q-JUL": 2007, # Quarterly - July year end
+ "Q-AUG": 2008, # Quarterly - August year end
+ "Q-SEP": 2009, # Quarterly - September year end
+ "Q-OCT": 2010, # Quarterly - October year end
+ "Q-NOV": 2011, # Quarterly - November year end
+
+ "M": 3000, # Monthly
+
+ "W-SUN": 4000, # Weekly - Sunday end of week
+ "W-MON": 4001, # Weekly - Monday end of week
+ "W-TUE": 4002, # Weekly - Tuesday end of week
+ "W-WED": 4003, # Weekly - Wednesday end of week
+ "W-THU": 4004, # Weekly - Thursday end of week
+ "W-FRI": 4005, # Weekly - Friday end of week
+ "W-SAT": 4006, # Weekly - Saturday end of week
+
+ "B": 5000, # Business days
+ "D": 6000, # Daily
+ "H": 7000, # Hourly
+ "T": 8000, # Minutely
+ "S": 9000, # Secondly
}
_reverse_period_code_map = {}
@@ -569,11 +580,12 @@ def get_standard_freq(freq):
# Additional aliases
_period_code_map.update({
- "Q" : 2000, # Quarterly - December year end (default quarterly)
- "A" : 1000, # Annual
- "W" : 4000, # Weekly
+ "Q": 2000, # Quarterly - December year end (default quarterly)
+ "A": 1000, # Annual
+ "W": 4000, # Weekly
})
+
def _period_alias_dictionary():
"""
Build freq alias dictionary to support freqs from original c_dates.c file
@@ -613,18 +625,18 @@ def _period_alias_dictionary():
"QTR-E", "QUARTER-E", "QUARTERLY-E"]
month_names = [
- [ "DEC", "DECEMBER" ],
- [ "JAN", "JANUARY" ],
- [ "FEB", "FEBRUARY" ],
- [ "MAR", "MARCH" ],
- [ "APR", "APRIL" ],
- [ "MAY", "MAY" ],
- [ "JUN", "JUNE" ],
- [ "JUL", "JULY" ],
- [ "AUG", "AUGUST" ],
- [ "SEP", "SEPTEMBER" ],
- [ "OCT", "OCTOBER" ],
- [ "NOV", "NOVEMBER" ] ]
+ ["DEC", "DECEMBER"],
+ ["JAN", "JANUARY"],
+ ["FEB", "FEBRUARY"],
+ ["MAR", "MARCH"],
+ ["APR", "APRIL"],
+ ["MAY", "MAY"],
+ ["JUN", "JUNE"],
+ ["JUL", "JULY"],
+ ["AUG", "AUGUST"],
+ ["SEP", "SEPTEMBER"],
+ ["OCT", "OCTOBER"],
+ ["NOV", "NOVEMBER"]]
seps = ["@", "-"]
@@ -647,13 +659,13 @@ def _period_alias_dictionary():
W_prefixes = ["W", "WK", "WEEK", "WEEKLY"]
day_names = [
- [ "SUN", "SUNDAY" ],
- [ "MON", "MONDAY" ],
- [ "TUE", "TUESDAY" ],
- [ "WED", "WEDNESDAY" ],
- [ "THU", "THURSDAY" ],
- [ "FRI", "FRIDAY" ],
- [ "SAT", "SATURDAY" ] ]
+ ["SUN", "SUNDAY"],
+ ["MON", "MONDAY"],
+ ["TUE", "TUESDAY"],
+ ["WED", "WEDNESDAY"],
+ ["THU", "THURSDAY"],
+ ["FRI", "FRIDAY"],
+ ["SAT", "SATURDAY"]]
for k in W_prefixes:
alias_dict[k] = 'W'
@@ -666,24 +678,27 @@ def _period_alias_dictionary():
return alias_dict
_reso_period_map = {
- "year" : "A",
- "quarter" : "Q",
- "month" : "M",
- "day" : "D",
- "hour" : "H",
- "minute" : "T",
- "second" : "S",
+ "year": "A",
+ "quarter": "Q",
+ "month": "M",
+ "day": "D",
+ "hour": "H",
+ "minute": "T",
+ "second": "S",
}
+
def _infer_period_group(freqstr):
return _period_group(_reso_period_map[freqstr])
+
def _period_group(freqstr):
base, mult = get_freq_code(freqstr)
return base // 1000 * 1000
_period_alias_dict = _period_alias_dictionary()
+
def _period_str_to_code(freqstr):
# hack
freqstr = _rule_aliases.get(freqstr, freqstr)
@@ -697,7 +712,6 @@ def _period_str_to_code(freqstr):
return _period_code_map[alias]
-
def infer_freq(index, warn=True):
"""
Infer the most likely frequency given the input index. If the frequency is
@@ -727,6 +741,7 @@ def infer_freq(index, warn=True):
_ONE_HOUR = 60 * _ONE_MINUTE
_ONE_DAY = 24 * _ONE_HOUR
+
class _FrequencyInferer(object):
"""
Not sure if I can avoid the state machine here
@@ -853,7 +868,7 @@ def _infer_daily_rule(self):
quarterly_rule = self._get_quarterly_rule()
if quarterly_rule:
nquarters = self.mdiffs[0] / 3
- mod_dict = {0 : 12, 2 : 11, 1 : 10}
+ mod_dict = {0: 12, 2: 11, 1: 10}
month = _month_aliases[mod_dict[self.rep_stamp.month % 3]]
return _maybe_add_count('%s-%s' % (quarterly_rule, month),
nquarters)
@@ -907,12 +922,14 @@ def _get_monthly_rule(self):
import pandas.core.algorithms as algos
+
def _maybe_add_count(base, count):
if count > 1:
return '%d%s' % (count, base)
else:
return base
+
def is_subperiod(source, target):
"""
Returns True if downsampling is possible between source and target
@@ -959,6 +976,7 @@ def is_subperiod(source, target):
elif target == 'S':
return source in ['S']
+
def is_superperiod(source, target):
"""
Returns True if upsampling is possible between source and target
@@ -1009,6 +1027,7 @@ def is_superperiod(source, target):
elif source == 'S':
return target in ['S']
+
def _get_rule_month(source, default='DEC'):
source = source.upper()
if '-' not in source:
@@ -1016,15 +1035,18 @@ def _get_rule_month(source, default='DEC'):
else:
return source.split('-')[1]
+
def _is_annual(rule):
rule = rule.upper()
return rule == 'A' or rule.startswith('A-')
+
def _quarter_months_conform(source, target):
snum = _month_numbers[source]
tnum = _month_numbers[target]
return snum % 3 == tnum % 3
+
def _is_quarterly(rule):
rule = rule.upper()
return rule == 'Q' or rule.startswith('Q-')
@@ -1043,9 +1065,9 @@ def _is_weekly(rule):
_month_numbers = dict((k, i) for i, k in enumerate(MONTHS))
-
_weekday_rule_aliases = dict((k, v) for k, v in enumerate(DAYS))
_month_aliases = dict((k + 1, v) for k, v in enumerate(MONTHS))
+
def _is_multiple(us, mult):
return us % mult == 0
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index 6d5fd6f560ffe..597f3573d52dd 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -20,6 +20,7 @@
import pandas.lib as lib
import pandas._algos as _algos
+
def _utc():
import pytz
return pytz.utc
@@ -38,6 +39,7 @@ def f(self):
f.__name__ = name
return property(f)
+
def _join_i8_wrapper(joinf, with_indexers=True):
@staticmethod
def wrapper(left, right):
@@ -73,6 +75,7 @@ def wrapper(self, other):
return wrapper
+
def _ensure_datetime64(other):
if isinstance(other, np.datetime64):
return other
@@ -87,6 +90,7 @@ class TimeSeriesError(Exception):
_NS_DTYPE = np.dtype('M8[ns]')
_INT64_DTYPE = np.dtype(np.int64)
+
class DatetimeIndex(Int64Index):
"""
Immutable ndarray of datetime64 data, represented internally as int64, and
@@ -115,8 +119,8 @@ class DatetimeIndex(Int64Index):
_inner_indexer = _join_i8_wrapper(_algos.inner_join_indexer_int64)
_outer_indexer = _join_i8_wrapper(_algos.outer_join_indexer_int64)
- _left_indexer = _join_i8_wrapper(_algos.left_join_indexer_int64)
- _left_indexer_unique = _join_i8_wrapper(
+ _left_indexer = _join_i8_wrapper(_algos.left_join_indexer_int64)
+ _left_indexer_unique = _join_i8_wrapper(
_algos.left_join_indexer_unique_int64, with_indexers=False)
_arrmap = None
@@ -308,7 +312,6 @@ def _generate(cls, start, end, periods, name, offset,
else:
_normalized = _normalized and end.time() == _midnight
-
if hasattr(offset, 'delta') and offset != offsets.Day():
if inferred_tz is None and tz is not None:
# naive dates
@@ -325,7 +328,6 @@ def _generate(cls, start, end, periods, name, offset,
if end.tz is None and start.tz is not None:
end = end.tz_localize(start.tz)
-
if (offset._should_cache() and
not (offset._normalize_cache and not _normalized) and
_naive_in_cache_range(start, end)):
@@ -842,9 +844,11 @@ def _maybe_utc_convert(self, other):
if isinstance(other, DatetimeIndex):
if self.tz is not None:
if other.tz is None:
- raise Exception('Cannot join tz-naive with tz-aware DatetimeIndex')
+ raise Exception('Cannot join tz-naive with tz-aware '
+ 'DatetimeIndex')
elif other.tz is not None:
- raise Exception('Cannot join tz-naive with tz-aware DatetimeIndex')
+ raise Exception('Cannot join tz-naive with tz-aware '
+ 'DatetimeIndex')
if self.tz != other.tz:
this = self.tz_convert('UTC')
@@ -920,7 +924,7 @@ def _fast_union(self, other):
freq=left.offset)
def __array_finalize__(self, obj):
- if self.ndim == 0: # pragma: no cover
+ if self.ndim == 0: # pragma: no cover
return self.item()
self.offset = getattr(obj, 'offset', None)
@@ -978,8 +982,8 @@ def intersection(self, other):
def _partial_date_slice(self, reso, parsed):
if not self.is_monotonic:
- raise TimeSeriesError('Partial indexing only valid for ordered time'
- ' series')
+ raise TimeSeriesError('Partial indexing only valid for ordered '
+ 'time series.')
if reso == 'year':
t1 = Timestamp(datetime(parsed.year, 1, 1))
@@ -989,7 +993,7 @@ def _partial_date_slice(self, reso, parsed):
t1 = Timestamp(datetime(parsed.year, parsed.month, 1))
t2 = Timestamp(datetime(parsed.year, parsed.month, d))
elif reso == 'quarter':
- qe = (((parsed.month - 1) + 2) % 12) + 1 # two months ahead
+ qe = (((parsed.month - 1) + 2) % 12) + 1 # two months ahead
d = lib.monthrange(parsed.year, qe)[1] # at end of month
t1 = Timestamp(datetime(parsed.year, parsed.month, 1))
t2 = Timestamp(datetime(parsed.year, qe, d))
@@ -1361,7 +1365,6 @@ def indexer_between_time(self, start_time, end_time, include_start=True,
start_micros = _time_to_micros(start_time)
end_micros = _time_to_micros(end_time)
-
if include_start and include_end:
lop = rop = operator.le
elif include_start:
@@ -1523,10 +1526,10 @@ def _to_m8(key):
return np.int64(lib.pydt_to_i8(key)).view(_NS_DTYPE)
-
def _str_to_dt_array(arr, offset=None, dayfirst=None, yearfirst=None):
def parser(x):
- result = parse_time_string(x, offset, dayfirst=dayfirst, yearfirst=None)
+ result = parse_time_string(x, offset, dayfirst=dayfirst,
+ yearfirst=None)
return result[0]
arr = np.asarray(arr, dtype=object)
@@ -1535,7 +1538,7 @@ def parser(x):
_CACHE_START = Timestamp(datetime(1950, 1, 1))
-_CACHE_END = Timestamp(datetime(2030, 1, 1))
+_CACHE_END = Timestamp(datetime(2030, 1, 1))
_daterange_cache = {}
@@ -1548,13 +1551,16 @@ def _naive_in_cache_range(start, end):
return False
return _in_range(start, end, _CACHE_START, _CACHE_END)
+
def _in_range(start, end, rng_start, rng_end):
return start > rng_start and end < rng_end
+
def _time_to_micros(time):
seconds = time.hour * 60 * 60 + 60 * time.minute + time.second
return 1000000 * seconds + time.microsecond
+
def _utc_naive(dt):
if dt is None:
return dt
diff --git a/pandas/tseries/interval.py b/pandas/tseries/interval.py
index 58c16dcf08aca..104e088ee4e84 100644
--- a/pandas/tseries/interval.py
+++ b/pandas/tseries/interval.py
@@ -2,6 +2,7 @@
from pandas.core.index import Index
+
class Interval(object):
"""
Represents an interval of time defined by two timestamps
@@ -11,6 +12,7 @@ def __init__(self, start, end):
self.start = start
self.end = end
+
class PeriodInterval(object):
"""
Represents an interval of time defined by two Period objects (time ordinals)
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index 05861d93717e5..1e3c17b7ec5ac 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -164,7 +164,7 @@ def __sub__(self, other):
raise TypeError('Cannot subtract datetime from offset!')
elif type(other) == type(self):
return self.__class__(self.n - other.n, **self.kwds)
- else: # pragma: no cover
+ else: # pragma: no cover
raise TypeError('Cannot subtract %s from %s'
% (type(other), type(self)))
@@ -228,6 +228,7 @@ def freqstr(self):
return fstr
+
class BusinessDay(CacheableOffset, DateOffset):
"""
DateOffset subclass representing possibly n business days
@@ -343,6 +344,7 @@ def apply(self, other):
else:
raise Exception('Only know how to combine business day with '
'datetime or timedelta!')
+
@classmethod
def onOffset(cls, dt):
return dt.weekday() < 5
@@ -379,7 +381,7 @@ class MonthBegin(DateOffset, CacheableOffset):
def apply(self, other):
n = self.n
- if other.day > 1 and n <= 0: #then roll forward if n<=0
+ if other.day > 1 and n <= 0: # then roll forward if n<=0
n += 1
other = other + relativedelta(months=n, day=1)
@@ -436,7 +438,7 @@ def apply(self, other):
# as if rolled forward already
n += 1
elif other.day < first and n > 0:
- other = other + timedelta(days=first-other.day)
+ other = other + timedelta(days=first - other.day)
n -= 1
other = other + relativedelta(months=n)
@@ -525,6 +527,7 @@ def rule_code(self):
6: 'SUN'
}
+
class WeekOfMonth(DateOffset, CacheableOffset):
"""
Describes monthly dates like "the Tuesday of the 2nd week of each month"
@@ -631,7 +634,7 @@ def apply(self, other):
elif n <= 0 and other.day > lastBDay and monthsToGo == 0:
n = n + 1
- other = other + relativedelta(months=monthsToGo + 3*n, day=31)
+ other = other + relativedelta(months=monthsToGo + 3 * n, day=31)
if other.weekday() > 4:
other = other - BDay()
@@ -686,7 +689,7 @@ def apply(self, other):
monthsSince = (other.month - self.startingMonth) % 3
- if n <= 0 and monthsSince != 0: # make sure to roll forward so negate
+ if n <= 0 and monthsSince != 0: # make sure to roll forward so negate
monthsSince = monthsSince - 3
# roll forward if on same month later than first bday
@@ -697,7 +700,7 @@ def apply(self, other):
n = n - 1
# get the first bday for result
- other = other + relativedelta(months=3*n - monthsSince)
+ other = other + relativedelta(months=3 * n - monthsSince)
wkday, _ = lib.monthrange(other.year, other.month)
first = _get_firstbday(wkday)
result = datetime(other.year, other.month, first,
@@ -741,7 +744,7 @@ def apply(self, other):
if n > 0 and not (other.day >= days_in_month and monthsToGo == 0):
n = n - 1
- other = other + relativedelta(months=monthsToGo + 3*n, day=31)
+ other = other + relativedelta(months=monthsToGo + 3 * n, day=31)
return other
@@ -783,7 +786,7 @@ def apply(self, other):
# after start, so come back an extra period as if rolled forward
n = n + 1
- other = other + relativedelta(months=3*n - monthsSince, day=1)
+ other = other + relativedelta(months=3 * n - monthsSince, day=1)
return other
@property
@@ -860,18 +863,17 @@ def apply(self, other):
years = n
-
- if n > 0: # roll back first for positive n
+ if n > 0: # roll back first for positive n
if (other.month < self.month or
(other.month == self.month and other.day < first)):
years -= 1
- elif n <= 0: # roll forward
+ elif n <= 0: # roll forward
if (other.month > self.month or
(other.month == self.month and other.day > first)):
years += 1
# set first bday for result
- other = other + relativedelta(years = years)
+ other = other + relativedelta(years=years)
wkday, days_in_month = lib.monthrange(other.year, self.month)
first = _get_firstbday(wkday)
return datetime(other.year, self.month, first)
@@ -909,6 +911,7 @@ def _increment(date):
return datetime(year, self.month, days_in_month,
date.hour, date.minute, date.second,
date.microsecond)
+
def _decrement(date):
year = date.year if date.month > self.month else date.year - 1
_, days_in_month = lib.monthrange(year, self.month)
@@ -967,7 +970,7 @@ def apply(self, other):
other.microsecond)
if n <= 0:
n = n + 1
- other = other + relativedelta(years = n, day=1)
+ other = other + relativedelta(years=n, day=1)
return other
@classmethod
@@ -1039,10 +1042,12 @@ def apply(self, other):
raise TypeError('Unhandled type: %s' % type(other))
_rule_base = 'undefined'
+
@property
def rule_code(self):
return self._rule_base
+
def _delta_to_tick(delta):
if delta.microseconds == 0:
if delta.seconds == 0:
@@ -1064,6 +1069,7 @@ def _delta_to_tick(delta):
else: # pragma: no cover
return Nano(nanos)
+
def _delta_to_nanoseconds(delta):
if isinstance(delta, Tick):
delta = delta.delta
@@ -1071,6 +1077,7 @@ def _delta_to_nanoseconds(delta):
+ delta.seconds * 1000000
+ delta.microseconds) * 1000
+
class Day(Tick, CacheableOffset):
_inc = timedelta(1)
_rule_base = 'D'
@@ -1079,25 +1086,31 @@ def isAnchored(self):
return False
+
class Hour(Tick):
_inc = timedelta(0, 3600)
_rule_base = 'H'
+
class Minute(Tick):
_inc = timedelta(0, 60)
_rule_base = 'T'
+
class Second(Tick):
_inc = timedelta(0, 1)
_rule_base = 'S'
+
class Milli(Tick):
_rule_base = 'L'
+
class Micro(Tick):
_inc = timedelta(microseconds=1)
_rule_base = 'U'
+
class Nano(Tick):
_inc = 1
_rule_base = 'N'
@@ -1114,9 +1127,9 @@ def _get_firstbday(wkday):
If it's a saturday or sunday, increment first business day to reflect this
"""
first = 1
- if wkday == 5: # on Saturday
+ if wkday == 5: # on Saturday
first = 3
- elif wkday == 6: # on Sunday
+ elif wkday == 6: # on Sunday
first = 2
return first
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index 88991b57d67d3..d7557e38c1680 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -28,6 +28,7 @@ def f(self):
f.__name__ = name
return property(f)
+
def _field_accessor(name, alias):
def f(self):
base, mult = _gfc(self.freq)
@@ -386,6 +387,7 @@ def strftime(self, fmt):
base, mult = _gfc(self.freq)
return plib.period_format(self.ordinal, base, fmt)
+
def _get_date_and_freq(value, freq):
value = value.upper()
dt, _, reso = parse_time_string(value, freq)
@@ -418,6 +420,7 @@ def _get_ordinals(data, freq):
else:
return lib.map_infer(data, f)
+
def dt64arr_to_periodarr(data, freq):
if data.dtype != np.dtype('M8[ns]'):
raise ValueError('Wrong dtype: %s' % data.dtype)
@@ -828,7 +831,7 @@ def get_value(self, series, key):
# if our data is higher resolution than requested key, slice
if grp < freqn:
- iv = Period(asdt, freq=(grp,1))
+ iv = Period(asdt, freq=(grp, 1))
ord1 = iv.asfreq(self.freq, how='S').ordinal
ord2 = iv.asfreq(self.freq, how='E').ordinal
@@ -836,7 +839,7 @@ def get_value(self, series, key):
raise KeyError(key)
pos = np.searchsorted(self.values, [ord1, ord2])
- key = slice(pos[0], pos[1]+1)
+ key = slice(pos[0], pos[1] + 1)
return series[key]
else:
key = Period(asdt, freq=self.freq)
@@ -993,7 +996,7 @@ def format(self, name=False):
return header + ['%s' % Period(x, freq=self.freq) for x in self]
def __array_finalize__(self, obj):
- if self.ndim == 0: # pragma: no cover
+ if self.ndim == 0: # pragma: no cover
return self.item()
self.freq = getattr(obj, 'freq', None)
@@ -1088,10 +1091,11 @@ def _get_ordinal_range(start, end, periods, freq):
data = np.arange(start.ordinal, start.ordinal + periods,
dtype=np.int64)
else:
- data = np.arange(start.ordinal, end.ordinal+1, dtype=np.int64)
+ data = np.arange(start.ordinal, end.ordinal + 1, dtype=np.int64)
return data, freq
+
def _range_from_fields(year=None, month=None, quarter=None, day=None,
hour=None, minute=None, second=None, freq=None):
if hour is None:
@@ -1131,6 +1135,7 @@ def _range_from_fields(year=None, month=None, quarter=None, day=None,
return np.array(ordinals, dtype=np.int64), freq
+
def _make_field_arrays(*fields):
length = None
for x in fields:
@@ -1157,6 +1162,7 @@ def _ordinal_from_fields(year, month, quarter, day, hour, minute,
return plib.period_ordinal(year, month, day, hour, minute, second, base)
+
def _quarter_to_myear(year, quarter, freq):
if quarter is not None:
if quarter <= 0 or quarter > 4:
@@ -1179,9 +1185,11 @@ def _validate_end_alias(how):
raise ValueError('How must be one of S or E')
return how
+
def pnow(freq=None):
return Period(datetime.now(), freq=freq)
+
def period_range(start=None, end=None, periods=None, freq='D', name=None):
"""
Return a fixed frequency datetime index, with day (calendar) as the default
@@ -1206,6 +1214,7 @@ def period_range(start=None, end=None, periods=None, freq='D', name=None):
return PeriodIndex(start=start, end=end, periods=periods,
freq=freq, name=name)
+
def _period_rule_to_timestamp_rule(freq, how='end'):
how = how.lower()
if how in ('end', 'e'):
diff --git a/pandas/tseries/plotting.py b/pandas/tseries/plotting.py
index 6f1772dd364a6..70b36ff7ef8c7 100644
--- a/pandas/tseries/plotting.py
+++ b/pandas/tseries/plotting.py
@@ -26,6 +26,7 @@
#----------------------------------------------------------------------
# Plotting functions and monkey patches
+
def tsplot(series, plotf, **kwargs):
"""
Plots a Series on the given Matplotlib axes or the current axes
@@ -49,7 +50,7 @@ def tsplot(series, plotf, **kwargs):
freq = _get_freq(ax, series)
# resample against axes freq if necessary
- if freq is None: # pragma: no cover
+ if freq is None: # pragma: no cover
raise ValueError('Cannot use dynamic axis without frequency info')
else:
# Convert DatetimeIndex to PeriodIndex
@@ -74,7 +75,7 @@ def tsplot(series, plotf, **kwargs):
if style is not None:
args.append(style)
- lines = plotf(ax, *args, **kwargs)
+ lines = plotf(ax, *args, **kwargs)
label = kwargs.get('label', None)
# set date formatter, locators and rescale limits
@@ -84,14 +85,15 @@ def tsplot(series, plotf, **kwargs):
return lines
+
def _maybe_resample(series, ax, freq, plotf, kwargs):
ax_freq = _get_ax_freq(ax)
if ax_freq is not None and freq != ax_freq:
- if frequencies.is_superperiod(freq, ax_freq): # upsample input
+ if frequencies.is_superperiod(freq, ax_freq): # upsample input
series = series.copy()
series.index = series.index.asfreq(ax_freq)
freq = ax_freq
- elif _is_sup(freq, ax_freq): # one is weekly
+ elif _is_sup(freq, ax_freq): # one is weekly
how = kwargs.pop('how', 'last')
series = series.resample('D', how=how).dropna()
series = series.resample(ax_freq, how=how).dropna()
@@ -103,6 +105,7 @@ def _maybe_resample(series, ax, freq, plotf, kwargs):
raise ValueError('Incompatible frequency conversion')
return freq, ax_freq, series
+
def _get_ax_freq(ax):
ax_freq = getattr(ax, 'freq', None)
if ax_freq is None:
@@ -112,14 +115,17 @@ def _get_ax_freq(ax):
ax_freq = getattr(ax.right_ax, 'freq', None)
return ax_freq
+
def _is_sub(f1, f2):
return ((f1.startswith('W') and frequencies.is_subperiod('D', f2)) or
(f2.startswith('W') and frequencies.is_subperiod(f1, 'D')))
+
def _is_sup(f1, f2):
return ((f1.startswith('W') and frequencies.is_superperiod('D', f2)) or
(f2.startswith('W') and frequencies.is_superperiod(f1, 'D')))
+
def _upsample_others(ax, freq, plotf, kwargs):
legend = ax.get_legend()
lines, labels = _replot_ax(ax, freq, plotf, kwargs)
@@ -142,6 +148,7 @@ def _upsample_others(ax, freq, plotf, kwargs):
title = None
ax.legend(lines, labels, loc='best', title=title)
+
def _replot_ax(ax, freq, plotf, kwargs):
data = getattr(ax, '_plot_data', None)
ax._plot_data = []
@@ -162,6 +169,7 @@ def _replot_ax(ax, freq, plotf, kwargs):
return lines, labels
+
def _decorate_axes(ax, freq, kwargs):
ax.freq = freq
xaxis = ax.get_xaxis()
@@ -173,6 +181,7 @@ def _decorate_axes(ax, freq, kwargs):
ax.view_interval = None
ax.date_axis_info = None
+
def _maybe_mask(series):
mask = isnull(series)
if mask.any():
@@ -183,6 +192,7 @@ def _maybe_mask(series):
args = [series.index, series]
return args
+
def _get_freq(ax, series):
# get frequency from data
freq = getattr(series.index, 'freq', None)
@@ -205,6 +215,7 @@ def _get_freq(ax, series):
return freq
+
def _get_xlim(lines):
left, right = np.inf, -np.inf
for l in lines:
@@ -213,6 +224,7 @@ def _get_xlim(lines):
right = max(x[-1].ordinal, right)
return left, right
+
def get_datevalue(date, freq):
if isinstance(date, Period):
return date.asfreq(freq).ordinal
@@ -228,6 +240,7 @@ def get_datevalue(date, freq):
# Patch methods for subplot. Only format_dateaxis is currently used.
# Do we need the rest for convenience?
+
def format_dateaxis(subplot, freq):
"""
Pretty-formats the date axis (x-axis).
diff --git a/pandas/tseries/resample.py b/pandas/tseries/resample.py
index be5098dede15a..1fb1725f183c7 100644
--- a/pandas/tseries/resample.py
+++ b/pandas/tseries/resample.py
@@ -184,7 +184,7 @@ def _resample_timestamps(self, obj):
# Determine if we're downsampling
if axlabels.freq is not None or axlabels.inferred_freq is not None:
if len(grouper.binlabels) < len(axlabels) or self.how is not None:
- grouped = obj.groupby(grouper, axis=self.axis)
+ grouped = obj.groupby(grouper, axis=self.axis)
result = grouped.aggregate(self._agg_method)
else:
# upsampling shortcut
@@ -193,7 +193,7 @@ def _resample_timestamps(self, obj):
limit=self.limit)
else:
# Irregular data, have to use groupby
- grouped = obj.groupby(grouper, axis=self.axis)
+ grouped = obj.groupby(grouper, axis=self.axis)
result = grouped.aggregate(self._agg_method)
if self.fill_method is not None:
@@ -265,7 +265,6 @@ def _take_new_index(obj, indexer, new_index, axis=0):
raise NotImplementedError
-
def _get_range_edges(axis, offset, closed='left', base=0):
if isinstance(offset, basestring):
offset = to_offset(offset)
@@ -278,7 +277,7 @@ def _get_range_edges(axis, offset, closed='left', base=0):
closed=closed, base=base)
first, last = axis[0], axis[-1]
- if not isinstance(offset, Tick):# and first.time() != last.time():
+ if not isinstance(offset, Tick): # and first.time() != last.time():
# hack!
first = tools.normalize_date(first)
last = tools.normalize_date(last)
diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py
index 36a9f32bd04c4..9e1c451c42887 100644
--- a/pandas/tseries/tools.py
+++ b/pandas/tseries/tools.py
@@ -17,9 +17,10 @@
dateutil.__version__ == '2.0'): # pragma: no cover
raise Exception('dateutil 2.0 incompatible with Python 2.x, you must '
'install version 1.5 or 2.1+!')
-except ImportError: # pragma: no cover
+except ImportError: # pragma: no cover
print 'Please install python-dateutil via easy_install or some method!'
- raise # otherwise a 2nd import won't show the message
+ raise # otherwise a 2nd import won't show the message
+
def _infer_tzinfo(start, end):
def _infer(a, b):
@@ -124,7 +125,6 @@ class DateParseError(ValueError):
pass
-
# patterns for quarters like '4Q2005', '05Q1'
qpat1full = re.compile(r'(\d)Q(\d\d\d\d)')
qpat2full = re.compile(r'(\d\d\d\d)Q(\d)')
@@ -132,6 +132,7 @@ class DateParseError(ValueError):
qpat2 = re.compile(r'(\d\d)Q(\d)')
ypat = re.compile(r'(\d\d\d\d)$')
+
def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None):
"""
Try hard to parse datetime string, leveraging dateutil plus some extra
@@ -161,7 +162,7 @@ def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None):
arg = arg.upper()
- default = datetime(1,1,1).replace(hour=0, minute=0,
+ default = datetime(1, 1, 1).replace(hour=0, minute=0,
second=0, microsecond=0)
# special handling for possibilities eg, 2Q2005, 2Q05, 2005Q1, 05Q1
@@ -239,7 +240,7 @@ def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None):
"minute", "second", "microsecond"]:
can_be_zero = ['hour', 'minute', 'second', 'microsecond']
value = getattr(parsed, attr)
- if value is not None and value != 0: # or attr in can_be_zero):
+ if value is not None and value != 0: # or attr in can_be_zero):
repl[attr] = value
if not stopped:
reso = attr
@@ -249,6 +250,7 @@ def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None):
ret = default.replace(**repl)
return ret, parsed, reso # datetime, resolution
+
def _attempt_monthly(val):
pats = ['%Y-%m', '%m-%Y', '%b %Y', '%b-%Y']
for pat in pats:
@@ -269,7 +271,7 @@ def _try_parse_monthly(arg):
add_base = True
y = int(arg[:2])
m = int(arg[2:4])
- elif len(arg) >= 6: # 201201
+ elif len(arg) >= 6: # 201201
y = int(arg[:4])
m = int(arg[4:6])
if add_base:
@@ -287,6 +289,7 @@ def format(dt):
OLE_TIME_ZERO = datetime(1899, 12, 30, 0, 0, 0)
+
def ole2datetime(oledt):
"""function for converting excel date to normal date format"""
val = float(oledt)
diff --git a/pandas/tseries/util.py b/pandas/tseries/util.py
index 4b29771233c50..0702bc40389c9 100644
--- a/pandas/tseries/util.py
+++ b/pandas/tseries/util.py
@@ -3,6 +3,7 @@
from pandas.core.frame import DataFrame
import pandas.core.nanops as nanops
+
def pivot_annual(series, freq=None):
"""
Group a series by years, taking leap years into account.
@@ -71,6 +72,7 @@ def pivot_annual(series, freq=None):
return DataFrame(values, index=years, columns=columns)
+
def isleapyear(year):
"""
Returns true if year is a leap year.
diff --git a/pandas/util/clipboard.py b/pandas/util/clipboard.py
index b2180001533bd..4136df072c6b6 100644
--- a/pandas/util/clipboard.py
+++ b/pandas/util/clipboard.py
@@ -7,6 +7,7 @@
import subprocess
import sys
+
def clipboard_get():
""" Get text from the clipboard.
"""
@@ -22,6 +23,7 @@ def clipboard_get():
pass
return tkinter_clipboard_get()
+
def clipboard_set(text):
""" Get text from the clipboard.
"""
@@ -37,6 +39,7 @@ def clipboard_set(text):
pass
xsel_clipboard_set(text)
+
def win32_clipboard_get():
""" Get the current clipboard's text on Windows.
@@ -54,6 +57,7 @@ def win32_clipboard_get():
win32clipboard.CloseClipboard()
return text
+
def osx_clipboard_get():
""" Get the clipboard's text on OS X.
"""
@@ -64,6 +68,7 @@ def osx_clipboard_get():
text = text.replace('\r', '\n')
return text
+
def tkinter_clipboard_get():
""" Get the clipboard's text using Tkinter.
@@ -83,6 +88,7 @@ def tkinter_clipboard_get():
root.destroy()
return text
+
def win32_clipboard_set(text):
# idiosyncratic win32 import issues
import pywintypes as _
@@ -94,9 +100,11 @@ def win32_clipboard_set(text):
finally:
win32clipboard.CloseClipboard()
+
def _fix_line_endings(text):
return '\r\n'.join(text.splitlines())
+
def osx_clipboard_set(text):
""" Get the clipboard's text on OS X.
"""
@@ -104,6 +112,7 @@ def osx_clipboard_set(text):
stdin=subprocess.PIPE)
p.communicate(input=text)
+
def xsel_clipboard_set(text):
from subprocess import Popen, PIPE
p = Popen(['xsel', '-bi'], stdin=PIPE)
diff --git a/pandas/util/compat.py b/pandas/util/compat.py
index 213f065523073..894f94d11a8b8 100644
--- a/pandas/util/compat.py
+++ b/pandas/util/compat.py
@@ -9,6 +9,6 @@ def product(*args, **kwds):
pools = map(tuple, args) * kwds.get('repeat', 1)
result = [[]]
for pool in pools:
- result = [x+[y] for x in result for y in pool]
+ result = [x + [y] for x in result for y in pool]
for prod in result:
yield tuple(prod)
diff --git a/pandas/util/counter.py b/pandas/util/counter.py
index f23f6e6fbbad1..29e8906fdee38 100644
--- a/pandas/util/counter.py
+++ b/pandas/util/counter.py
@@ -8,9 +8,10 @@
try:
from collections import Mapping
except:
- # ABCs were only introduced in Python 2.6, so this is a hack for Python 2.5:
+ # ABCs were only introduced in Python 2.6, so this is a hack for Python 2.5
Mapping = dict
+
class Counter(dict):
'''Dict subclass for counting hashable items. Sometimes called a bag
or multiset. Elements are stored as dictionary keys and their counts
@@ -50,8 +51,8 @@ class Counter(dict):
in the counter until the entry is deleted or the counter is cleared:
>>> c = Counter('aaabbc')
- >>> c['b'] -= 2 # reduce the count of 'b' by two
- >>> c.most_common() # 'b' is still in, but its count is zero
+ >>> c['b'] -= 2 # reduce the count of 'b' by two
+ >>> c.most_common() # 'b' is still in, but its count is zero
[('a', 3), ('c', 1), ('b', 0)]
'''
@@ -67,10 +68,10 @@ def __init__(self, iterable=None, **kwds):
from an input iterable. Or, initialize the count from another mapping
of elements to their counts.
- >>> c = Counter() # a new, empty counter
- >>> c = Counter('gallahad') # a new counter from an iterable
- >>> c = Counter({'a': 4, 'b': 2}) # a new counter from a mapping
- >>> c = Counter(a=4, b=2) # a new counter from keyword args
+ >>> c = Counter() # a new, empty counter
+ >>> c = Counter('gallahad') # a new counter from an iterable
+ >>> c = Counter({'a': 4, 'b': 2}) # a new counter from a mapping
+ >>> c = Counter(a=4, b=2) # a new counter from keyword args
'''
super(Counter, self).__init__()
@@ -152,7 +153,8 @@ def update(self, iterable=None, **kwds):
for elem, count in iterable.iteritems():
self[elem] = self_get(elem, 0) + count
else:
- super(Counter, self).update(iterable) # fast path when counter is empty
+ # fast path when counter is empty
+ super(Counter, self).update(iterable)
else:
self_get = self.get
for elem in iterable:
@@ -195,7 +197,9 @@ def __reduce__(self):
return self.__class__, (dict(self),)
def __delitem__(self, elem):
- 'Like dict.__delitem__() but does not raise KeyError for missing values.'
+ """
+ Like dict.__delitem__() but does not raise KeyError for missing values.
+ """
if elem in self:
super(Counter, self).__delitem__(elem)
diff --git a/pandas/util/decorators.py b/pandas/util/decorators.py
index 5cd87a1e9c683..bef3ffc569df1 100644
--- a/pandas/util/decorators.py
+++ b/pandas/util/decorators.py
@@ -3,8 +3,10 @@
import sys
import warnings
+
def deprecate(name, alternative):
alt_name = alternative.func_name
+
def wrapper(*args, **kwargs):
warnings.warn("%s is deprecated. Use %s instead" % (name, alt_name),
FutureWarning)
@@ -14,6 +16,7 @@ def wrapper(*args, **kwargs):
# Substitution and Appender are derived from matplotlib.docstring (1.1.0)
# module http://matplotlib.sourceforge.net/users/license.html
+
class Substitution(object):
"""
A decorator to take a function's docstring and perform string
@@ -66,6 +69,7 @@ def from_params(cls, params):
result.params = params
return result
+
class Appender(object):
"""
A function decorator that will append an addendum to the docstring
@@ -99,12 +103,14 @@ def __call__(self, func):
func.__doc__ = ''.join(docitems)
return func
+
def indent(text, indents=1):
if not text or type(text) != str:
return ''
jointext = ''.join(['\n'] + [' '] * indents)
return jointext.join(text.split('\n'))
+
def suppress_stdout(f):
def wrapped(*args, **kwargs):
try:
@@ -120,6 +126,7 @@ class KnownFailureTest(Exception):
'''Raise this exception to mark a test as a known failing test.'''
pass
+
def knownfailureif(fail_condition, msg=None):
"""
Make function raise KnownFailureTest exception if given condition is true.
@@ -163,6 +170,7 @@ def knownfail_decorator(f):
# Local import to avoid a hard nose dependency and only incur the
# import time overhead at actual test-time.
import nose
+
def knownfailer(*args, **kwargs):
if fail_val():
raise KnownFailureTest, msg
diff --git a/pandas/util/py3compat.py b/pandas/util/py3compat.py
index 9a602155eafd8..0b00e5211daf9 100644
--- a/pandas/util/py3compat.py
+++ b/pandas/util/py3compat.py
@@ -16,6 +16,7 @@ def bytes_to_str(b, encoding='utf-8'):
# Python 2
import re
_name_re = re.compile(r"[a-zA-Z_][a-zA-Z0-9_]*$")
+
def isidentifier(s, dotted=False):
return bool(_name_re.match(s))
@@ -34,4 +35,3 @@ def bytes_to_str(b, encoding='ascii'):
from io import BytesIO
except:
from cStringIO import StringIO as BytesIO
-
diff --git a/pandas/util/terminal.py b/pandas/util/terminal.py
index 4278f35ba5019..312f54b521e90 100644
--- a/pandas/util/terminal.py
+++ b/pandas/util/terminal.py
@@ -14,28 +14,29 @@
import os
-__all__=['get_terminal_size']
+__all__ = ['get_terminal_size']
def get_terminal_size():
- import platform
- current_os = platform.system()
- tuple_xy=None
- if current_os == 'Windows':
- tuple_xy = _get_terminal_size_windows()
- if tuple_xy is None:
- tuple_xy = _get_terminal_size_tput()
- # needed for window's python in cygwin's xterm!
- if current_os == 'Linux' or \
- current_os == 'Darwin' or \
- current_os.startswith('CYGWIN'):
- tuple_xy = _get_terminal_size_linux()
- if tuple_xy is None:
- tuple_xy = (80, 25) # default value
- return tuple_xy
+ import platform
+ current_os = platform.system()
+ tuple_xy = None
+ if current_os == 'Windows':
+ tuple_xy = _get_terminal_size_windows()
+ if tuple_xy is None:
+ tuple_xy = _get_terminal_size_tput()
+ # needed for window's python in cygwin's xterm!
+ if current_os == 'Linux' or \
+ current_os == 'Darwin' or \
+ current_os.startswith('CYGWIN'):
+ tuple_xy = _get_terminal_size_linux()
+ if tuple_xy is None:
+ tuple_xy = (80, 25) # default value
+ return tuple_xy
+
def _get_terminal_size_windows():
- res=None
+ res = None
try:
from ctypes import windll, create_string_buffer
@@ -58,32 +59,36 @@ def _get_terminal_size_windows():
else:
return None
+
def _get_terminal_size_tput():
# get terminal width
# src: http://stackoverflow.com/questions/263890/how-do-i-find-the-width
# -height-of-a-terminal-window
try:
- import subprocess
- proc = subprocess.Popen(["tput", "cols"],
- stdin=subprocess.PIPE,
- stdout=subprocess.PIPE)
- output=proc.communicate(input=None)
- cols=int(output[0])
- proc=subprocess.Popen(["tput", "lines"],
- stdin=subprocess.PIPE,
- stdout=subprocess.PIPE)
- output=proc.communicate(input=None)
- rows=int(output[0])
- return (cols,rows)
+ import subprocess
+ proc = subprocess.Popen(["tput", "cols"],
+ stdin=subprocess.PIPE,
+ stdout=subprocess.PIPE)
+ output = proc.communicate(input=None)
+ cols = int(output[0])
+ proc = subprocess.Popen(["tput", "lines"],
+ stdin=subprocess.PIPE,
+ stdout=subprocess.PIPE)
+ output = proc.communicate(input=None)
+ rows = int(output[0])
+ return (cols, rows)
except:
- return None
+ return None
def _get_terminal_size_linux():
def ioctl_GWINSZ(fd):
try:
- import fcntl, termios, struct, os
- cr = struct.unpack('hh', fcntl.ioctl(fd, termios.TIOCGWINSZ,'1234'))
+ import fcntl
+ import termios
+ import struct
+ import os
+ cr = struct.unpack('hh', fcntl.ioctl(fd, termios.TIOCGWINSZ, '1234'))
except:
return None
return cr
@@ -97,12 +102,12 @@ def ioctl_GWINSZ(fd):
pass
if not cr or cr == (0, 0):
try:
- from os import environ as env
- cr = (env['LINES'], env['COLUMNS'])
+ from os import environ as env
+ cr = (env['LINES'], env['COLUMNS'])
except:
return None
return int(cr[1]), int(cr[0])
if __name__ == "__main__":
sizex, sizey = get_terminal_size()
- print 'width =', sizex, 'height =', sizey
+ print 'width =', sizex, 'height =', sizey
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 904426731738a..866f39490aecd 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -33,17 +33,20 @@
N = 30
K = 4
+
def rands(n):
choices = string.ascii_letters + string.digits
return ''.join([random.choice(choices) for _ in xrange(n)])
+
def randu(n):
- choices = u"".join(map(unichr,range(1488,1488+26))) + string.digits
+ choices = u"".join(map(unichr, range(1488, 1488 + 26))) + string.digits
return ''.join([random.choice(choices) for _ in xrange(n)])
-#-------------------------------------------------------------------------------
+#------------------------------------------------------------------------------
# Console debugging tools
+
def debug(f, *args, **kwargs):
from pdb import Pdb as OldPdb
try:
@@ -55,10 +58,12 @@ def debug(f, *args, **kwargs):
pdb = Pdb(**kw)
return pdb.runcall(f, *args, **kwargs)
+
def pudebug(f, *args, **kwargs):
import pudb
return pudb.runcall(f, *args, **kwargs)
+
def set_trace():
from IPython.core.debugger import Pdb
try:
@@ -67,17 +72,20 @@ def set_trace():
from pdb import Pdb as OldPdb
OldPdb().set_trace(sys._getframe().f_back)
-#-------------------------------------------------------------------------------
+#------------------------------------------------------------------------------
# Comparators
+
def equalContents(arr1, arr2):
"""Checks if the set of unique elements of arr1 and arr2 are equivalent.
"""
return frozenset(arr1) == frozenset(arr2)
+
def isiterable(obj):
return hasattr(obj, '__iter__')
+
def assert_almost_equal(a, b):
if isinstance(a, dict) or isinstance(b, dict):
return assert_dict_equal(a, b)
@@ -109,13 +117,15 @@ def assert_almost_equal(a, b):
a, b, decimal=5, err_msg=err_msg(a, b), verbose=False)
else:
np.testing.assert_almost_equal(
- 1, a/b, decimal=5, err_msg=err_msg(a, b), verbose=False)
+ 1, a / b, decimal=5, err_msg=err_msg(a, b), verbose=False)
else:
assert(a == b)
+
def is_sorted(seq):
return assert_almost_equal(seq, np.sort(np.array(seq)))
+
def assert_dict_equal(a, b, compare_keys=True):
a_keys = frozenset(a.keys())
b_keys = frozenset(b.keys())
@@ -126,6 +136,7 @@ def assert_dict_equal(a, b, compare_keys=True):
for k in a_keys:
assert_almost_equal(a[k], b[k])
+
def assert_series_equal(left, right, check_dtype=True,
check_index_type=False,
check_index_freq=False,
@@ -144,6 +155,7 @@ def assert_series_equal(left, right, check_dtype=True,
assert(getattr(left, 'freqstr', None) ==
getattr(right, 'freqstr', None))
+
def assert_frame_equal(left, right, check_index_type=False,
check_column_type=False,
check_frame_type=False):
@@ -167,6 +179,7 @@ def assert_frame_equal(left, right, check_index_type=False,
assert(left.columns.dtype == right.columns.dtype)
assert(left.columns.inferred_type == right.columns.inferred_type)
+
def assert_panel_equal(left, right, check_panel_type=False):
if check_panel_type:
assert(type(left) == type(right))
@@ -182,102 +195,125 @@ def assert_panel_equal(left, right, check_panel_type=False):
for col in right:
assert(col in left)
+
def assert_contains_all(iterable, dic):
for k in iterable:
assert(k in dic)
+
def getCols(k):
return string.ascii_uppercase[:k]
+
def makeStringIndex(k):
return Index([rands(10) for _ in xrange(k)])
+
def makeUnicodeIndex(k):
return Index([randu(10) for _ in xrange(k)])
+
def makeIntIndex(k):
return Index(range(k))
+
def makeFloatIndex(k):
values = sorted(np.random.random_sample(k)) - np.random.random_sample(1)
return Index(values * (10 ** np.random.randint(0, 9)))
+
def makeFloatSeries():
index = makeStringIndex(N)
return Series(randn(N), index=index)
+
def makeStringSeries():
index = makeStringIndex(N)
return Series(randn(N), index=index)
+
def makeObjectSeries():
dateIndex = makeDateIndex(N)
dateIndex = Index(dateIndex, dtype=object)
index = makeStringIndex(N)
return Series(dateIndex, index=index)
+
def getSeriesData():
index = makeStringIndex(N)
return dict((c, Series(randn(N), index=index)) for c in getCols(K))
+
def makeDataFrame():
data = getSeriesData()
return DataFrame(data)
+
def getArangeMat():
return np.arange(N * K).reshape((N, K))
+
def getMixedTypeDict():
index = Index(['a', 'b', 'c', 'd', 'e'])
data = {
- 'A' : [0., 1., 2., 3., 4.],
- 'B' : [0., 1., 0., 1., 0.],
- 'C' : ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'],
- 'D' : bdate_range('1/1/2009', periods=5)
+ 'A': [0., 1., 2., 3., 4.],
+ 'B': [0., 1., 0., 1., 0.],
+ 'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'],
+ 'D': bdate_range('1/1/2009', periods=5)
}
return index, data
+
def makeDateIndex(k):
- dt = datetime(2000,1,1)
+ dt = datetime(2000, 1, 1)
dr = bdate_range(dt, periods=k)
return DatetimeIndex(dr)
+
def makePeriodIndex(k):
- dt = datetime(2000,1,1)
+ dt = datetime(2000, 1, 1)
dr = PeriodIndex(start=dt, periods=k, freq='B')
return dr
+
def makeTimeSeries(nper=None):
if nper is None:
nper = N
return Series(randn(nper), index=makeDateIndex(nper))
+
def makePeriodSeries(nper=None):
if nper is None:
nper = N
return Series(randn(nper), index=makePeriodIndex(nper))
+
def getTimeSeriesData():
return dict((c, makeTimeSeries()) for c in getCols(K))
+
def makeTimeDataFrame():
data = getTimeSeriesData()
return DataFrame(data)
+
def getPeriodData():
return dict((c, makePeriodSeries()) for c in getCols(K))
+
def makePeriodFrame():
data = getPeriodData()
return DataFrame(data)
+
def makePanel():
cols = ['Item' + c for c in string.ascii_uppercase[:K - 1]]
data = dict((c, makeTimeDataFrame()) for c in cols)
return Panel.fromDict(data)
+
def add_nans(panel):
I, J, N = panel.shape
for i, item in enumerate(panel.items):
@@ -285,6 +321,7 @@ def add_nans(panel):
for j, col in enumerate(dm.columns):
dm[col][:i + j] = np.NaN
+
class TestSubDict(dict):
def __init__(self, *args, **kwargs):
dict.__init__(self, *args, **kwargs)
@@ -327,7 +364,7 @@ def package_check(pkg_name, version=None, app='pandas', checker=LooseVersion,
else:
msg = 'module requires %s' % pkg_name
if version:
- msg += ' with version >= %s' % (version,)
+ msg += ' with version >= %s' % (version,)
try:
mod = __import__(pkg_name)
except ImportError:
@@ -341,6 +378,7 @@ def package_check(pkg_name, version=None, app='pandas', checker=LooseVersion,
if checker(have_version) < checker(version):
raise exc_failed_check(msg)
+
def skip_if_no_package(*args, **kwargs):
"""Raise SkipTest if package_check fails
@@ -357,6 +395,8 @@ def skip_if_no_package(*args, **kwargs):
#
# Additional tags decorators for nose
#
+
+
def network(t):
"""
Label a test as requiring network connection.
@@ -411,14 +451,15 @@ def __init__(self, obj, *args, **kwds):
attrs = kwds.get("attrs", {})
for k, v in zip(args[::2], args[1::2]):
# dict comprehensions break 2.6
- attrs[k]=v
+ attrs[k] = v
self.attrs = attrs
self.obj = obj
- def __getattribute__(self,name):
+ def __getattribute__(self, name):
attrs = object.__getattribute__(self, "attrs")
obj = object.__getattribute__(self, "obj")
- return attrs.get(name, type(obj).__getattribute__(obj,name))
+ return attrs.get(name, type(obj).__getattribute__(obj, name))
+
@contextmanager
def stdin_encoding(encoding=None):
 | Except for test_\* files, I ran every file through pep8 and resolved most of the violations.
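For reference, the `randu` helper touched in this diff can be sketched for Python 3, where `chr` stands in for `unichr` (this is an illustrative port, not the pandas code itself):

```python
import random
import string

def randu(n):
    # Python 3 sketch of the helper: chr replaces unichr, and
    # range(1488, 1488 + 26) covers a run of Hebrew letters
    # starting at alef (U+05D0), plus ASCII digits.
    choices = ''.join(map(chr, range(1488, 1488 + 26))) + string.digits
    return ''.join(random.choice(choices) for _ in range(n))

print(len(randu(10)))  # 10
```

Every generated character is either one of those 26 Hebrew codepoints or a digit, which is what the unicode-index test fixtures rely on.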
| https://api.github.com/repos/pandas-dev/pandas/pulls/2086 | 2012-10-19T21:57:12Z | 2012-10-31T20:27:52Z | 2012-10-31T20:27:52Z | 2012-10-31T20:27:52Z |
Panel constructed with a None as a dict value throws an error | diff --git a/pandas/core/panel.py b/pandas/core/panel.py
old mode 100644
new mode 100755
index 211434ab07154..b802a7f86731b
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -1403,8 +1403,11 @@ def _homogenize_dict(frames, intersect=True, dtype=None):
columns = _extract_axis(adj_frames, axis=1, intersect=intersect)
for key, frame in adj_frames.iteritems():
- result[key] = frame.reindex(index=index, columns=columns,
- copy=False)
+ if frame is not None:
+ result[key] = frame.reindex(index=index, columns=columns,
+ copy=False)
+ else:
+ result[key] = None
return result, index, columns
@@ -1415,7 +1418,7 @@ def _extract_axis(data, axis=0, intersect=False):
elif len(data) > 0:
raw_lengths = []
indexes = []
-
+ index = None
have_raw_arrays = False
have_frames = False
@@ -1423,7 +1426,7 @@ def _extract_axis(data, axis=0, intersect=False):
if isinstance(v, DataFrame):
have_frames = True
indexes.append(v._get_axis(axis))
- else:
+ elif v is not None:
have_raw_arrays = True
raw_lengths.append(v.shape[axis])
@@ -1440,6 +1443,9 @@ def _extract_axis(data, axis=0, intersect=False):
else:
index = Index(np.arange(lengths[0]))
+ if index is None:
+ index = Index([])
+
return _ensure_index(index)
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
old mode 100644
new mode 100755
index 36e667322fa9d..1cf4e73e25b3f
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -730,8 +730,9 @@ def test_ctor_dict(self):
d = {'A' : itema, 'B' : itemb[5:]}
d2 = {'A' : itema._series, 'B' : itemb[5:]._series}
- d3 = {'A' : DataFrame(itema._series),
- 'B' : DataFrame(itemb[5:]._series)}
+ d3 = {'A' : None,
+ 'B' : DataFrame(itemb[5:]._series),
+ 'C' : DataFrame(itema._series)}
wp = Panel.from_dict(d)
wp2 = Panel.from_dict(d2) # nested Dict
@@ -748,6 +749,11 @@ def test_ctor_dict(self):
assert_panel_equal(Panel(d2), Panel.from_dict(d2))
assert_panel_equal(Panel(d3), Panel.from_dict(d3))
+ # a pathological case
+ d4 = { 'A' : None, 'B' : None }
+ wp4 = Panel.from_dict(d4)
+ assert_panel_equal(Panel(d4), Panel(items = ['A','B']))
+
# cast
dcasted = dict((k, v.reindex(wp.major_axis).fillna(0))
for k, v in d.iteritems())
| Python 2.7.3 (default, Jun 21 2012, 07:50:29)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas
>>> print pandas.__version__
0.8.1
>>> print pandas.Panel(dict(a = None, b = pandas.DataFrame(index=[1,2,3],columns=[1,2,3])))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/site-packages/pandas-0.8.1-py2.7-linux-x86_64.egg/pandas/core/panel.py", line 219, in __init__
    mgr = self._init_dict(data, passed_axes, dtype=dtype)
  File "/usr/local/lib/python2.7/site-packages/pandas-0.8.1-py2.7-linux-x86_64.egg/pandas/core/panel.py", line 256, in _init_dict
    major = _extract_axis(data, axis=0)
  File "/usr/local/lib/python2.7/site-packages/pandas-0.8.1-py2.7-linux-x86_64.egg/pandas/core/panel.py", line 1396, in _extract_axis
    raw_lengths.append(v.shape[axis])
AttributeError: 'NoneType' object has no attribute 'shape'
The same behavior exists in 0.9.
Added tests and fixes for this case (and the pathological case of constructing with dict(a=None)).
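The core of the fix can be sketched in plain Python (a simplified stand-in for `_extract_axis`; the function name and the `FrameStub` class are illustrative, not pandas API):

```python
class FrameStub(object):
    """Minimal stand-in for a DataFrame-like object with a .shape."""
    def __init__(self, shape):
        self.shape = shape

def extract_axis_length(data, axis=0):
    # The old code called v.shape[axis] unconditionally and raised
    # AttributeError on None values.  The fix: skip None entries, and
    # fall back to an empty axis (Index([]) in pandas) when every
    # value is None -- the "pathological" dict(A=None, B=None) case.
    lengths = set(v.shape[axis] for v in data.values() if v is not None)
    if not lengths:
        return 0  # stand-in for the empty Index([]) fallback
    if len(lengths) > 1:
        raise ValueError('ndarrays must match shape on axis %d' % axis)
    return lengths.pop()

print(extract_axis_length({'a': None, 'b': FrameStub((3, 4))}))  # 3
print(extract_axis_length({'A': None, 'B': None}))               # 0
```

With this guard, a dict mixing None and real frames yields the axis of the non-None frames, and an all-None dict yields an empty axis instead of raising.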
| https://api.github.com/repos/pandas-dev/pandas/pulls/2075 | 2012-10-16T02:42:36Z | 2012-11-02T20:48:38Z | null | 2014-06-18T19:13:57Z |
Fix for Python 3.3 | diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 156ccbd95fa54..a2a74d50de4e1 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -25,6 +25,7 @@ def __init__(self, values, items, ref_items, ndim=2,
assert(values.ndim == ndim)
assert(len(items) == len(values))
+ self._ref_locs = None
self.values = values
self.ndim = ndim
self.items = _ensure_index(items)
@@ -39,7 +40,6 @@ def _check_integrity(self):
# monotonicity
return (self.ref_locs[1:] > self.ref_locs[:-1]).all()
- _ref_locs = None
@property
def ref_locs(self):
if self._ref_locs is None:
@@ -479,7 +479,7 @@ class BlockManager(object):
-----
This is *not* a public API class
"""
- __slots__ = ['axes', 'blocks', 'ndim']
+ __slots__ = ['axes', 'blocks']
def __init__(self, blocks, axes, do_integrity_check=True):
self.axes = [_ensure_index(ax) for ax in axes]
 | There were two problems with a simple 'import pandas' on Python 3.3, both having to do with slots.
For the first one, I converted the class-level attribute into an instance variable set in the initializer.
For the second one, there is already a property that computes 'ndim', so I simply removed it from __slots__.
(I couldn't figure out quickly how to run the tests under Python 3; a number of problems are in the way.)
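The slots conflict behind the second fix is easy to reproduce standalone (toy class, not the pandas code): a class variable that shares its name with a slot makes class creation itself fail.

```python
def make_conflicting_class():
    # Defining a class variable (a plain value, or equally a property)
    # with the same name as a slot raises at class-creation time:
    # "ValueError: 'ndim' in __slots__ conflicts with class variable".
    class Bad(object):
        __slots__ = ['ndim']
        ndim = 2
    return Bad

try:
    make_conflicting_class()
    conflicted = False
except ValueError:
    conflicted = True

print(conflicted)  # True
```

Dropping 'ndim' from `__slots__` (and letting the property provide it) sidesteps exactly this conflict.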
| https://api.github.com/repos/pandas-dev/pandas/pulls/2066 | 2012-10-13T19:13:35Z | 2012-10-13T19:28:21Z | null | 2014-07-15T12:12:35Z |
CLN: cleanup tox.ini, remove stale pandas/setup.py, fix tox warning | diff --git a/pandas/setup.py b/pandas/setup.py
deleted file mode 100644
index f9945f0fdaab1..0000000000000
--- a/pandas/setup.py
+++ /dev/null
@@ -1,26 +0,0 @@
-#!/usr/bin/env python
-
-import numpy
-
-def configuration(parent_package='',top_path=None):
- from numpy.distutils.misc_util import Configuration
- config = Configuration('pandas', parent_package, top_path)
- config.add_subpackage('core')
- config.add_subpackage('io')
- config.add_subpackage('rpy')
- config.add_subpackage('sandbox')
- config.add_subpackage('stats')
- config.add_subpackage('util')
- config.add_data_dir('tests')
-
- config.add_extension('_tseries',
- sources=['src/tseries.c'],
- include_dirs=[numpy.get_include()])
- config.add_extension('_sparse',
- sources=['src/sparse.c'],
- include_dirs=[numpy.get_include()])
- return config
-
-if __name__ == '__main__':
- print('This is the wrong setup.py file to run')
-
diff --git a/tox.ini b/tox.ini
index 9baf33cf8d2f9..f4e03e1677344 100644
--- a/tox.ini
+++ b/tox.ini
@@ -7,18 +7,36 @@
envlist = py25, py26, py27, py31, py32
[testenv]
-commands =
- {envpython} setup.py clean build_ext install
- {envbindir}/nosetests tests
- /bin/rm -rf {toxinidir}/build {toxinidir}/tests
deps =
cython
numpy >= 1.6.1
nose
pytz
+# cd to anything but the default {toxinidir} which
+# contains the pandas subdirectory and confuses
+# nose away from the fresh install in site-packages
+changedir = {envdir}
+
+commands =
+ # TODO: --exe because of GH #761
+ {envbindir}/nosetests --exe pandas.tests
+ # cleanup the temp. build dir created by the tox build
+ /bin/rm -rf {toxinidir}/build
+
+ # quietly rollback the install.
+ # Note this line will only be reached if the tests
+ # previous lines succeed (in particular, the tests),
+ # but an uninstall is really only required when
+ # files are removed from source tree, in which case,
+ # stale versions of files will will remain in the venv,
+ # until the next time uninstall is run.
+ #
+ # tox should provide a preinstall-commands hook.
+ pip uninstall pandas -qy
+
+
[testenv:py25]
-changedir = .tox/py25/lib/python2.5/site-packages/pandas
deps =
cython
numpy >= 1.6.1
@@ -27,13 +45,9 @@ deps =
simplejson
[testenv:py26]
-changedir = .tox/py26/lib/python2.6/site-packages/pandas
[testenv:py27]
-changedir = .tox/py27/lib/python2.7/site-packages/pandas
[testenv:py31]
-changedir = .tox/py31/lib/python3.1/site-packages/pandas
[testenv:py32]
-changedir = .tox/py32/lib/python3.2/site-packages/pandas
 | I always find it hard to get packaging issues right; I'd appreciate it if someone
would back me up on pandas/setup.py being superfluous.
I made sure that setup.py install/develop still works, and that introducing
a failed test is caught by tox, as you would expect.
Also see #2029 for much faster tox runs.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2063 | 2012-10-12T17:08:38Z | 2012-10-12T23:39:58Z | 2012-10-12T23:39:58Z | 2012-10-12T23:40:06Z |
BUG: fix Series repr when name is tuple holding non string-type | diff --git a/RELEASE.rst b/RELEASE.rst
index bef84c4b02150..202e3ab9d67be 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -22,6 +22,20 @@ Where to get it
* Binary installers on PyPI: http://pypi.python.org/pypi/pandas
* Documentation: http://pandas.pydata.org
+pandas 0.10.0
+=============
+
+**Release date:** not yet released
+
+**New features**
+
+**Improvements to existing features**
+
+**Bug fixes**
+
+ - fix Series repr when name is tuple holding non string-type (#2051)
+
+
pandas 0.9.0
============
diff --git a/pandas/core/format.py b/pandas/core/format.py
index dca1976be838f..af2d95ce94360 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -8,7 +8,8 @@
except:
from io import StringIO
-from pandas.core.common import adjoin, isnull, notnull, _stringify
+from pandas.core.common import (adjoin, isnull, notnull, _stringify,
+ _stringify_seq)
from pandas.core.index import MultiIndex, _ensure_index
from pandas.util import py3compat
@@ -85,7 +86,8 @@ def _get_footer(self):
if isinstance(self.series.name, basestring):
series_name = self.series.name
elif isinstance(self.series.name, tuple):
- series_name = "('%s')" % "', '".join(self.series.name)
+ series_name = "('%s')" % "', '".join(
+ _stringify_seq(self.series.name))
else:
series_name = str(self.series.name)
else:
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 3a28401fb4f15..19a9d0fc09a63 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -1011,9 +1011,16 @@ def test_repr(self):
ots[::2] = None
repr(ots)
- # tuple name, e.g. from hierarchical index
- self.series.name = ('foo', 'bar', 'baz')
- repr(self.series)
+ # various names
+ for name in ['', 1, 1.2, 'foo', u'\u03B1\u03B2\u03B3',
+ 'loooooooooooooooooooooooooooooooooooooooooooooooooooong',
+ ('foo', 'bar', 'baz'),
+ (1, 2),
+ ('foo', 1, 2.3),
+ (u'\u03B1', u'\u03B2', u'\u03B3'),
+ (u'\u03B1', 'bar')]:
+ self.series.name = name
+ repr(self.series)
biggie = Series(tm.randn(1000), index=np.arange(1000),
name=('foo', 'bar', 'baz'))
 | Not sure if this is in line with the latest unicode work, @y-p?
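The failure mode this patch fixes can be sketched with a small helper (the function name is illustrative, and `str()` here is a simplification of the unicode-aware `_stringify` used by `_stringify_seq`):

```python
def format_series_name(name):
    # str.join requires string elements, so "', '".join((1, 2)) raises
    # TypeError -- which is what broke the old footer code for tuple
    # names holding non-strings.  Stringify each element first, as the
    # patched code does via _stringify_seq.
    if isinstance(name, tuple):
        return "('%s')" % "', '".join(str(x) for x in name)
    return str(name)

print(format_series_name(('foo', 1, 2.3)))  # ('foo', '1', '2.3')
```

Pure-string tuples render as before, and mixed or numeric tuples no longer raise.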
| https://api.github.com/repos/pandas-dev/pandas/pulls/2059 | 2012-10-11T19:07:25Z | 2012-10-12T06:56:57Z | null | 2012-10-12T23:37:59Z |
DOC: various small fixes | diff --git a/doc/source/dsintro.rst b/doc/source/dsintro.rst
index c6e919eb6096c..3c3c67092c8f1 100644
--- a/doc/source/dsintro.rst
+++ b/doc/source/dsintro.rst
@@ -26,7 +26,7 @@ objects. To get started, import numpy and load pandas into your namespace:
randn = np.random.randn
from pandas import *
-Here is a basic tenet to keep in mind: **data alignment is intrinsic**. Link
+Here is a basic tenet to keep in mind: **data alignment is intrinsic**. The link
between labels and data will not be broken unless done so explicitly by you.
We'll give a brief intro to the data structures, then consider all of the broad
diff --git a/doc/source/io.rst b/doc/source/io.rst
index 1494871b6262d..508f6076f075d 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -164,7 +164,7 @@ You can also use a list of columns to create a hierarchical index:
The ``dialect`` keyword gives greater flexibility in specifying the file format.
By default it uses the Excel dialect but you can specify either the dialect name
-or a :class:``python:csv.Dialect`` instance.
+or a :class:`python:csv.Dialect` instance.
.. ipython:: python
:suppress:
@@ -286,6 +286,13 @@ data columns:
index_col=0) #index is the nominal column
df
+**Note**: When passing a dict as the `parse_dates` argument, the order of
+the columns prepended is not guaranteed, because `dict` objects do not impose
+an ordering on their keys. On Python 2.7+ you may use `collections.OrderedDict`
+instead of a regular `dict` if this matters to you. Because of this, when using a
+dict for 'parse_dates' in conjunction with the `index_col` argument, it's best to
+specify `index_col` as a column label rather than as an index on the resulting frame.
+
Date Parsing Functions
~~~~~~~~~~~~~~~~~~~~~~
Finally, the parser allows you can specify a custom ``date_parser`` function to
diff --git a/doc/source/visualization.rst b/doc/source/visualization.rst
index 3e7fa29806091..f4cbbe7a074a7 100644
--- a/doc/source/visualization.rst
+++ b/doc/source/visualization.rst
@@ -327,6 +327,8 @@ for Fourier series. By coloring these curves differently for each class
it is possible to visualize data clustering. Curves belonging to samples
of the same class will usually be closer together and form larger structures.
+**Note**: The "Iris" dataset is available `here <https://raw.github.com/pydata/pandas/master/pandas/tests/data/iris.csv>`__.
+
.. ipython:: python
from pandas import read_csv
@@ -440,6 +442,8 @@ forces acting on our sample are at an equilibrium) is where a dot representing
our sample will be drawn. Depending on which class that sample belongs it will
be colored differently.
+**Note**: The "Iris" dataset is available `here <https://raw.github.com/pydata/pandas/master/pandas/tests/data/iris.csv>`__.
+
.. ipython:: python
from pandas import read_csv
| https://api.github.com/repos/pandas-dev/pandas/pulls/2056 | 2012-10-11T16:45:43Z | 2012-10-31T20:29:34Z | 2012-10-31T20:29:34Z | 2014-07-08T10:21:47Z | |
Uncentered | diff --git a/RELEASE.rst b/RELEASE.rst
index bef84c4b02150..db1928c176242 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -22,6 +22,16 @@ Where to get it
* Binary installers on PyPI: http://pypi.python.org/pypi/pandas
* Documentation: http://pandas.pydata.org
+pandas 0.10.0
+=============
+
+**Release date:** not yet released
+
+**New features**
+
+ - Add `center` parameter to many Series, DataFrame, and rolling moments
+
+
pandas 0.9.0
============
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 91005ead01a24..f49cfec001b61 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4045,7 +4045,7 @@ def merge(self, right, how='inner', on=None, left_on=None, right_on=None,
#----------------------------------------------------------------------
# Statistical methods, etc.
- def corr(self, method='pearson'):
+ def corr(self, method='pearson', center=True):
"""
Compute pairwise correlation of columns, excluding NA/null values
@@ -4055,6 +4055,8 @@ def corr(self, method='pearson'):
pearson : standard correlation coefficient
kendall : Kendall Tau correlation coefficient
spearman : Spearman rank correlation
+ center : bool, default True
+ Whether to center the data before computing the correlation
Returns
-------
@@ -4065,7 +4067,7 @@ def corr(self, method='pearson'):
mat = numeric_df.values
if method == 'pearson':
- correl = lib.nancorr(com._ensure_float64(mat))
+ correl = lib.nancorr(com._ensure_float64(mat), center=center)
else:
mat = mat.T
corrf = nanops.get_corr_func(method)
@@ -4078,15 +4080,15 @@ def corr(self, method='pearson'):
if not valid.any():
c = np.nan
elif not valid.all():
- c = corrf(ac[valid], bc[valid])
+ c = corrf(ac[valid], bc[valid], center=center)
else:
- c = corrf(ac, bc)
+ c = corrf(ac, bc, center=center)
correl[i, j] = c
correl[j, i] = c
return self._constructor(correl, index=cols, columns=cols)
- def cov(self):
+ def cov(self, center=True):
"""
Compute pairwise covariance of columns, excluding NA/null values
@@ -4102,13 +4104,17 @@ def cov(self):
mat = numeric_df.values
if notnull(mat).all():
- baseCov = np.cov(mat.T)
+ if center:
+ baseCov = np.cov(mat.T)
+ else:
+ baseCov = np.dot(mat, mat.T) / mat.shape[0]
else:
- baseCov = lib.nancorr(com._ensure_float64(mat), cov=True)
+ baseCov = lib.nancorr(com._ensure_float64(mat), cov=True,
+ center=center)
return self._constructor(baseCov, index=cols, columns=cols)
- def corrwith(self, other, axis=0, drop=False):
+ def corrwith(self, other, axis=0, drop=False, center=True):
"""
Compute pairwise correlation between rows or columns of two DataFrame
objects.
@@ -4120,6 +4126,8 @@ def corrwith(self, other, axis=0, drop=False):
0 to compute column-wise, 1 for row-wise
drop : boolean, default False
Drop missing indices from result, default returns union of all
+ center : boolean, default True
+ Whether to center the data before computing the correlation
Returns
-------
@@ -4142,11 +4150,13 @@ def corrwith(self, other, axis=0, drop=False):
right = right.T
# demeaned data
- ldem = left - left.mean()
- rdem = right - right.mean()
+ ldem, rdem = left, right
+ if center:
+ ldem -= left.mean()
+ rdem -= right.mean()
num = (ldem * rdem).sum()
- dom = (left.count() - 1) * left.std() * right.std()
+ dom = np.sqrt((ldem**2).sum() * (rdem**2).sum())
correl = num / dom
@@ -4416,12 +4426,12 @@ def mad(self, axis=0, skipna=True, level=None):
"""
Normalized by N-1 (unbiased estimator).
""")
- def var(self, axis=0, skipna=True, level=None, ddof=1):
+ def var(self, axis=0, skipna=True, level=None, ddof=1, center=True):
if level is not None:
return self._agg_by_level('var', axis=axis, level=level,
- skipna=skipna, ddof=ddof)
+ skipna=skipna, ddof=ddof, center=center)
return self._reduce(nanops.nanvar, axis=axis, skipna=skipna,
- numeric_only=None, ddof=ddof)
+ numeric_only=None, ddof=ddof, center=center)
@Substitution(name='standard deviation', shortname='std',
na_action=_doc_exclude_na, extras='')
@@ -4429,11 +4439,12 @@ def var(self, axis=0, skipna=True, level=None, ddof=1):
"""
Normalized by N-1 (unbiased estimator).
""")
- def std(self, axis=0, skipna=True, level=None, ddof=1):
+ def std(self, axis=0, skipna=True, level=None, ddof=1, center=True):
if level is not None:
return self._agg_by_level('std', axis=axis, level=level,
- skipna=skipna, ddof=ddof)
- return np.sqrt(self.var(axis=axis, skipna=skipna, ddof=ddof))
+ skipna=skipna, ddof=ddof, center=center)
+ return np.sqrt(self.var(axis=axis, skipna=skipna, ddof=ddof,
+ center=center))
@Substitution(name='unbiased skewness', shortname='skew',
na_action=_doc_exclude_na, extras='')
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index f0f6f7b2a8c63..6709204644331 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -315,29 +315,29 @@ def median(self):
f = lambda x: x.median(axis=self.axis)
return self._python_agg_general(f)
- def std(self, ddof=1):
+ def std(self, ddof=1, center=True):
"""
Compute standard deviation of groups, excluding missing values
For multiple groupings, the result index will be a MultiIndex
"""
# todo, implement at cython level?
- if ddof == 1:
+ if ddof == 1 and center:
return self._cython_agg_general('std')
else:
- f = lambda x: x.std(ddof=ddof)
+ f = lambda x: x.std(ddof=ddof, center=center)
return self._python_agg_general(f)
- def var(self, ddof=1):
+ def var(self, ddof=1, center=True):
"""
Compute variance of groups, excluding missing values
For multiple groupings, the result index will be a MultiIndex
"""
- if ddof == 1:
+ if ddof == 1 and center:
return self._cython_agg_general('var')
else:
- f = lambda x: x.var(ddof=ddof)
+ f = lambda x: x.var(ddof=ddof, center=center)
return self._python_agg_general(f)
def size(self):
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index c0f2ba7654e80..66b0e7fe94ab7 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -13,15 +13,22 @@
_USE_BOTTLENECK = False
def _bottleneck_switch(bn_name, alt, zero_value=None, **kwargs):
- try:
- bn_func = getattr(bn, bn_name)
- except (AttributeError, NameError): # pragma: no cover
- bn_func = None
+ center = kwargs.pop('center', True)
+ bn_func = None
+ if not center:
+ try:
+ bn_func = getattr(bn, bn_name)
+ except (AttributeError, NameError): # pragma: no cover
+ pass
+
def f(values, axis=None, skipna=True, **kwds):
if len(kwargs) > 0:
for k, v in kwargs.iteritems():
if k not in kwds:
kwds[k] = v
+ if not center:
+ kwds['center'] = center
+
try:
if zero_value is not None and values.size == 0:
if values.ndim == 1:
@@ -119,13 +126,13 @@ def get_median(x):
else:
return get_median(values)
-def _nanvar(values, axis=None, skipna=True, ddof=1):
+def _nanvar(values, axis=None, skipna=True, ddof=1, center=True):
mask = isnull(values)
if axis is not None:
count = (values.shape[axis] - mask.sum(axis)).astype(float)
else:
- count = float(values.size - mask.sum())
+ count = np.float64(values.size - mask.sum())
if skipna:
values = values.copy()
@@ -133,7 +140,9 @@ def _nanvar(values, axis=None, skipna=True, ddof=1):
X = _ensure_numeric(values.sum(axis))
XX = _ensure_numeric((values ** 2).sum(axis))
- return np.fabs((XX - X ** 2 / count) / (count - ddof))
+ if center:
+ return np.fabs((XX - X ** 2 / count) / (count - ddof))
+ return np.fabs(XX / (count - ddof)) * (count / count)
def _nanmin(values, axis=None, skipna=True):
mask = isnull(values)
@@ -334,7 +343,7 @@ def _zero_out_fperr(arg):
else:
return 0 if np.abs(arg) < 1e-14 else arg
-def nancorr(a, b, method='pearson'):
+def nancorr(a, b, method='pearson', center=True):
"""
a, b: ndarrays
"""
@@ -349,20 +358,26 @@ def nancorr(a, b, method='pearson'):
return np.nan
f = get_corr_func(method)
- return f(a, b)
+ return f(a, b, center)
def get_corr_func(method):
if method in ['kendall', 'spearman']:
from scipy.stats import kendalltau, spearmanr
- def _pearson(a, b):
+ def _pearson(a, b, center):
+ if not center:
+ return (a*b).sum() / np.sqrt((a**2).sum() * (b**2).sum())
return np.corrcoef(a, b)[0, 1]
- def _kendall(a, b):
+ def _kendall(a, b, center):
+ if not center:
+ raise NotImplementedError()
rs = kendalltau(a, b)
if isinstance(rs, tuple):
return rs[0]
return rs
- def _spearman(a, b):
+ def _spearman(a, b, center):
+ if not center:
+ raise NotImplementedError()
return spearmanr(a, b)[0]
_cor_methods = {
@@ -372,7 +387,7 @@ def _spearman(a, b):
}
return _cor_methods[method]
-def nancov(a, b):
+def nancov(a, b, center=True):
assert(len(a) == len(b))
valid = notnull(a) & notnull(b)
@@ -383,7 +398,9 @@ def nancov(a, b):
if len(a) == 0:
return np.nan
- return np.cov(a, b)[0, 1]
+ if center:
+ return np.cov(a, b)[0, 1]
+ return (a*b).sum() / (len(a) - 1)
def _ensure_numeric(x):
if isinstance(x, np.ndarray):
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 7400aa5bde2e7..450db29f01a5e 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1151,29 +1151,31 @@ def max(self, axis=None, out=None, skipna=True, level=None):
@Substitution(name='standard deviation', shortname='stdev',
na_action=_doc_exclude_na, extras='')
- @Appender(_stat_doc +
+ @Appender(_stat_doc +
"""
Normalized by N-1 (unbiased estimator).
""")
def std(self, axis=None, dtype=None, out=None, ddof=1, skipna=True,
- level=None):
+ level=None, center=True):
if level is not None:
return self._agg_by_level('std', level=level, skipna=skipna,
- ddof=ddof)
- return np.sqrt(nanops.nanvar(self.values, skipna=skipna, ddof=ddof))
+ ddof=ddof, center=center)
+ return np.sqrt(nanops.nanvar(self.values, skipna=skipna, ddof=ddof,
+ center=center))
@Substitution(name='variance', shortname='var',
na_action=_doc_exclude_na, extras='')
- @Appender(_stat_doc +
+ @Appender(_stat_doc +
"""
Normalized by N-1 (unbiased estimator).
""")
def var(self, axis=None, dtype=None, out=None, ddof=1, skipna=True,
- level=None):
+ level=None, center=True):
if level is not None:
return self._agg_by_level('var', level=level, skipna=skipna,
- ddof=ddof)
- return nanops.nanvar(self.values, skipna=skipna, ddof=ddof)
+ ddof=ddof, center=center)
+ return nanops.nanvar(self.values, skipna=skipna, ddof=ddof,
+ center=center)
@Substitution(name='unbiased skewness', shortname='skew',
na_action=_doc_exclude_na, extras='')
@@ -1450,7 +1452,7 @@ def pretty_name(x):
return Series(data, index=names)
- def corr(self, other, method='pearson'):
+ def corr(self, other, method='pearson', center=True):
"""
Compute correlation two Series, excluding missing values
@@ -1461,21 +1463,26 @@ def corr(self, other, method='pearson'):
pearson : standard correlation coefficient
kendall : Kendall Tau correlation coefficient
spearman : Spearman rank correlation
+ center : boolean, default True
+ Whether to subtract the mean before computing cov and corr
Returns
-------
correlation : float
"""
this, other = self.align(other, join='inner', copy=False)
- return nanops.nancorr(this.values, other.values, method=method)
+ return nanops.nancorr(this.values, other.values, method=method,
+ center=center)
- def cov(self, other):
+ def cov(self, other, center=True):
"""
Compute covariance with Series, excluding missing values
Parameters
----------
other : Series
+ center : boolean, default True
+ Whether to subtract the mean before computing the cov
Returns
-------
@@ -1486,7 +1493,7 @@ def cov(self, other):
this, other = self.align(other, join='inner')
if len(this) == 0:
return np.nan
- return nanops.nancov(this.values, other.values)
+ return nanops.nancov(this.values, other.values, center=center)
def diff(self, periods=1):
"""
@@ -1503,15 +1510,20 @@ def diff(self, periods=1):
"""
return (self - self.shift(periods))
- def autocorr(self):
+ def autocorr(self, center=True):
"""
Lag-1 autocorrelation
+ Parameters
+ ----------
+ center : boolean, default True
+ Whether to subtract the mean before computing autocorr
+
Returns
-------
autocorr : float
"""
- return self.corr(self.shift(1))
+ return self.corr(self.shift(1), center=center)
def clip(self, lower=None, upper=None, out=None):
"""
diff --git a/pandas/src/moments.pyx b/pandas/src/moments.pyx
index 503a63c9dfec3..2316a6993cc39 100644
--- a/pandas/src/moments.pyx
+++ b/pandas/src/moments.pyx
@@ -294,7 +294,7 @@ def ewma(ndarray[double_t] input, double_t com, int adjust):
@cython.boundscheck(False)
@cython.wraparound(False)
-def nancorr(ndarray[float64_t, ndim=2] mat, cov=False):
+def nancorr(ndarray[float64_t, ndim=2] mat, cov=False, center=True):
cdef:
Py_ssize_t i, j, xi, yi, N, K
ndarray[float64_t, ndim=2] result
@@ -329,8 +329,11 @@ def nancorr(ndarray[float64_t, ndim=2] mat, cov=False):
for i in range(N):
if mask[i, xi] and mask[i, yi]:
- vx = mat[i, xi] - meanx
- vy = mat[i, yi] - meany
+ vx = mat[i, xi]
+ vy = mat[i, yi]
+ if center:
+ vx -= meanx
+ vy -= meany
sumx += vx * vy
sumxx += vx * vx
@@ -348,7 +351,8 @@ def nancorr(ndarray[float64_t, ndim=2] mat, cov=False):
#----------------------------------------------------------------------
# Rolling variance
-def roll_var(ndarray[double_t] input, int win, int minp, int ddof=1):
+def roll_var(ndarray[double_t] input, int win, int minp, int ddof=1, bint
+ center=1):
cdef double val, prev, sum_x = 0, sum_xx = 0, nobs = 0
cdef Py_ssize_t i
cdef Py_ssize_t N = len(input)
@@ -389,7 +393,11 @@ def roll_var(ndarray[double_t] input, int win, int minp, int ddof=1):
output[i] = 0
continue
- output[i] = (nobs * sum_xx - sum_x * sum_x) / (nobs * (nobs - ddof))
+ if center != 1:
+ output[i] = sum_xx / (nobs - ddof)
+ else:
+ output[i] = ((nobs * sum_xx - sum_x * sum_x) /
+ (nobs * (nobs - ddof)))
else:
output[i] = NaN
diff --git a/pandas/stats/moments.py b/pandas/stats/moments.py
index cfdc4aa8a23ab..909cf331da715 100644
--- a/pandas/stats/moments.py
+++ b/pandas/stats/moments.py
@@ -153,21 +153,25 @@ def rolling_count(arg, window, freq=None, time_rule=None):
@Substitution("Unbiased moving covariance", _binary_arg_flex, _flex_retval)
@Appender(_doc_template)
-def rolling_cov(arg1, arg2, window, min_periods=None, time_rule=None):
+def rolling_cov(arg1, arg2, window, min_periods=None, time_rule=None,
+ center=True):
def _get_cov(X, Y):
mean = lambda x: rolling_mean(x, window, min_periods, time_rule)
count = rolling_count(X + Y, window, time_rule)
bias_adj = count / (count - 1)
+ if not center:
+ return mean(X * Y) * bias_adj
return (mean(X * Y) - mean(X) * mean(Y)) * bias_adj
return _flex_binary_moment(arg1, arg2, _get_cov)
@Substitution("Moving sample correlation", _binary_arg_flex, _flex_retval)
@Appender(_doc_template)
-def rolling_corr(arg1, arg2, window, min_periods=None, time_rule=None):
+def rolling_corr(arg1, arg2, window, min_periods=None, time_rule=None,
+ center=True):
def _get_corr(a, b):
- num = rolling_cov(a, b, window, min_periods, time_rule)
- den = (rolling_std(a, window, min_periods, time_rule) *
- rolling_std(b, window, min_periods, time_rule))
+ num = rolling_cov(a, b, window, min_periods, time_rule, center=center)
+ den = (rolling_std(a, window, min_periods, time_rule, center=center) *
+ rolling_std(b, window, min_periods, time_rule, center=center))
return num / den
return _flex_binary_moment(arg1, arg2, _get_corr)
@@ -197,7 +201,8 @@ def _flex_binary_moment(arg1, arg2, f):
else:
return _flex_binary_moment(arg2, arg1, f)
-def rolling_corr_pairwise(df, window, min_periods=None):
+def rolling_corr_pairwise(df, window, min_periods=None, time_rule=None,
+ center=True):
"""
Computes pairwise rolling correlation matrices as Panel whose items are
dates
@@ -220,7 +225,8 @@ def rolling_corr_pairwise(df, window, min_periods=None):
for i, k1 in enumerate(df.columns):
for k2 in df.columns[i:]:
corr = rolling_corr(df[k1], df[k2], window,
- min_periods=min_periods)
+ min_periods=min_periods, time_rule=time_rule,
+ center=center)
all_results[k1][k2] = corr
all_results[k2][k1] = corr
@@ -481,7 +487,7 @@ def call_cython(arg, window, minp):
freq=freq, time_rule=time_rule)
def rolling_apply(arg, window, func, min_periods=None, freq=None,
- time_rule=None):
+ time_rule=None, **kwargs):
"""Generic moving function application
Parameters
@@ -501,9 +507,10 @@ def rolling_apply(arg, window, func, min_periods=None, freq=None,
"""
def call_cython(arg, window, minp):
minp = _use_window(minp, window)
- return lib.roll_generic(arg, window, minp, func)
+ f = lambda x: func(x, **kwargs)
+ return lib.roll_generic(arg, window, minp, f)
return _rolling_moment(arg, window, call_cython, min_periods,
- freq=freq, time_rule=time_rule)
+ freq=freq, time_rule=time_rule, **kwargs)
def _expanding_func(func, desc, check_minp=_use_window):
diff --git a/pandas/stats/tests/test_moments.py b/pandas/stats/tests/test_moments.py
index 05cd3227ad4e4..4aa8cfc2009c0 100644
--- a/pandas/stats/tests/test_moments.py
+++ b/pandas/stats/tests/test_moments.py
@@ -54,7 +54,8 @@ def test_rolling_min(self):
b = mom.rolling_min(a, window=100, min_periods=1)
assert_almost_equal(b, np.ones(len(a)))
- self.assertRaises(ValueError, mom.rolling_min, np.array([1,2,3]), window=3, min_periods=5)
+ self.assertRaises(ValueError, mom.rolling_min, np.array([1,2,3]),
+ window=3, min_periods=5)
def test_rolling_max(self):
self._check_moment_func(mom.rolling_max, np.max)
@@ -63,7 +64,7 @@ def test_rolling_max(self):
b = mom.rolling_max(a, window=100, min_periods=1)
assert_almost_equal(a, b)
- self.assertRaises(ValueError, mom.rolling_max, np.array([1,2,3]), window=3, min_periods=5)
+ self.assertRaises(ValueError, mom.rolling_max, np.array([1,2,3]), window=3, min_periods=5)
def test_rolling_quantile(self):
qs = [.1, .5, .9]
@@ -111,6 +112,9 @@ def test_rolling_std(self):
lambda x: np.std(x, ddof=1))
self._check_moment_func(functools.partial(mom.rolling_std, ddof=0),
lambda x: np.std(x, ddof=0))
+ self._check_moment_func(functools.partial(mom.rolling_std,
+ center=False),
+ lambda x: np.sqrt((x**2).sum() / (len(x) - 1)))
def test_rolling_std_1obs(self):
result = mom.rolling_std(np.array([1.,2.,3.,4.,5.]),
@@ -144,6 +148,9 @@ def test_rolling_var(self):
lambda x: np.var(x, ddof=1))
self._check_moment_func(functools.partial(mom.rolling_var, ddof=0),
lambda x: np.var(x, ddof=0))
+ self._check_moment_func(functools.partial(mom.rolling_var,
+ center=False),
+ lambda x: (x**2).sum() / (len(x)-1))
def test_rolling_skew(self):
try:
@@ -335,6 +342,9 @@ def test_rolling_cov(self):
result = mom.rolling_cov(A, B, 50, min_periods=25)
assert_almost_equal(result[-1], np.cov(A[-50:], B[-50:])[0, 1])
+ result = mom.rolling_cov(A, B, 50, min_periods=25, center=False)
+ assert_almost_equal(result[-1], (A[-50:] * B[-50:]).sum() / 49)
+
def test_rolling_corr(self):
A = self.series
B = A + randn(len(A))
@@ -351,6 +361,10 @@ def test_rolling_corr(self):
result = mom.rolling_corr(a, b, len(a), min_periods=1)
assert_almost_equal(result[-1], a.corr(b))
+ result = mom.rolling_corr(A, B, 50, min_periods=25, center=False)
+ assert_almost_equal(result[-1], (A[-50:] * B[-50:]).sum() /
+ np.sqrt(((A[-50:]**2).sum() * (B[-50:]**2).sum())))
+
def test_rolling_corr_pairwise(self):
panel = mom.rolling_corr_pairwise(self.frame, 10, min_periods=5)
@@ -359,6 +373,15 @@ def test_rolling_corr_pairwise(self):
10, min_periods=5)
tm.assert_series_equal(correl, exp)
+ panel = mom.rolling_corr_pairwise(self.frame, 10, min_periods=5,
+ center=False)
+
+ correl = panel.ix[:, 1, 5]
+ exp = mom.rolling_corr(self.frame[1], self.frame[5],
+ 10, min_periods=5, center=False)
+ tm.assert_series_equal(correl, exp)
+
+
def test_flex_binary_frame(self):
def _check(method):
series = self.frame[1]
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index cf37de4294f3e..63f5a863f7031 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -3935,6 +3935,7 @@ def test_copy(self):
def test_corr(self):
_skip_if_no_scipy()
+ frame = self.frame.copy()
self.frame['A'][:5] = nan
self.frame['B'][:10] = nan
@@ -3947,6 +3948,13 @@ def _check_method(method='pearson'):
_check_method('kendall')
_check_method('spearman')
+ correls = self.frame.corr(center=False)
+ xp = self.frame.A.corr(self.frame.C, center=False)
+ assert_almost_equal(correls.ix['C', 'A'], xp)
+
+ demeaned = frame - frame.mean()
+ assert_frame_equal(demeaned.corr(), demeaned.corr(center=False))
+
# exclude non-numeric types
result = self.mixed_frame.corr()
expected = self.mixed_frame.ix[:, ['A', 'B', 'C', 'D']].corr()
@@ -3985,6 +3993,13 @@ def test_cov(self):
assert_almost_equal(cov['A']['C'],
self.frame['A'].cov(self.frame['C']))
+ raw_cov = self.frame.cov(center=False)
+ assert_almost_equal(raw_cov.ix['C', 'A'],
+ self.frame.A.cov(self.frame.C, center=False))
+
+ demeaned = self.frame - self.frame.mean()
+ assert_frame_equal(demeaned.cov(), demeaned.cov(center=False))
+
# exclude non-numeric types
result = self.mixed_frame.cov()
expected = self.mixed_frame.ix[:, ['A', 'B', 'C', 'D']].cov()
@@ -6091,14 +6106,32 @@ def test_var_std(self):
alt = lambda x: np.std(x, ddof=1)
self._check_stat_op('std', alt)
+ alt = lambda x: (x**2).sum() / (len(x) - 1)
+ self._check_stat_op('var', alt, center=False)
+
+ df = self.tsframe - self.tsframe.mean()
+ assert_almost_equal(df.var(), df.var(center=False))
+ assert_almost_equal(df.std(), df.std(center=False))
+
+ alt = lambda x: np.sqrt((x**2).sum() / (len(x) - 1))
+ self._check_stat_op('std', alt, center=False)
+
result = self.tsframe.std(ddof=4)
expected = self.tsframe.apply(lambda x: x.std(ddof=4))
assert_almost_equal(result, expected)
+ rs = self.tsframe.std(ddof=4, center=False)
+ xp = self.tsframe.apply(lambda x: x.std(ddof=4, center=False))
+ assert_almost_equal(rs, xp)
+
result = self.tsframe.var(ddof=4)
expected = self.tsframe.apply(lambda x: x.var(ddof=4))
assert_almost_equal(result, expected)
+ rs = self.tsframe.var(ddof=4, center=False)
+ xp = self.tsframe.apply(lambda x: x.var(ddof=4, center=False))
+ assert_almost_equal(rs, xp)
+
arr = np.repeat(np.random.random((1, 1000)), 1000, 0)
result = nanops.nanvar(arr, axis=0)
self.assertFalse((result < 0).any())
@@ -6108,6 +6141,9 @@ def test_var_std(self):
self.assertFalse((result < 0).any())
nanops._USE_BOTTLENECK = True
+ rs = nanops.nanvar(arr, axis=0, center=False)
+ self.assertFalse((result < 0).any())
+
def test_skew(self):
_skip_if_no_scipy()
from scipy.stats import skew
@@ -6139,7 +6175,7 @@ def alt(x):
assert_series_equal(df.kurt(), df.kurt(level=0).xs('bar'))
def _check_stat_op(self, name, alternative, frame=None, has_skipna=True,
- has_numeric_only=False):
+ has_numeric_only=False, **kwargs):
if frame is None:
frame = self.frame
# set some NAs
@@ -6158,8 +6194,8 @@ def skipna_wrapper(x):
def wrapper(x):
return alternative(x.values)
- result0 = f(axis=0, skipna=False)
- result1 = f(axis=1, skipna=False)
+ result0 = f(axis=0, skipna=False, **kwargs)
+ result1 = f(axis=1, skipna=False, **kwargs)
assert_series_equal(result0, frame.apply(wrapper))
assert_series_equal(result1, frame.apply(wrapper, axis=1),
check_dtype=False) # HACK: win32
@@ -6167,8 +6203,8 @@ def wrapper(x):
skipna_wrapper = alternative
wrapper = alternative
- result0 = f(axis=0)
- result1 = f(axis=1)
+ result0 = f(axis=0, **kwargs)
+ result1 = f(axis=1, **kwargs)
assert_series_equal(result0, frame.apply(skipna_wrapper))
assert_series_equal(result1, frame.apply(skipna_wrapper, axis=1),
check_dtype=False)
@@ -6184,16 +6220,16 @@ def wrapper(x):
getattr(self.mixed_frame, name)(axis=1)
if has_numeric_only:
- getattr(self.mixed_frame, name)(axis=0, numeric_only=True)
- getattr(self.mixed_frame, name)(axis=1, numeric_only=True)
- getattr(self.frame, name)(axis=0, numeric_only=False)
- getattr(self.frame, name)(axis=1, numeric_only=False)
+ getattr(self.mixed_frame, name)(axis=0, numeric_only=True, **kwargs)
+ getattr(self.mixed_frame, name)(axis=1, numeric_only=True, **kwargs)
+ getattr(self.frame, name)(axis=0, numeric_only=False, **kwargs)
+ getattr(self.frame, name)(axis=1, numeric_only=False, **kwargs)
# all NA case
if has_skipna:
all_na = self.frame * np.NaN
- r0 = getattr(all_na, name)(axis=0)
- r1 = getattr(all_na, name)(axis=1)
+ r0 = getattr(all_na, name)(axis=0, **kwargs)
+ r1 = getattr(all_na, name)(axis=1, **kwargs)
self.assert_(np.isnan(r0).all())
self.assert_(np.isnan(r1).all())
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 3a28401fb4f15..047f88d457d2b 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -4,6 +4,7 @@
import os
import operator
import unittest
+import functools
import nose
@@ -1136,6 +1137,25 @@ def test_var_std(self):
expected = np.var(self.ts.values, ddof=4)
assert_almost_equal(result, expected)
+ #uncentered
+ alt = lambda x: np.sqrt((x**2).sum() / (len(x) - 1))
+ self._check_stat_op('std', alt, center=False)
+
+ alt = lambda x: (x**2).sum() / (len(x) - 1)
+ self._check_stat_op('var', alt, center=False)
+
+ rs = self.ts.std(ddof=4, center=False)
+ xp = np.sqrt((self.ts**2).sum() / (self.ts.count() - 4))
+ assert_almost_equal(rs, xp)
+
+ rs = self.ts.var(ddof=4, center=False)
+ xp = (self.ts**2).sum() / (self.ts.count() - 4)
+ assert_almost_equal(rs, xp)
+
+ demeaned = self.ts - self.ts.mean()
+ assert_almost_equal(demeaned.std(), demeaned.std(center=False))
+ assert_almost_equal(demeaned.var(), demeaned.var(center=False))
+
def test_skew(self):
_skip_if_no_scipy()
@@ -1205,11 +1225,11 @@ def test_npdiff(self):
r = np.diff(s)
assert_series_equal(Series([nan, 0, 0, 0, nan]), r)
- def _check_stat_op(self, name, alternate, check_objects=False):
+ def _check_stat_op(self, name, alternate, check_objects=False, **kwargs):
import pandas.core.nanops as nanops
def testit():
- f = getattr(Series, name)
+ f = functools.partial(getattr(Series, name), **kwargs)
# add some NaNs
self.series[5:15] = np.NaN
@@ -1792,6 +1812,18 @@ def test_corr(self):
expected, _ = stats.pearsonr(A, B)
self.assertAlmostEqual(result, expected)
+ #uncentered
+ self.assertAlmostEqual(self.ts.corr(self.ts, center=False), 1)
+ self.assertAlmostEqual(self.ts[:15].corr(self.ts[5:], center=False), 1)
+ self.assert_(np.isnan(self.ts[::2].corr(self.ts[1::2], center=False)))
+ self.assert_(isnull(cp.corr(cp)))
+
+ s1 = Series(np.random.randn(10))
+ s2 = Series(np.random.randn(10))
+ s1 -= s1.mean()
+ s2 -= s2.mean()
+ self.assertAlmostEqual(s1.corr(s2), s1.corr(s2, center=False))
+
def test_corr_rank(self):
_skip_if_no_scipy()
@@ -1831,7 +1863,8 @@ def test_cov(self):
self.assertAlmostEqual(self.ts.cov(self.ts), self.ts.std()**2)
# partial overlap
- self.assertAlmostEqual(self.ts[:15].cov(self.ts[5:]), self.ts[5:15].std()**2)
+ self.assertAlmostEqual(self.ts[:15].cov(self.ts[5:]),
+ self.ts[5:15].std()**2)
# No overlap
self.assert_(np.isnan(self.ts[::2].cov(self.ts[1::2])))
@@ -1841,6 +1874,20 @@ def test_cov(self):
cp[:] = np.nan
self.assert_(isnull(cp.cov(cp)))
+ #uncentered
+ self.assertAlmostEqual(self.ts.cov(self.ts, center=False),
+ self.ts.std(center=False)**2)
+
+ part1 = self.ts[:15]
+ part2 = self.ts[5:]
+ mid = self.ts[5:15]
+ self.assertAlmostEqual(part1.cov(part2, center=False),
+ mid.std(center=False)**2)
+
+ demeaned = self.ts - self.ts.mean()
+ assert_almost_equal(demeaned.cov(demeaned, center=False),
+ demeaned.cov(demeaned))
+
def test_copy(self):
ts = self.ts.copy()
| @wesm can I get a second pair of eyes to skim over this PR real quick? All tests pass but touches a lot of core modules.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2054 | 2012-10-10T20:25:12Z | 2014-02-16T21:55:28Z | null | 2022-10-13T00:14:40Z |
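The Cython change in `roll_var` above toggles between two variance formulas depending on `center`. The following is a minimal NumPy sketch of those two formulas (the function names are illustrative, not part of pandas); for already-demeaned data the two agree, which is exactly what the new tests assert via `demeaned.var() == demeaned.var(center=False)`:

```python
import numpy as np

def var_uncentered(x, ddof=1):
    # Raw ("uncentered") variance: sum of squares without subtracting
    # the mean, matching the center=False branch added in the diff.
    x = np.asarray(x, dtype=float)
    return (x ** 2).sum() / (len(x) - ddof)

def var_centered(x, ddof=1):
    # Standard sample variance, equivalent to the center=True branch:
    # (n * sum(x^2) - sum(x)^2) / (n * (n - ddof))
    x = np.asarray(x, dtype=float)
    n = len(x)
    return (n * (x ** 2).sum() - x.sum() ** 2) / (n * (n - ddof))

data = np.array([1.0, 2.0, 4.0, 7.0])
demeaned = data - data.mean()
```

For demeaned input `sum(x)` is zero, so the centered expression collapses to the uncentered one.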
Added hourofyear for pivoting calculations
index be7193418e82f..92131dc632b66 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -1150,6 +1150,7 @@ def freqstr(self):
dayofweek = _field_accessor('dayofweek', 'dow')
weekday = dayofweek
dayofyear = _field_accessor('dayofyear', 'doy')
+ hourofyear = _field_accessor('hourofyear', 'hoy')
quarter = _field_accessor('quarter', 'q')
def normalize(self):
| I do not know if this is possible. But very much needed.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2052 | 2012-10-10T15:37:19Z | 2013-07-29T05:17:57Z | null | 2022-10-13T00:14:40Z |
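The one-line accessor above points at a `'hoy'` field that does not exist in the Cython field-accessor layer, which is presumably why this PR never landed. As a sketch under an assumed definition (hours elapsed since the start of the year, so Jan 1 00:00 maps to 0), the same quantity can be derived from the accessors that do exist:

```python
import pandas as pd

def hour_of_year(index):
    # Hypothetical helper, not a pandas API: combine the existing
    # dayofyear and hour accessors into an hour-of-year value.
    return (index.dayofyear - 1) * 24 + index.hour

idx = pd.date_range('2012-01-01', periods=3, freq='h')
hoy = hour_of_year(idx)
```

The resulting integer array can then be used as a pivot key just like `dayofyear`.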
To csv infs | diff --git a/doc/source/missing_data.rst b/doc/source/missing_data.rst
index a502d5c78a9ae..f514139a9170f 100644
--- a/doc/source/missing_data.rst
+++ b/doc/source/missing_data.rst
@@ -57,11 +57,11 @@ to handling missing data. While ``NaN`` is the default missing value marker for
reasons of computational speed and convenience, we need to be able to easily
detect this value with data of different types: floating point, integer,
boolean, and general object. In many cases, however, the Python ``None`` will
-arise and we wish to also consider that "missing" or "null". Lastly, for legacy
-reasons ``inf`` and ``-inf`` are also considered to be "null" in
-computations. Since in NumPy divide-by-zero generates ``inf`` or ``-inf`` and
-not ``NaN``, I think you will find this is a worthwhile trade-off (Zen of
-Python: "practicality beats purity").
+arise and we wish to also consider that "missing" or "null".
+
+Until recently, for legacy reasons ``inf`` and ``-inf`` were also
+considered to be "null" in computations. This is no longer the case by
+default; use the :func: `~pandas.core.common.use_inf_as_null` function to recover it.
.. _missing.isnull:
@@ -76,8 +76,9 @@ pandas provides the :func:`~pandas.core.common.isnull` and
isnull(df2['one'])
df2['four'].notnull()
-**Summary:** ``NaN``, ``inf``, ``-inf``, and ``None`` (in object arrays) are
-all considered missing by the ``isnull`` and ``notnull`` functions.
+**Summary:** ``NaN`` and ``None`` (in object arrays) are considered
+missing by the ``isnull`` and ``notnull`` functions. ``inf`` and
+``-inf`` are no longer considered missing by default.
Calculations with missing data
------------------------------
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 668017c29c6ab..bfd8c6348d59a 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -65,6 +65,58 @@ def isnull(obj):
return _isnull_ndarraylike(obj)
else:
return obj is None
+isnull_new = isnull
+
+def isnull_old(obj):
+ '''
+ Replacement for numpy.isnan / -numpy.isfinite which is suitable
+ for use on object arrays. Treat None, NaN, INF, -INF as null.
+
+ Parameters
+ ----------
+ arr: ndarray or object value
+
+ Returns
+ -------
+ boolean ndarray or boolean
+ '''
+ if lib.isscalar(obj):
+ return lib.checknull_old(obj)
+
+ from pandas.core.generic import PandasObject
+ if isinstance(obj, np.ndarray):
+ return _isnull_ndarraylike_old(obj)
+ elif isinstance(obj, PandasObject):
+ # TODO: optimize for DataFrame, etc.
+ return obj.apply(isnull_old)
+ elif isinstance(obj, list) or hasattr(obj, '__array__'):
+ return _isnull_ndarraylike_old(obj)
+ else:
+ return obj is None
+
+def use_inf_as_null(flag):
+ '''
+ Choose which replacement for numpy.isnan / -numpy.isfinite is used.
+
+ Parameters
+ ----------
+ flag: bool
+ True means treat None, NaN, INF, -INF as null (old way),
+ False means None and NaN are null, but INF, -INF are not null
+ (new way).
+
+ Notes
+ -----
+ This approach to setting global module values is discussed and
+ approved here:
+
+ * http://stackoverflow.com/questions/4859217/programmatically-creating-variables-in-python/4859312#4859312
+ '''
+ if flag == True:
+ globals()['isnull'] = isnull_old
+ else:
+ globals()['isnull'] = isnull_new
+
def _isnull_ndarraylike(obj):
from pandas import Series
@@ -90,6 +142,30 @@ def _isnull_ndarraylike(obj):
result = -np.isfinite(obj)
return result
+def _isnull_ndarraylike_old(obj):
+ from pandas import Series
+ values = np.asarray(obj)
+
+ if values.dtype.kind in ('O', 'S', 'U'):
+ # Working around NumPy ticket 1542
+ shape = values.shape
+
+ if values.dtype.kind in ('S', 'U'):
+ result = np.zeros(values.shape, dtype=bool)
+ else:
+ result = np.empty(shape, dtype=bool)
+ vec = lib.isnullobj_old(values.ravel())
+ result[:] = vec.reshape(shape)
+
+ if isinstance(obj, Series):
+ result = Series(result, index=obj.index, copy=False)
+ elif values.dtype == np.dtype('M8[ns]'):
+ # this is the NaT pattern
+ result = values.view('i8') == lib.iNaT
+ else:
+ result = -np.isfinite(obj)
+ return result
+
def notnull(obj):
'''
Replacement for numpy.isfinite / -numpy.isnan which is suitable
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 91005ead01a24..40e959e9d81b6 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1126,6 +1126,8 @@ def to_csv(self, path_or_buf, sep=",", na_rep='', float_format=None,
----------
path_or_buf : string or file handle / StringIO
File path
+ sep : character, default ","
+ Field delimiter for the output file.
na_rep : string, default ''
Missing data representation
float_format : string, default None
@@ -1143,12 +1145,13 @@ def to_csv(self, path_or_buf, sep=",", na_rep='', float_format=None,
sequence should be given if the DataFrame uses MultiIndex. If
False do not print fields for index names. Use index_label=False
for easier importing in R
+ nanRep : deprecated, use na_rep
mode : Python write mode, default 'w'
- sep : character, default ","
- Field delimiter for the output file.
encoding : string, optional
a string representing the encoding to use if the contents are
non-ascii, for python versions prior to 3
+ quoting : optional constant from csv module
+ defaults to csv.QUOTE_MINIMAL
"""
if nanRep is not None: # pragma: no cover
import warnings
diff --git a/pandas/src/inference.pyx b/pandas/src/inference.pyx
index 2f699c2871cf4..2ccb09a7a123e 100644
--- a/pandas/src/inference.pyx
+++ b/pandas/src/inference.pyx
@@ -359,6 +359,8 @@ def maybe_convert_numeric(ndarray[object] values, set na_values,
if not seen_float:
if '.' in val:
seen_float = 1
+ elif 'inf' in val: # special case to handle +/-inf
+ seen_float = 1
else:
ints[i] = <int64_t> fval
diff --git a/pandas/src/sparse.pyx b/pandas/src/sparse.pyx
index 19ff2df23599e..579d473cae1b3 100644
--- a/pandas/src/sparse.pyx
+++ b/pandas/src/sparse.pyx
@@ -987,10 +987,12 @@ cdef inline float64_t __rsub(float64_t a, float64_t b):
cdef inline float64_t __div(float64_t a, float64_t b):
if b == 0:
- if a >= 0:
+ if a > 0:
return INF
- else:
+ elif a < 0:
return -INF
+ else:
+ return NaN
else:
return a / b
@@ -999,10 +1001,12 @@ cdef inline float64_t __rdiv(float64_t a, float64_t b):
cdef inline float64_t __floordiv(float64_t a, float64_t b):
if b == 0:
- if a >= 0:
+ if a > 0:
return INF
- else:
+ elif a < 0:
return -INF
+ else:
+ return NaN
else:
return a // b
diff --git a/pandas/src/tseries.pyx b/pandas/src/tseries.pyx
index 54641a78a08d9..65250c27bfd57 100644
--- a/pandas/src/tseries.pyx
+++ b/pandas/src/tseries.pyx
@@ -178,6 +178,18 @@ cdef double INF = <double> np.inf
cdef double NEGINF = -INF
cpdef checknull(object val):
+ if util.is_float_object(val) or util.is_complex_object(val):
+ return val != val and val != INF and val != NEGINF
+ elif util.is_datetime64_object(val):
+ return get_datetime64_value(val) == NPY_NAT
+ elif isinstance(val, _NaT):
+ return True
+ elif is_array(val):
+ return False
+ else:
+ return util._checknull(val)
+
+cpdef checknull_old(object val):
if util.is_float_object(val) or util.is_complex_object(val):
return val != val or val == INF or val == NEGINF
elif util.is_datetime64_object(val):
@@ -189,6 +201,7 @@ cpdef checknull(object val):
else:
return util._checknull(val)
+
def isscalar(object val):
return np.isscalar(val) or val is None or isinstance(val, _Timestamp)
@@ -206,6 +219,19 @@ def isnullobj(ndarray[object] arr):
result[i] = util._checknull(arr[i])
return result.view(np.bool_)
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def isnullobj_old(ndarray[object] arr):
+ cdef Py_ssize_t i, n
+ cdef object val
+ cdef ndarray[uint8_t] result
+
+ n = len(arr)
+ result = np.zeros(n, dtype=np.uint8)
+ for i from 0 <= i < n:
+ result[i] = util._checknull_old(arr[i])
+ return result.view(np.bool_)
+
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -223,6 +249,22 @@ def isnullobj2d(ndarray[object, ndim=2] arr):
result[i, j] = 1
return result.view(np.bool_)
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def isnullobj2d_old(ndarray[object, ndim=2] arr):
+ cdef Py_ssize_t i, j, n, m
+ cdef object val
+ cdef ndarray[uint8_t, ndim=2] result
+
+ n, m = (<object> arr).shape
+ result = np.zeros((n, m), dtype=np.uint8)
+ for i from 0 <= i < n:
+ for j from 0 <= j < m:
+ val = arr[i, j]
+ if checknull_old(val):
+ result[i, j] = 1
+ return result.view(np.bool_)
+
def list_to_object_array(list obj):
'''
Convert list to object ndarray. Seriously can't believe I had to write this
diff --git a/pandas/src/util.pxd b/pandas/src/util.pxd
index fe6e4391c59e5..5d789e73973cc 100644
--- a/pandas/src/util.pxd
+++ b/pandas/src/util.pxd
@@ -60,6 +60,15 @@ cdef inline is_array(object o):
cdef inline bint _checknull(object val):
+ import numpy as np
+ cdef double INF = <double> np.inf
+ cdef double NEGINF = -INF
+ try:
+ return bool(val is None or (val != val and val != INF and val != NEGINF))
+ except ValueError:
+ return False
+
+cdef inline bint _checknull_old(object val):
try:
return bool(val is None or val != val)
except ValueError:
diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py
index e2b0b918f0142..753c6a721cd94 100644
--- a/pandas/tests/test_common.py
+++ b/pandas/tests/test_common.py
@@ -6,7 +6,7 @@
import unittest
from pandas import Series, DataFrame, date_range, DatetimeIndex
-from pandas.core.common import notnull, isnull
+from pandas.core.common import notnull, isnull, use_inf_as_null
import pandas.core.common as com
import pandas.util.testing as tm
@@ -18,9 +18,17 @@ def test_notnull():
assert notnull(1.)
assert not notnull(None)
assert not notnull(np.NaN)
+
+ use_inf_as_null(False)
+ assert notnull(np.inf)
+ assert notnull(-np.inf)
+
+ use_inf_as_null(True)
assert not notnull(np.inf)
assert not notnull(-np.inf)
+
+
float_series = Series(np.random.randn(5))
obj_series = Series(np.random.randn(5), dtype=object)
assert(isinstance(notnull(float_series), Series))
@@ -30,8 +38,8 @@ def test_isnull():
assert not isnull(1.)
assert isnull(None)
assert isnull(np.NaN)
- assert isnull(np.inf)
- assert isnull(-np.inf)
+ assert not isnull(np.inf)
+ assert not isnull(-np.inf)
float_series = Series(np.random.randn(5))
obj_series = Series(np.random.randn(5), dtype=object)
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index cf37de4294f3e..485557db93671 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -3382,6 +3382,36 @@ def test_to_csv_from_csv(self):
os.remove(path)
+ def test_to_csv_from_csv_w_some_infs(self):
+ path = '__tmp__'
+
+ # test roundtrip with inf, -inf, nan, as full columns and mix
+ self.frame['G'] = np.nan
+ self.frame['H'] = self.frame.index.map(lambda x: [np.inf, np.nan][np.random.rand() < .5])
+
+ self.frame.to_csv(path)
+ recons = DataFrame.from_csv(path)
+
+ assert_frame_equal(self.frame, recons)
+ assert_frame_equal(np.isinf(self.frame), np.isinf(recons))
+
+ os.remove(path)
+
+ def test_to_csv_from_csv_w_all_infs(self):
+ path = '__tmp__'
+
+ # test roundtrip with inf, -inf, nan, as full columns and mix
+ self.frame['E'] = np.inf
+ self.frame['F'] = -np.inf
+
+ self.frame.to_csv(path)
+ recons = DataFrame.from_csv(path)
+
+ assert_frame_equal(self.frame, recons)
+ assert_frame_equal(np.isinf(self.frame), np.isinf(recons))
+
+ os.remove(path)
+
def test_to_csv_multiindex(self):
path = '__tmp__'
diff --git a/pandas/tests/test_tseries.py b/pandas/tests/test_tseries.py
index 9061402bb6050..a8578f67f6cec 100644
--- a/pandas/tests/test_tseries.py
+++ b/pandas/tests/test_tseries.py
@@ -290,6 +290,16 @@ def test_convert_objects():
result = lib.maybe_convert_objects(arr)
assert(result.dtype == np.object_)
+def test_convert_infs():
+ arr = np.array(['inf', 'inf', 'inf'], dtype='O')
+ result = lib.maybe_convert_numeric(arr, set(), False)
+ assert(result.dtype == np.float64)
+
+ arr = np.array(['-inf', '-inf', '-inf'], dtype='O')
+ result = lib.maybe_convert_numeric(arr, set(), False)
+ assert(result.dtype == np.float64)
+
+
def test_convert_objects_ints():
# test that we can detect many kinds of integers
dtypes = ['i1', 'i2', 'i4', 'i8', 'u1', 'u2', 'u4', 'u8']
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 904426731738a..a15be23c1c4c6 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -103,8 +103,10 @@ def assert_almost_equal(a, b):
return
if isinstance(a, (bool, float, int)):
+ if np.isinf(a):
+ assert np.isinf(b), err_msg(a,b)
# case for zero
- if abs(a) < 1e-5:
+ elif abs(a) < 1e-5:
np.testing.assert_almost_equal(
a, b, decimal=5, err_msg=err_msg(a, b), verbose=False)
else:
| This branch makes inf/-inf different from nan/None (GH #1919), and also fixes a few bugs with INF and NaN values (GH #2026, #2041).
| https://api.github.com/repos/pandas-dev/pandas/pulls/2050 | 2012-10-09T17:58:45Z | 2012-12-02T03:14:36Z | 2012-12-02T03:14:36Z | 2014-06-20T15:11:02Z |
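The core of this branch is the predicate flip in `checknull` versus `checknull_old` in `tseries.pyx`. A pure-Python sketch of the two checks (function names here are illustrative; the real versions are cdef'd and also handle datetime64/NaT):

```python
import numpy as np

def checknull_new(val):
    # New default: only None and NaN (val != val) count as null;
    # +/-inf are treated as ordinary float values.
    return val is None or val != val

def checknull_old(val):
    # Legacy behaviour, restored by use_inf_as_null(True):
    # None, NaN, inf and -inf all count as null.
    return val is None or val != val or val in (np.inf, -np.inf)

probes = (np.nan, None, np.inf, -np.inf, 1.0)
new_flags = [checknull_new(v) for v in probes]
old_flags = [checknull_old(v) for v in probes]
```

The two differ only on the infinities, which is why the updated `test_isnull` asserts `not isnull(np.inf)` by default.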